# On Generalized Degree Fairness in Graph Neural Networks

Zemin Liu, Trung-Kien Nguyen, Yuan Fang
Published: 2023-02-08 | arXiv: http://arxiv.org/abs/2302.03881v2
###### Abstract
Conventional graph neural networks (GNNs) are often confronted with fairness issues that may stem from their input, including node attributes and neighbors surrounding a node. While several recent approaches have been proposed to eliminate the bias rooted in sensitive attributes, they ignore the other key input of GNNs, namely the neighbors of a node, which can introduce bias since GNNs hinge on neighborhood structures to generate node representations. In particular, the varying neighborhood structures across nodes, manifesting themselves in drastically different node degrees, give rise to the diverse behaviors of nodes and biased outcomes. In this paper, we first define and generalize the degree bias using a generalized definition of node degree as a manifestation and quantification of different multi-hop structures around different nodes. To address the bias in the context of node classification, we propose a novel GNN framework called Generalized Degree Fairness-centric Graph Neural Network (DegFairGNN). Specifically, in each GNN layer, we employ a learnable debiasing function to generate debiasing contexts, which modulate the layer-wise neighborhood aggregation to eliminate the degree bias originating from the diverse degrees among nodes. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our model on both accuracy and fairness metrics.
## 1 Introduction
Graph neural networks (GNNs) [18, 16, 23] have emerged as a powerful family of graph representation learning approaches. They typically adopt a message passing framework, in which each node on the graph aggregates information from its neighbors recursively in multiple layers, unifying both node attributes and structures to generate node representations. Despite their success, GNNs are often confronted with fairness issues stemming from input node _attributes_[13, 14]. To be more specific, certain sensitive attributes (_e.g._, gender, age or race) may trigger bias and cause unfairness in downstream applications. Naively removing the sensitive attributes does not adequately improve fairness, as the correlation between sensitive attributes and other attributes could still induce bias [2]. Thus, prior work on Euclidean data often employs additional regularizations or constraints to debias the model or post-process the predictions [1, 15]. Similar principles have been extended to graphs to eliminate bias rooted in sensitive node attributes [2, 16, 17, 18, 19]. For example, CFC [18] tries to debias by applying filters to the generated node representations, while others resort to adversarial learning to detect sensitive attributes [18, 19].
However, these approaches fail to account for the other key input of GNNs--the neighborhood structures of each node, which can also induce bias into node representations. Specifically, each node is linked to different neighbors, forming diverse neighborhood structures and manifesting drastically different node degrees. By virtue of the message passing framework, GNNs generate the representation of a node based on its neighborhood structures, so the representation is highly dependent on the abundance or scarcity of the node's neighbors. Hence, drastic differences in node degrees could lead to differential node behaviors and biased outcomes. In particular, while the "value" of a node is usually co-determined by its own attributes and its neighboring structure (_i.e._, "social capital"), the latter may systematically bias against newcomers regardless of their individual attributes, while established individuals passively accumulate more social capital, making it even harder for newcomers to compete. For example, on a social network involving NBA players [19], a player with more followers may command a higher salary than another player with comparable physical ability and skills due to his popularity among fans; in a citation network, a paper with more initial citations may attract even more future citations than another paper with comparable quality on the same topic. Generally, a node of larger degree is more likely to possess crucial advantages and thus receives more favorable outcomes than warranted. This reveals a notable issue of _degree fairness_, in which the degree-biased neighborhood structures can marginalize or even override the quality and competency of individual nodes. The goal of degree fairness is to achieve equitable
outcomes for nodes of different degrees, such that individuals of comparable competency should receive similar outcomes. More generally, we can also consider the degree bias stemming from multi-hop structures surrounding a node, using a _generalized_ definition of node degrees.
In this paper, we investigate the important problem of _degree fairness in graph neural networks_ in the context of node classification, which has not been explored to date. On one hand, simply employing neighborhood sampling cannot address this issue due to the potential correlation between node attributes and their structures, and may further result in information loss [22, 23, 24]. On the other hand, node degree is a manifestation of neighborhood structures, which is fundamentally different from a sensitive node attribute [23, 24, 25]. Particularly, in GNNs each node receives information from its neighbors, and thus nodes of diverse degrees can access varying amounts of information, giving rise to the degree bias in learnt representations. Moreover, the bias may be further amplified by multi-layered recursive neighborhood aggregation in GNNs. While existing debiasing approaches that directly filter node representations may be adequate for attribute-oriented fairness, they do not debias the core operation of neighborhood aggregation and thus cannot fundamentally alleviate the degree bias intrinsic to the diverse neighborhood structures.
Toward degree fairness, we first introduce generalized degree to quantify the abundance or scarcity of multi-hop contextual structures for each node, since in a broader sense the degree bias of a node stems not only from its one-hop neighborhood, but also its local context within a few hops. We further propose a novel generalized **Deg**ree **Fair**ness-centric **GNN** framework named DegFairGNN, which can work with any neighborhood aggregation-based GNN architecture. To fundamentally address degree fairness, we employ a learnable debiasing function to generate different _debiasing contexts_, which are meant to balance the structural contrast between nodes with high and low generalized degrees. Specifically, the debiasing contexts directly target the neighborhood aggregation operation in each GNN layer, aiming to complement the neighborhood of low-degree nodes, while distilling the neighborhood of high-degree nodes. As a result, the drastic differences in contextual structures across nodes can be balanced to reach a structurally fair state, enabling the generation of fair node representations.
To summarize, our contributions are three-fold. (1) We make the first attempt to define and address generalized degree fairness in GNNs. (2) We propose a novel generalized degree fairness-centric GNN framework named DegFairGNN that can flexibly work with neighborhood aggregation-based GNNs, to eliminate the generalized degree bias rooted in the layer-wise neighborhood aggregation. (3) Extensive experiments demonstrate that our proposed model is effective in both fairness and accuracy metrics.
## 2 Problem Formulation
In this section, we introduce the definition of generalized degree fairness on graphs and several related concepts.
**Graph.** A graph is given by \(\mathcal{G}=\{\mathcal{V},\mathcal{E},\mathbf{X}\}\), where \(\mathcal{V}\) is the set of nodes, \(\mathcal{E}\) is the set of edges, and \(\mathbf{X}\in\mathbb{R}^{|\mathcal{V}|\times d_{X}}\) is the node attribute matrix with \(d_{X}\) equalling the number of attributes. Equivalently, \(\mathbf{x}_{v}\in\mathbb{R}^{d_{X}}\) is the attribute vector of node \(v\). Let \(\mathbf{A}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}\) denote the corresponding adjacency matrix of \(\mathcal{G}\).
**Generalized degree.** In a broader sense, the degree bias on a node stems not only from its (one-hop) neighborhood, but also its local context within a certain number of hops. Formally, the \(r\)-hop _local context_ of node \(v\) is defined as \(\mathcal{C}_{r}(v)=\{v^{\prime}\in\mathcal{V}\mid d(v,v^{\prime})\leq r\}\), _i.e._, the set of nodes reachable from \(v\) in at most \(r\) steps, where \(d(\cdot,\cdot)\) is the shortest-path distance between two nodes on the graph. Subsequently, we introduce _generalized degree_ to quantify the abundance of the multi-hop local context surrounding a node. Given a node \(v\), we define its \(r\)-hop generalized degree \(\deg_{r}(v)\in\mathbb{R}\) as
\[\deg_{r}(v)=[\mathbf{A}^{r}\mathbf{1}]_{v}, \tag{1}\]
where \(r\in\mathbb{N}^{+}\) is the number of hops, \(\mathbf{1}\in\mathbb{R}^{|\mathcal{V}|}\) is a column vector filled with ones, and \([\mathbf{x}]_{i}\) denotes the \(i\)-th entry in vector \(\mathbf{x}\). Here \([\mathbf{A}^{r}]_{i,j}\), the \((i,j)\) entry in the matrix \(\mathbf{A}^{r}\), gives the number of walks of length \(r\) from node \(i\) to node \(j\). Thus, \(\deg_{r}(v)\) represents the total number of (non-distinct) nodes reachable from \(v\) in \(r\) hops, a straightforward measure of the abundance of \(v\)'s \(r\)-hop local context. As a special case, \(\deg_{1}(v)\) is simply the (1-hop) degree of \(v\).
**Generalized degree fairness.** Generalized degree fairness amounts to node representations that are free from bias induced by varying generalized degrees. In the context of multi-class node classification, we formalize two fairness definitions in terms of statistical parity [1] and equal opportunity [1]. Both definitions rely on the notion of generalized degree groups, in which we divide the nodes into \(m\) mutually exclusive groups \(\mathcal{G}_{1},\ldots,\mathcal{G}_{m}\) so that
\[\mathcal{G}_{i}=\{v\in\mathcal{V}\mid d_{i}\leq\deg_{r}(v)<d_{i+1}\}, \tag{2}\]
where \(d_{1}<d_{2}<\ldots<d_{m+1}\) are a series of degree boundaries between the groups, and \(d_{1}=\min_{v\in\mathcal{V}}\deg_{r}(v)\) and \(d_{m+1}=\max_{v\in\mathcal{V}}\deg_{r}(v)+1\). Furthermore, let \(\mathcal{Y}\) be the set of classes. For ease of discussion, hereafter we may simply call "generalized degree" as "degree".
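The two definitions above translate directly into code. Below is a minimal NumPy sketch of Eq. (1) and the group partition in Eq. (2); the dense adjacency matrix, the toy graph, and the boundary choice are illustrative assumptions (a sparse matrix would be preferable on real graphs).

```python
import numpy as np

def generalized_degree(A: np.ndarray, r: int) -> np.ndarray:
    """Eq. (1): deg_r(v) = [A^r 1]_v, the number of length-r walks starting at each node."""
    ones = np.ones(A.shape[0])
    return np.linalg.matrix_power(A, r) @ ones

def degree_groups(deg_r: np.ndarray, boundaries: list) -> list:
    """Eq. (2): partition nodes into groups G_i with d_i <= deg_r(v) < d_{i+1}."""
    return [np.where((deg_r >= lo) & (deg_r < hi))[0]
            for lo, hi in zip(boundaries[:-1], boundaries[1:])]

# Toy usage: a 4-node path graph, 2-hop generalized degree, two groups split at the mean.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg2 = generalized_degree(A, r=2)                                   # [2., 3., 3., 2.]
groups = degree_groups(deg2, [deg2.min(), deg2.mean(), deg2.max() + 1])
```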
To achieve _Degree Statistical Parity_ (DSP), we require the class predictions to be independent of node degree groups. That is, for any class \(y\in\mathcal{Y}\) and any two groups \(\mathcal{G}_{i}\) and \(\mathcal{G}_{j}\),
\[P(\hat{y}_{v}=y|v\in\mathcal{G}_{i})=P(\hat{y}_{v}=y|v\in\mathcal{G}_{j}), \tag{3}\]
where \(\hat{y}_{v}\) is the predicted class of \(v\). To achieve _Degree Equal Opportunity_ (DEO), we require the probability of each node being predicted into its true class \(y\) to be equal for nodes in different degree groups. That is,
\[P(\hat{y}_{v}=y|y_{v}=y,v\in\mathcal{G}_{i})=P(\hat{y}_{v}=y|y_{v}=y,v\in \mathcal{G}_{j}),\]
where \(y_{v}\) is the true class of \(v\).
Our degree fairness is defined in a group-wise manner, which is in line with existing fairness definitions [19, 19] and offers a flexible setup. On the one hand, a simple and practical scenario could involve just two groups, since the degree fairness issue is typically the most serious between nodes with the smallest and largest degrees. On the other hand, the most fine-grained setup would let each unique degree value form its own group, but it is not ideal for two reasons. First, some groups may have very few nodes depending on the degree distribution, leading to large variance. Second, the smoothness of degrees is lost: nodes of similar degrees are expected to have rather similar advantages or disadvantages compared to nodes of very different degrees.
## 3 Proposed Model: DegFairGNN
In this section, we introduce the proposed DegFairGNN, starting with some preliminaries on GNNs.
### Preliminaries on GNNs
Modern GNNs usually resort to multi-layer neighborhood aggregation, in which each node recursively aggregates information from its neighbors. Specifically, in the \(l\)-th layer the representation of node \(v\), \(\mathbf{h}_{v}^{l}\in\mathbb{R}^{d_{l}}\), is constructed as
\[\mathbf{h}_{v}^{l}=\sigma\left(\textsc{Aggr}\left(\{\mathbf{h}_{u}^{l-1}\mid u \in\mathcal{N}_{v}\};\omega^{l}\right)\right), \tag{4}\]
where \(d_{l}\) is the dimension of node representations in the \(l\)-th layer, \(\textsc{Aggr}(\cdot)\) denotes an aggregation function such as mean-pooling [13] or self-attention [10], \(\sigma\) is an activation function, \(\mathcal{N}_{v}\) denotes the set of neighbors of \(v\), and \(\omega^{l}\) denotes the learnable parameters in layer \(l\). Node representations in the input layer are given by the input node attributes, _i.e._, \(\mathbf{h}_{v}^{0}\equiv\mathbf{x}_{v}\).
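As a concrete instance of Eq. (4), the following PyTorch layer uses mean-pooling aggregation over one-hop neighbors; the dense adjacency input, the linear map standing in for \(\omega^{l}\), and the ReLU activation are simplifying assumptions rather than a prescribed architecture.

```python
import torch
import torch.nn as nn

class MeanAggrLayer(nn.Module):
    """One layer of Eq. (4): h_v^l = sigma(Aggr({h_u^{l-1} : u in N_v}; omega^l)),
    with Aggr instantiated as mean-pooling followed by a linear map."""
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)      # the learnable parameters omega^l

    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        h_nbr = (A @ H) / deg                  # mean over one-hop neighbors
        return torch.relu(self.lin(h_nbr))     # sigma chosen as ReLU here
```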
### Overall Framework
The overall framework of DegFairGNN is illustrated in Fig. 1. We first design a strategy of _structural contrast_ in Fig. 1(b), to split nodes into two groups by their degree. The two groups contrast each other to facilitate the learning of a _debiasing function_, which generates node-specific _debiasing contexts_ to modulate neighborhood aggregation, as shown in Fig. 1(c). More specifically, the modulation aims to distill high-degree nodes to remove bias, and complement low-degree nodes to enrich their degree-biased neighborhood structures. Thus, by trying to balance the two groups, the modulation is able to debias neighborhood aggregation. Finally, the debiasing function and GNN are jointly optimized by the task loss and the fairness loss in Fig. 1(d).
### Structural Contrast
To debias nodes with varying contextual structures, we propose to learn the debiasing function by contrasting between two groups of nodes, namely, low-degree nodes \(\mathcal{S}_{0}\) and high-degree nodes \(\mathcal{S}_{1}\), as illustrated in Fig. 1(b). Note that in the layer-wise neighborhood aggregation in Eq. (4), each node can only access its one-hop local contexts in each layer. Thus, it is a natural choice to construct the two groups based on one-hop degree: \(\mathcal{S}_{0}=\{v\in\mathcal{V}\mid\deg_{1}(v)\leqslant K\}\) and \(\mathcal{S}_{1}=\mathcal{V}\setminus\mathcal{S}_{0}\). That is, \(\mathcal{S}_{0}\) is the low-degree group, \(\mathcal{S}_{1}\) is the high-degree group, and \(K\) is a threshold hyperparameter. Note that having two groups can provide the most significant contrast for model training.
To contrast between the two groups, nodes in the same group would share the learnable parameters for the debiasing mechanism, whereas nodes across different groups would have different parameters. This strategy enables the two groups to eliminate the degree bias in different ways, which is desirable given their different degrees.
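The group split itself is a one-liner; a sketch in PyTorch follows, with \(K\) set to the mean degree as in our default experimental setting (Sect. 4). The resulting masks select the group-specific debiasing parameters introduced in the next subsection.

```python
import torch

def structural_contrast_split(deg1: torch.Tensor, K: float):
    """Split nodes into the low-degree group S_0 = {v : deg_1(v) <= K}
    and the high-degree group S_1 = V minus S_0."""
    low_mask = deg1 <= K
    return low_mask, ~low_mask

# e.g., low_mask, high_mask = structural_contrast_split(deg1, K=deg1.float().mean().item())
```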
### Debiasing Neighborhood Aggregation
To fundamentally eliminate degree bias, we propose to debias neighborhood aggregation, the key operation in GNNs in which degree bias is rooted. As shown in Fig. 1(c), we leverage a learnable debiasing function to generate the debiasing contexts, which modulate the neighborhood aggregation for each node and in each layer of the GNN encoder.
**Debiasing function.** More concretely, we leverage a debiasing function \(\mathcal{D}(\cdot)\) with learnable parameters to fit each group, due to the divergence between the two groups in terms of node degree. On one hand, for a low-degree node \(u\in\mathcal{S}_{0}\), \(\mathcal{D}(u;\theta_{0}^{l})\in\mathbb{R}^{d_{l}}\) outputs the debiasing context for \(u\) in layer \(l\) to complement the neighborhood of \(u\). On the other hand, for a high-degree node \(v\in\mathcal{S}_{1}\), \(\mathcal{D}(v;\theta_{1}^{l})\in\mathbb{R}^{d_{l}}\) outputs the debiasing context for \(v\) in layer \(l\) to distill the neighborhood of \(v\). Note that each group \(\mathcal{S}_{*}\) has its own parameters \(\theta_{*}^{l}\) for \(\mathcal{D}(\cdot)\) in layer \(l\). The learning is guided by a _fairness loss_ in the overall objective (see Sect. 3.5), which drives the debiasing contexts toward distilling information on the high-degree nodes while complementing information on the low-degree nodes, in order to achieve the balance and fairness between the two groups.
Figure 1: Overall framework of DegFairGNN.
The debiasing function is designed to generate a debiasing context for each node in each layer, to modulate the neighborhood aggregation in a node- and layer-wise manner. To achieve ideal modulations, the debiasing contexts should be _comprehensive_ and _adaptive_.
_Comprehensiveness._ First, debiasing contexts need to be comprehensive, to account for both the content and structure information in the neighborhood. To be comprehensive, we resort to the _context embedding_ of node \(v\) in layer \(l\), denoted \(\mathbf{c}_{v}^{l}\in\mathbb{R}^{d_{l-1}}\), which can be calculated as
\[\mathbf{c}_{v}^{l}=\textsc{Pool}(\{\mathbf{h}_{u}^{l-1}\mid u\in\mathcal{C}_{ r}(v)\}), \tag{5}\]
where \(\textsc{Pool}(\cdot)\) is a pooling function. Here we use a simple mean pooling, although it can also be made learnable. The context embedding \(\mathbf{c}_{v}^{l}\) aggregates the layer-(\(l\)-1) contents in node \(v\)'s \(r\)-hop local context \(\mathcal{C}_{r}(v)\), and thus embodies both the content and structure information. The debiasing function is then a function of the context embedding, _i.e._,
\[\mathcal{D}(v;\theta_{*}^{l})=f(\mathbf{c}_{v}^{l};\theta_{c,*}^{l}), \tag{6}\]
which is parameterized by \(\theta_{c,*}^{l}\), where \(*\) is \(0\) or \(1\) depending on the node group. We use a fully connected layer as \(f\).
_Adaptiveness._ Second, debiasing contexts need to be _adaptive_, to sufficiently customize to each node. While the two groups of nodes, \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\), are already differentiated by group-specific parameters for the debiasing function, the differentiation is coarse-grained and cannot sufficiently adapt to individual nodes. In particular, even for nodes in the same group, their degrees still vary, motivating the need for finer-grained node-wise adaptation. However, letting each node have node-specific parameters is not feasible due to a much larger model size, which tends to cause overfitting and scalability issues. For fine-grained adaptation without blowing up the model size, inspired by hypernetworks [11, 12, 13], we adopt a secondary neural network to generate node-wise transformations on the debiasing contexts in each layer. Concretely, the debiasing function can be reformulated as
\[\mathcal{D}(v;\theta_{*}^{l})=(\gamma_{v}^{l}+\mathbf{1})\odot f(\mathbf{c}_{v }^{l};\theta_{c,*}^{l})+\beta_{v}^{l}, \tag{7}\]
where \(\gamma_{v}^{l}\) and \(\beta_{v}^{l}\in\mathbb{R}^{d_{l}}\) are node-specific scaling and shifting operators in layer \(l\), respectively. Here \(\odot\) denotes element-wise multiplication, and \(\mathbf{1}\) is a vector of ones to ensure that the scaling is centered around one. Note that \(\gamma_{v}^{l}\) and \(\beta_{v}^{l}\in\mathbb{R}^{d_{l}}\) are not directly learnable, but are respectively generated by a secondary network conditioned on each node's degree. Specifically,
\[\gamma_{v}^{l}=\phi_{\gamma}(\delta^{l}(v);\theta_{\gamma}^{l}),\quad\beta_{v }^{l}=\phi_{\beta}(\delta^{l}(v);\theta_{\beta}^{l}), \tag{8}\]
where \(\phi_{\gamma}\) and \(\phi_{\beta}\) can be any neural network, and we simply use a fully connected layer. The input to these secondary networks, \(\delta^{l}(v)\), is the _degree encoding_ of \(v\) to condition the transformations on the degree of \(v\), which we will elaborate later. Thus, the learnable parameters of the debiasing function in Eq. (7) become \(\theta_{*}^{l}=\{\theta_{c,*}^{l},\theta_{\gamma}^{l},\theta_{\beta}^{l}\}\) in layer \(l\), and \(\theta_{*}=\{\theta_{*}^{1},\theta_{*}^{2},\ldots\}\) denotes the full set of learnable parameters of the debiasing function for nodes in group \(\mathcal{S}_{*}\).
Finally, the input to the secondary network in Eq. (8) is the degree encoding \(\delta^{l}(v)\), instead of the degree of \(v\) itself. The reason is that degree is an ordinal variable, implicating that degree-conditioned functions should be smooth w.r.t. small changes in degree. In other words, nodes with similar degrees should undergo similar transformations, and vice versa. In light of this, inspired by positional encoding [10], we propose to encode the degree of a node \(v\) in layer \(l\) into a vector \(\delta^{l}(v)\in\mathbb{R}^{d_{l}}\), such that \([\delta^{l}(v)]_{2i}=\sin(\deg_{1}(v)/10000^{2i/d_{l}})\) and \([\delta^{l}(v)]_{2i+1}=\cos(\deg_{1}(v)/10000^{2i/d_{l}})\). Note that, although nodes at the periphery of the two groups \(\mathcal{S}_{0}\) and \(\mathcal{S}_{1}\) (_e.g._, nodes with degrees \(K\) and \(K+1\)) have different debiasing parameters (\(\theta_{c,0}\) and \(\theta_{c,1}\)), the usage of degree encoding in Eqs. (7) and (8) will assimilate their debiasing contexts to some extent given their similar degrees.
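The degree encoding can be implemented exactly as in positional encoding. A minimal PyTorch sketch, assuming an even dimension \(d_{l}\):

```python
import torch

def degree_encoding(deg1: torch.Tensor, d_l: int) -> torch.Tensor:
    """Sinusoidal degree encoding delta^l(v): even entries use sin, odd entries use cos,
    so nodes with similar degrees receive similar encodings (d_l assumed even)."""
    i = torch.arange(d_l // 2, dtype=torch.float32)
    angles = deg1.float().unsqueeze(1) / (10000.0 ** (2 * i / d_l))  # shape (|V|, d_l/2)
    enc = torch.zeros(deg1.shape[0], d_l)
    enc[:, 0::2] = torch.sin(angles)
    enc[:, 1::2] = torch.cos(angles)
    return enc
```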
**Modulated GNN encoder.** Given the debiasing contexts, we couple them with the standard neighborhood aggregation in Eq. (4), and modulate neighborhood aggregation for the two groups differently. Specifically, in layer \(l\),
\[\mathbf{h}_{v}^{l}=\sigma\bigg{(}\textsc{Aggr}(\{\mathbf{h}_{u}^{l -1}\mid u\in\mathcal{N}_{v}\};\omega^{l})\\ +\epsilon\cdot\big{(}\underbrace{I(v\in\mathcal{S}_{0})\mathcal{ D}(v;\theta_{0}^{l})}_{\text{complement low-deg. group}}+\underbrace{I(v\in\mathcal{S}_{1})\mathcal{D}(v;\theta_{1}^{l})}_{\text{ distill high-deg. group}}\big{)}\bigg{)}, \tag{9}\]
where \(I(\cdot)\) is a 0/1 indicator function based on the truth value of its argument, and \(\epsilon>0\) is a hyperparameter to control the impact of the debiasing contexts.
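Putting Eqs. (5)-(9) together, the sketch below shows one modulated layer in PyTorch. The precomputed \(r\)-hop context pooling `C_pool`, the row-normalized adjacency, the linear aggregator, and the ReLU activation are illustrative assumptions; `low_mask` marks nodes in \(\mathcal{S}_{0}\), and `delta` holds the degree encodings \(\delta^{l}(v)\).

```python
import torch
import torch.nn as nn

class DebiasedGNNLayer(nn.Module):
    """One modulated layer implementing Eqs. (5)-(9)."""
    def __init__(self, d_in: int, d_out: int, eps: float = 0.1):
        super().__init__()
        self.aggr = nn.Linear(d_in, d_out)        # Aggr parameters omega^l
        self.f0 = nn.Linear(d_in, d_out)          # theta_{c,0}^l (low-degree group)
        self.f1 = nn.Linear(d_in, d_out)          # theta_{c,1}^l (high-degree group)
        self.phi_gamma = nn.Linear(d_out, d_out)  # generates gamma_v^l, Eq. (8)
        self.phi_beta = nn.Linear(d_out, d_out)   # generates beta_v^l,  Eq. (8)
        self.eps = eps                            # epsilon in Eq. (9)

    def forward(self, H, A_norm, C_pool, delta, low_mask):
        gamma, beta = self.phi_gamma(delta), self.phi_beta(delta)
        # Group-specific debiasing function f applied to context embeddings, Eq. (6).
        ctx = torch.where(low_mask.unsqueeze(1), self.f0(C_pool), self.f1(C_pool))
        D = (gamma + 1.0) * ctx + beta            # debiasing context, Eq. (7)
        return torch.relu(self.aggr(A_norm @ H) + self.eps * D)  # Eq. (9)
```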
### Training Constraints and Objective
We focus on the task of node classification. Apart from the classification loss and the fairness loss associated with the classification, we further introduce several useful constraints to improve the training process, as shown in Fig. 1(d).
**Classification loss.** For node classification, the last layer of the GNN typically sets the dimension to the number of classes, and uses a softmax activation such that the \(i\)-th output dimension is the probability of class \(i\). With a total of \(\ell\) layers, the output node representations \(\{\mathbf{h}_{v}^{\ell}:v\in\mathcal{V}^{\text{tr}}\}\) of the training nodes \(\mathcal{V}^{\text{tr}}\) can be fed into the cross-entropy loss,
\[\mathcal{L}_{1}=-\sum_{v\in\mathcal{V}^{\text{tr}}}\sum_{y=1}^{|\mathcal{Y}|}[\mathbf{y}_{v}]_{y}\ln[\mathbf{h}_{v}^{\ell}]_{y}, \tag{10}\]
where \(\mathbf{y}_{v}\) is the one-hot vector of \(v\)'s class.
**Fairness loss.** As the proposed debiasing function in Eq. (7) is learnable, it should be driven toward fair representations. Thus, we further employ a fairness loss on the training data. Let \(\mathcal{S}_{0}^{\text{tr}}\) and \(\mathcal{S}_{1}^{\text{tr}}\) denote the group of low- and high-degree nodes in the training data, respectively. We use a DSP-based loss, trying to achieve parity in the predicted probabilities for the two groups, as follows.
\[\mathcal{L}_{2}=\Big{\|}\frac{1}{|\mathcal{S}_{0}^{\text{tr}}|}\sum_{v\in \mathcal{S}_{0}^{\text{tr}}}\mathbf{h}_{v}^{\ell}-\frac{1}{|\mathcal{S}_{1}^{ \text{tr}}|}\sum_{v\in\mathcal{S}_{1}^{\text{tr}}}\mathbf{h}_{v}^{\ell}\Big{\|}_ {2}^{2}, \tag{11}\]
where \(\|\cdot\|_{2}^{2}\) denotes the squared \(L_{2}\) norm. This fairness loss drives the learnable debiasing function toward the fairness metric (_e.g._,
DSP here) to guide the training of DegFairGNN, by constraining the debiasing function to learn how to distill information on high-degree nodes and complement information on low-degree nodes, respectively. Besides, it is also possible to apply DEO. Note that, the fairness loss aims to constrain the prediction distribution to be similar across the two groups, yet does not require the node representations to be similar. Thus, DegFairGNN would not worsen the over-smoothing phenomenon [22].
**Constraints on debiasing contexts.** For a low-degree node \(u\), its debiasing context \(\mathcal{D}(u;\theta_{0}^{l})\) aims to complement but not distill its neighborhood. On the contrary, for a high-degree node \(v\), its debiasing context \(\mathcal{D}(v;\theta_{1}^{l})\) aims to distill but not complement its neighborhood. Thus, both \(\mathcal{D}(\cdot;\theta_{1}^{l})\) for low-degree nodes in \(\mathcal{S}_{0}\) and \(\mathcal{D}(\cdot;\theta_{0}^{l})\) for high-degree nodes in \(\mathcal{S}_{1}\) should be close to zero. The two constraints promote the learning of debiasing contexts by contrasting between the two groups, which can be formulated as the following loss.
\[\mathcal{L}_{3}=\sum_{l=1}^{\ell}\bigg{(}\sum_{v\in\mathcal{S}_{0}^{\text{tr}}}\|\mathcal{D}(v;\theta_{1}^{l})\|_{2}^{2}+\sum_{v\in\mathcal{S}_{1}^{\text{tr}}}\|\mathcal{D}(v;\theta_{0}^{l})\|_{2}^{2}\bigg{)}. \tag{12}\]
**Constraints on scaling and shifting.** To prevent overfitting the data with arbitrarily large scaling and shifting, we further consider the loss below to restrict the search space.
\[\mathcal{L}_{4}=\sum_{l=1}^{\ell}\sum_{v\in\mathcal{V}^{\text{tr}}}(\|\gamma_{v}^{l}\|_{2}^{2}+\|\beta_{v}^{l}\|_{2}^{2}). \tag{13}\]
**Overall loss.** By combining all the above loss terms, we formulate the overall loss as
\[\mathcal{L}=\mathcal{L}_{1}+\mu\mathcal{L}_{2}+\lambda(\mathcal{L}_{3}+ \mathcal{L}_{4}), \tag{14}\]
where \(\mu,\lambda\) are hyper-parameters. In Appendix A, we outline the training procedure and give a complexity analysis.
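For clarity, the sketch below assembles the four loss terms in PyTorch (up to averaging constants). Here `probs` denotes the softmax outputs \(\mathbf{h}_{v}^{\ell}\); the tensors collecting the cross-group debiasing contexts of Eq. (12) and the modulation operators of Eq. (13) are assumed to be gathered during the forward pass, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def overall_loss(probs, y, train_mask, low_tr, high_tr, D_cross, gammas, betas, mu, lam):
    """Eq. (14): L = L1 + mu * L2 + lambda * (L3 + L4)."""
    # L1, Eq. (10): cross-entropy on training nodes (mean instead of sum).
    L1 = F.nll_loss(torch.log(probs[train_mask] + 1e-12), y[train_mask])
    # L2, Eq. (11): squared gap between group-averaged prediction distributions.
    L2 = ((probs[low_tr].mean(0) - probs[high_tr].mean(0)) ** 2).sum()
    # L3, Eq. (12): cross-group debiasing contexts should be close to zero.
    L3 = sum((d ** 2).sum() for d in D_cross)
    # L4, Eq. (13): keep the scaling/shifting operators small.
    L4 = sum((g ** 2).sum() + (b ** 2).sum() for g, b in zip(gammas, betas))
    return L1 + mu * L2 + lam * (L3 + L4)
```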
## 4 Experiments
In this section, we evaluate the proposed DegFairGNN in terms of both accuracy and fairness.
### Experimental Setup
**Datasets.** We use two Wikipedia networks, _Chameleon_ and _Squirrel_[2], in which each node represents a Wikipedia page, and each edge denotes a reference between two pages in either direction. We split the nodes into five categories w.r.t. their traffic volume for classification. We also use a citation network _EMNLP_[15], in which each node denotes a paper published in the conference, and each edge denotes two papers that have been co-cited. We split the nodes into two categories w.r.t. their out-of-EMNLP citation count for classification. We summarize the datasets in Table 1, and present more details in Appendix B. Note that both traffic volumes and citation counts can be deemed individual benefits of the nodes that tend to be biased by node degree, motivating our choice of these datasets. In these contexts, a fair GNN should predict the benefit outcome independently of the degree groups, focusing on the relevance and quality of the nodes rather than their existing links.
**Base GNNs.** Our proposed DegFairGNN can work with different GNN backbones. We employ GCN [23] as the default base GNN in our experiments, and name the corresponding fairness model DegFairGCN. We also adopt two other base GNNs, GAT [2] and GraphSAGE [10], as described in Appendix C.
**Baselines.** We consider the following two categories of baselines. (1) _Degree-specific models_: DSGCN [19], Residual2Vec [11] and Tail-GNN [12]. They employ degree-specific operations on the nodes w.r.t. their degrees, to improve task accuracy especially for the low-degree nodes. (2) _Fairness-aware models_: FairWalk [12], CFC [13], FairGNN [12], FairAdj [14] and FairVGNN [15]. They are proposed to address the sensitive attribute-based fairness of nodes on graphs. To apply them to degree fairness, we define the generalized node degree as the sensitive attribute. However, even so, these fairness-aware baselines do not debias the neighborhood aggregation mechanism in each GNN layer, and thus fall short of addressing the degree bias from the root. More details of the baselines are given in Appendix D.
**Data split and parameters.** For all the datasets, we randomly split the nodes into training, validation and test set with proportion 6:2:2. We set the threshold \(K\) for the structural contrast in Sect. 3.3 as the mean node degree by default. We further analyze the impact of \(K\) on both accuracy and fairness in Appendix G.2. For other hyper-parameter settings, please refer to Appendix E.
**Evaluation.** For model _performance_, we evaluate the node classification accuracy on the test set.
For model _fairness_, the group-wise DSP and DEO (Sect. 2) essentially require that the outcomes are independent of the degree groups. In particular, we form two groups from the test set: \(\mathcal{G}_{0}\) and \(\mathcal{G}_{1}\), containing test nodes with low and high generalized degrees, respectively. A two-group granularity is a first major step toward degree fairness (resource-poor group _vs._ resource-rich group), where the fairness issue is the most serious. As degree usually follows a long-tailed distribution [12], based on the Pareto principle [13] we select the top 20% test nodes by generalized degree as \(\mathcal{G}_{1}\), and the bottom 20% nodes as \(\mathcal{G}_{0}\), which may present more prominent biases. To thoroughly evaluate the fairness, we also test on alternative groups with top and bottom 30% nodes. Moreover, generalized degree fairness is defined w.r.t. a pre-determined number of hops \(r\). We employ \(r=1\) as the default, and further report the evaluation with \(r=2\). We only select \(r\leq 2\) for evaluation as GNNs usually have shallow layers, which implies a small \(r\) can sufficiently cover the contextual structures where biases are rooted.
\begin{table}
\begin{tabular}{l|r r r r} \hline \hline Dataset & Nodes & Edges & Features & Classes \\ \hline Chameleon & 2,277 & 31,371 & 2,325 & 5 (traffic volume) \\ Squirrel & 5,201 & 198,353 & 2,089 & 5 (traffic volume) \\ EMNLP & 2,600 & 7,969 & 8 & 2 (citation count) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of datasets.
Hence, we adopt the following metrics \(\Delta_{\text{DSP}}\) and \(\Delta_{\text{DEO}}\), which evaluate the mean difference between the prediction distributions of the two groups (\(\mathcal{G}_{1}\) and \(\mathcal{G}_{0}\)) in the test set. For both metrics, a smaller value implies better fairness.
\[\Delta_{\text{DSP}}=\frac{1}{|\mathcal{Y}|}\sum_{y\in\mathcal{Y}}\big{|}P(\hat{y}_{v}=y|v\in\mathcal{G}_{0})-P(\hat{y}_{v}=y|v\in\mathcal{G}_{1})\big{|}, \tag{15}\]
\[\Delta_{\text{DEO}}=\frac{1}{|\mathcal{Y}|}\sum_{y\in\mathcal{Y}}\big{|}P(\hat{y}_{v}=y|y_{v}=y,v\in\mathcal{G}_{0})-P(\hat{y}_{v}=y|y_{v}=y,v\in\mathcal{G}_{1})\big{|}. \tag{16}\]
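A sketch of Eqs. (15)-(16) in NumPy; the guard against classes absent from a group is a practical assumption not spelled out in the equations, and all names are illustrative.

```python
import numpy as np

def fairness_metrics(y_pred, y_true, g0, g1, num_classes):
    """Delta_DSP / Delta_DEO: mean absolute gaps in class-prediction rates and
    true-class accuracy between the low-degree (g0) and high-degree (g1) groups."""
    dsp = deo = 0.0
    for y in range(num_classes):
        dsp += abs(np.mean(y_pred[g0] == y) - np.mean(y_pred[g1] == y))
        t0, t1 = g0[y_true[g0] == y], g1[y_true[g1] == y]
        if len(t0) and len(t1):                  # skip classes absent from a group
            deo += abs(np.mean(y_pred[t0] == y) - np.mean(y_pred[t1] == y))
    return dsp / num_classes, deo / num_classes
```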
### Model Accuracy and Fairness
We first evaluate our model using the default base GNN (_i.e._, GCN) and fairness settings, and further supplement it with additional fairness settings and base GNNs.
**Main evaluation.** Using default fairness settings, we first evaluate DegFairGCN (_i.e._, on a GCN backbone) against the baselines in Table 2, and make the following observations.
Firstly, DegFairGCN consistently outperforms the baselines in both fairness metrics, while achieving a comparable classification accuracy, noting that there usually exists a trade-off between fairness and accuracy [19, 10]. The only exception is on the Squirrel dataset, where FairWalk obtains the best \(\Delta_{\text{DSP}}\) at the expense of a significant degradation in accuracy, while DegFairGCN remains a close runner-up ahead of other baselines. Secondly, degree-specific models DSGCN, Residual2Vec and Tail-GNN typically have worse fairness than fairness-aware models, since they exploit the structural difference between nodes for model performance, rather than to eliminate the degree bias for model fairness. Thirdly, the advantage of DegFairGCN over existing fairness-aware models including FairWalk, CFC, FairGNN, FairAdj and FairVGNN implies that degree fairness needs to be addressed at the root, by debiasing the core operation of layer-wise neighborhood aggregation. Merely treating degree as a sensitive attribute cannot fundamentally alleviate the structural degree bias. Interestingly, GCN can generally achieve comparable and sometimes even better fairness than these methods, which again shows that degree fairness cannot be sufficiently addressed without directly debiasing neighborhood aggregation.
**Additional fairness settings.** We further evaluate fairness using different test groups, _i.e._, \(r=2\) with 20% top/bottom in Table 3 and \(r=1\) with 30% top/bottom in Table 4. Note that, accuracy evaluation is applied on the whole test set regardless of the groups, so the accuracies are identical to those reported in Table 2. Here, we compare with two representative baselines, and observe that, even with different test groups, our proposed DegFairGNN can generally outperform the baselines in terms of degree fairness. Similar to Table 2, FairWalk can perform better in fairness metrics in Table 4 at the expense of significantly worse accuracy.
**Additional base GNNs.** In addition to the default base GCN, we further experiment with other base GNNs, namely, GAT and GraphSAGE, and form two new models, _i.e._, DegFairGAT and DegFairSAGE, respectively. Their performance under the default fairness settings is reported in Table 5, while the other settings (_i.e._, \(r=2\), 20% Top/Bottom, and \(r=1\), 30% Top/Bottom) can be found in Appendix G.1.
Altogether, the three base GNNs employ different neighborhood aggregations, _e.g._, mean or attention-based mechanisms, but none of them employs a fairness-aware aggregation. The results show that our models can generally outperform their corresponding base GNN models across the three datasets in terms of fairness, demonstrating the flexibility of DegFairGNN when working with different GNN backbones.
\begin{table}
\begin{tabular}{c||c||c|c|c|c} \hline
 & & GCN & FairWalk & FairGNN & DegFairGCN \\ \hline \hline
\multirow{3}{*}{Cham.} & Acc.\(\uparrow\) & 62.45 \(\pm\) 0.21 & 56.36 \(\pm\) 0.75 & 70.70 \(\pm\) 0.52 & 69.91 \(\pm\) 0.19 \\
 & \(\Delta_{\text{DSP}}\downarrow\) & 5.96 \(\pm\) 0.89 & 10.38 \(\pm\) 0.85 & 6.70 \(\pm\) 0.32 & **5.25** \(\pm\) 0.39 \\
 & \(\Delta_{\text{DEO}}\downarrow\) & 26.92 \(\pm\) 2.09 & 25.46 \(\pm\) 1.66 & 23.66 \(\pm\) 0.93 & **19.05** \(\pm\) 0.74 \\ \hline
\multirow{3}{*}{Squirrel} & Acc.\(\uparrow\) & 47.85 \(\pm\) 1.33 & 37.68 \(\pm\) 0.65 & 57.29 \(\pm\) 0.77 & 59.21 \(\pm\) 0.97 \\
 & \(\Delta_{\text{DSP}}\downarrow\) & 10.34 \(\pm\) 2.15 & **6.17** \(\pm\) 0.36 & 9.27 \(\pm\) 0.68 & 7.39 \(\pm\) 0.63 \\
 & \(\Delta_{\text{DEO}}\downarrow\) & 22.62 \(\pm\) 3.10 & **14.97** \(\pm\) 1.12 & 17.42 \(\pm\) 1.11 & 17.71 \(\pm\) 1.05 \\ \hline
\multirow{3}{*}{EMNLP} & Acc.\(\uparrow\) & 78.92 \(\pm\) 0.43 & 82.23 \(\pm\) 0.18 & **86.81** \(\pm\) 0.22 & 79.92 \(\pm\) 0.77 \\
 & \(\Delta_{\text{DSP}}\downarrow\) & 42.87 \(\pm\) 1.40 & 34.19 \(\pm\) 0.91 & 48.82 \(\pm\) 1.97 & **14.46** \(\pm\) 3.35 \\
 & \(\Delta_{\text{DEO}}\downarrow\) & 37.89 \(\pm\) 3.27 & 34.49 \(\pm\) 0.91 & 48.83 \(\pm\) 1.97 & **10.92** \(\pm\) 2.87 \\ \hline
\end{tabular}
\end{table}
Table 4: Comparison to baselines (\(r=1\), 30% Top/Bottom).
\begin{table}
\begin{tabular}{c||c||c|c c c|c c c c c|c} \hline
 & & GCN & DSGCN & Residual2Vec & Tail-GNN & FairWalk & CFC & FairGNN & FairAdj & FairVGNN & DegFairGCN \\ \hline \hline
\multirow{3}{*}{Cham.} & Acc.\(\uparrow\) & 62.45 \(\pm\) 0.21 & 63.90 \(\pm\) 1.28 & 49.04 \(\pm\) 0.01 & 66.08 \(\pm\) 0.19 & 56.36 \(\pm\) 0.75 & 63.02 \(\pm\) 0.84 & 70.70 \(\pm\) 0.52 & 51.71 \(\pm\) 1.13 & 72.32 \(\pm\) 0.50 & 69.91 \(\pm\) 0.19 \\
 & \(\Delta_{\text{DSP}}\downarrow\) & 9.68 \(\pm\) 1.37 & 8.81 \(\pm\) 1.15 & 14.52 \(\pm\) 0.69 & 8.51 \(\pm\) 1.72 & 8.18 \(\pm\) 0.93 & 10.12 \(\pm\) 1.28 & 7.33 \(\pm\) 1.09 & 9.79 \(\pm\) 1.91 & 8.86 \(\pm\) 1.11 & **5.85** \(\pm\) 0.32 \\
 & \(\Delta_{\text{DEO}}\downarrow\) & 36.08 \(\pm\) 2.65 & 25.14 \(\pm\) 2.67 & 37.31 \(\pm\) 1.99 & 26.09 \(\pm\) 3.25 & 22.89 \(\pm\) 2.75 & 29.54 \(\pm\) 1.95 & 26.83 \(\pm\) 1.95 & 27.48 \(\pm\) 2.06 & 26.02 \(\pm\) 2.39 & **21.60** \(\pm\) 0.71 \\ \hline
\multirow{2}{*}{Squirrel} & Acc.\(\uparrow\) & 47.85 \(\pm\) 1.33 & 40.71 \(\pm\) 2.17 & 28.47 \(\pm\) 0.01 & 42.62 \(\pm\) 0.06 & 37.68 \(\pm\) 0.65 & 45.64 \(\pm\) 2.19 & 57.29 \(\pm\) 0.77 & 35.18 \(\pm\) 1.22 & 46.97 \(\pm\) 0.48 & 59.51 \(\pm\) 0.97 \\
 & \(\Delta_{\text{DSP}}\downarrow\) & 13.37 \(\pm\) 2.83 & 16.08 \(\pm\) 0.86 & 25.11 \(\pm\) 0.48 & 18.91 \(\pm\) 0.26 & **7.94** \(\pm\) 0.36 & 12.40 \(\pm\) 0.48 & 12.96 \(\pm\) 1.03 & 16.63 \(\pm\) 1.56 & 26.67 \(\pm\) 0.52 & 5.94 \(\pm\) 1.02 \\ \hline
\end{tabular}
\end{table}
Table 2: Comparison to baselines (\(r=1\), 20% Top/Bottom).
### Model Analysis
We conduct further model analysis on DegFairGNN, using the default base GNN and fairness settings.
**Ablation Study.** To validate the contribution of each module in DegFairGCN, we consider several variants. (1) _no scale & shift_: we remove the scaling and shifting operations for the debiasing contexts, _i.e._, no node-wise adaptation is involved. (2) _no contrast_: we remove the structural contrast, and utilize a common debiasing function with shared parameters for all the nodes. (3) _no modulation_: we remove the modulation (complementing or distilling) from Eq. (9), resulting in a standard GCN equipped with an additional fairness loss.
We report the results in Fig. 2 and make several observations. Firstly, without scaling and shifting, we often get worse accuracy and fairness, which means node-wise adaptation is useful and the model can benefit from the finer-grained treatment of nodes. Secondly, without structural contrast, the fairness metrics generally become worse. Thus, structural contrast is effective in driving the debiasing contexts toward degree fairness. Lastly, without the modulation of neighborhood aggregation, the fairness metrics become worse in most cases, implying that simply adding a fairness loss without properly debiasing the layer-wise neighborhood aggregation does not work well.
**Other analyses.** We present further analyses on the threshold \(K\), scalability and parameter sensitivity in Appendix G.2, G.3, and G.4, respectively, due to space limitation.
## 5 Related Work
We only present the most related work here, while leaving the rest to Appendix H due to space limitation.
**Fairness learning.** Fairness learning [2, 10] can be broadly categorized into three kinds. (1) Pre-processing methods usually eliminate bias by reshaping the dataset [10], such as correcting the labels [1]. (2) In-processing methods usually rely on model refinement for debiasing, such as applying additional regularizations or constraints [11, 12]. (3) Post-processing methods [10, 12] usually assign new labels to the predictions to remove bias.
Some recent approaches [13, 14, 15, 16] deal with the sensitive attribute-based fairness on graphs. FairWalk [13] tries to sample fair paths by guiding random walks based on sensitive attributes, while FairAdj [11] studies dyadic fairness for link prediction by adjusting the adjacency matrix. On the other hand, EDITS [14] proposes to debias the input attributed data so that GNNs can be fed with less biased data, while FairVGNN [14] tries to address the issue of sensitive attribute leakage by automatically identifying and masking correlated attributes. Others [14, 15, 16] employ discriminators [1] as additional constraints on the encoder to facilitate the identification of sensitive attributes. DeBayes [13] trains a conditional network embedding [12] by using a biased prior and evaluates the model with an oblivious prior, thus reducing the impact of sensitive attributes. Ma _et al._[15] investigate the performance disparity between test groups rooted in the distance between them and the training instances. Dong _et al._[14] study the individual fairness for GNNs from the ranking perspective. Besides, there are fairness learning studies on heterogeneous [16] and knowledge graphs [17]. However, none of these works is designed for degree fairness on graphs.
**Degree-specific GNNs.** Some recent studies investigate the influence of degrees on the performance of GNNs. Various strategies have been proposed, such as employing degree-specific transformations on nodes [23, 15], accounting for their different roles [1], balancing the sampling of nodes in random walks [18], or considering the degree-related performance differences between nodes [16, 15, 17]. However, they emphasize the structural difference between nodes in order to improve task accuracy, rather than to eliminate the degree bias for fairness.
## 6 Conclusions
In this paper, we investigated the important problem of degree fairness on GNNs. In particular, we made the first attempt to define and address the generalized degree fairness issue. To eliminate the degree bias rooted in the layer-wise neighborhood aggregation, we proposed a novel generalized degree fairness-centric GNN framework named DegFairGNN, which can flexibly work with most modern GNNs. The key insight is to target the root of degree bias, by modulating the core operation of neighborhood aggregation through a structural contrast. We conducted extensive experiments on three benchmark datasets and achieved promising results on both accuracy and fairness metrics.
\begin{table}
\begin{tabular}{c||c||c c|c c} \hline
 & & GAT & DegFairGAT & GraphSAGE & DegFairSAGE \\ \hline \hline
\multirow{3}{*}{Cham.} & Acc.\(\uparrow\) & 63.15 \(\pm\) 0.40 & 69.64 \(\pm\) 0.44 & 53.15 \(\pm\) 0.56 & 60.95 \(\pm\) 0.84 \\
 & \(\Delta_{\text{DSP}}\downarrow\) & 9.35 \(\pm\) 1.61 & **7.88** \(\pm\) 1.30 & 10.86 \(\pm\) 0.74 & **8.22** \(\pm\) 1.22 \\
 & \(\Delta_{\text{DEO}}\downarrow\) & 29.59 \(\pm\) 1.43 & **26.12** \(\pm\) 2.06 & 29.42 \(\pm\) 1.57 & **26.40** \(\pm\) 2.32 \\ \hline
\multirow{3}{*}{Squirrel} & Acc.\(\uparrow\) & 41.44 \(\pm\) 0.21 & 45.45 \(\pm\) 1.44 & 34.39 \(\pm\) 0.62 & 34.63 \(\pm\) 1.31 \\
 & \(\Delta_{\text{DSP}}\downarrow\) & 12.60 \(\pm\) 0.77 & 12.63 \(\pm\) 0.63 & 5.39 \(\pm\) 0.66 & **3.76** \(\pm\) 0.23 \\
 & \(\Delta_{\text{DEO}}\downarrow\) & 24.89 \(\pm\) 0.69 & **20.64** \(\pm\) 3.06 & 17.13 \(\pm\) 2.86 & **14.91** \(\pm\) 1.35 \\ \hline
\multirow{3}{*}{EMNLP} & Acc.\(\uparrow\) & 70.42 \(\pm\) 0.77 & 81.57 \(\pm\) 1.14 & 83.96 \(\pm\) 0.31 & 83.57 \(\pm\) 0.44 \\
 & \(\Delta_{\text{DSP}}\downarrow\) & 24.40 \(\pm\) 3.06 & **14.11** \(\pm\) 6.28 & 56.33 \(\pm\) 1.12 & **28.43** \(\pm\) 3.79 \\
 & \(\Delta_{\text{DEO}}\downarrow\) & **8.36** \(\pm\) 1.29 & 22.28 \(\pm\) 6.19 & 51.71 \(\pm\) 0.88 & **24.65** \(\pm\) 3.35 \\ \hline
\end{tabular}
\end{table}
Table 5: With other base GNNs (\(r=1\), 20% Top/Bottom).
Figure 2: Ablation study on the effect of each module.
Acknowledgments
This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funds (Grant No. A20H6b0151).
# HomPINNs: homotopy physics-informed neural networks for solving the inverse problems of nonlinear differential equations with multiple solutions

Haoyang Zheng, Yao Huang, Ziyang Huang, Wenrui Hao, Guang Lin
Published: 2023-04-06 | arXiv: http://arxiv.org/abs/2304.02811v2
###### Abstract
Due to the complex behavior arising from non-uniqueness, symmetry, and bifurcations in the solution space, solving inverse problems of nonlinear differential equations (DEs) with multiple solutions is a challenging task. To address this issue, we propose homotopy physics-informed neural networks (HomPINNs), a novel framework that leverages homotopy continuation and neural networks (NNs) to solve inverse problems. The proposed framework begins with the use of a NN to simultaneously approximate known observations and conform to the constraints of DEs. By utilizing the homotopy continuation method, the approximation traces the observations to identify multiple solutions and solve the inverse problem. The experiments involve testing the performance of the proposed method on one-dimensional DEs and applying it to solve a two-dimensional Gray-Scott simulation. Our findings demonstrate that the proposed method is scalable and adaptable, providing an effective solution for solving DEs with multiple solutions and unknown parameters. Moreover, it has significant potential for various applications in scientific computing, such as modeling complex systems and solving inverse problems in physics, chemistry, biology, etc.
keywords: Machine learning; Physics-informed neural networks; Nonlinear DEs; Multiple solutions; Homotopy continuation method
## 1 Introduction
Inverse problems are common in various fields of computational physics, such as fluid dynamics [1], electromagnetics [2], and geophysics [3]. These problems often involve recovering a set of parameters given certain observations or measurements. Traditional techniques for solving inverse problems include optimization-based methods, such as adjoint methods [4], Bayesian inversion [5], and sampling-based methods [6], etc. While the above-mentioned approaches have proven successful in specific research areas, they come with several obstacles, such as high computational costs, a need for significant domain knowledge, or issues with ill-posed problems.
In the last few years, deep learning techniques, specifically physics-informed neural networks (PINNs) [7; 8], have shown potential for solving inverse problems more accurately and efficiently. By integrating prior physical knowledge into neural network (NN) architectures, PINNs make it possible to solve inverse problems with superior generalization ability and a reduced number of required observations. The concept of PINNs was first introduced by Raissi _et al._[8] in a series of papers that demonstrated their potential for solving forward and inverse problems in computational physics. The intuition behind PINNs is the incorporation of prior physical knowledge into the loss function. By minimizing this loss function, the NN learns to approximate the solution to the inverse problem while satisfying the
underlying physics. Following the idea of PINNs and with the help of automatic differentiation, a Python library [9] was developed by Lu _et al._ to solve inverse problems in the area of computational science and engineering.
Since then, PINNs and their variants have been successfully applied to solve inverse problems in many different areas. Building upon PINNs, Pang _et al._[10] developed fractional PINNs (fPINNs) for solving space-time fractional advection-diffusion equations from scattered and noisy data. Kharazmi _et al._[11] proposed hp-Variational PINNs (hp-VPINNs) with domain decomposition to construct both local and global neural network approximations. Liu _et al._[12] introduced Bayesian PINNs (B-PINNs) for solving partial differential equations (PDEs), providing accurate predictions and quantifying aleatoric uncertainty with noisy data. Jagtap _et al._[13] tried to enforce flux continuity and average solution at sub-domain interfaces and proposed conservative PINNs (cPINNs) on discrete domains for nonlinear conservation laws. Jagtap _et al._[14] extended a generalized space-time domain decomposition approach, extended PINNs (XPINNs), for solving nonlinear PDEs on complex-geometry domains. Wang _et al._[15] investigated the limitations of PINNs in approximating high-frequency or multi-scale functions, and proposed novel architectures using spatio-temporal and multi-scale random Fourier features to improve the robustness and accuracy of PINNs. Wang _et al._[16] also investigated the training behavior and failures of PINNs using the Neural Tangent Kernel (NTK) framework, identifying a discrepancy in the convergence rate of different loss components, and proposed a novel gradient descent algorithm that adaptively calibrates the convergence rate. Additionally, PINNs have been implemented in various research domains, including fluid dynamics [17; 18], solid mechanics [19], molecular dynamics [20], quantum chemistry [21], etc. More papers related to solving inverse problems with PINNs can be found in [9; 22].
Despite the advancements and vast potential of PINNs for solving inverse problems demonstrated in the aforementioned works, they may not be well-suited for addressing differential equations (DEs) with multiple solutions, since they rely on gradient descent optimization algorithms that converge to a single local minimum. In problems with multiple solutions, PINNs may only converge to one of the solutions and fail to capture the others. Without any prior knowledge of the multiple solutions of DEs, PINNs may get trapped in a local minimum instead of exploring other minima, resulting in failure to solve inverse problems.
In this work, we consider the combination of PINNs and homotopy continuation methods to address this issue by enabling the exploration of multiple solutions for DEs. Homotopy continuation [23] is a numerical technique that traces the solution path of a given problem as the parameter changes from an initial to a final value. Hao _et al._[24] utilized the WENO scheme in homotopy continuation to resolve steady-state problems associated with hyperbolic conservation laws. Hao _et al._[25] presented a homotopy continuation-based method using domain decomposition to solve large polynomial systems resulting from discretizing nonlinear algebraic DEs. Hao _et al._[26] proposed an adaptive step-size homotopy tracking method for computing bifurcation points in nonlinear systems. Hao _et al._[27] developed a novel approach to compute multiple solutions of nonlinear DEs by adaptively constructing a spectral approximation space using a greedy algorithm. These homotopy methods have been successfully applied to solving pattern formation problems arising from mathematical biology [28].
In this context, we propose homotopy physics-informed neural networks (HomPINNs), which incorporate the homotopy continuation method into PINNs to find multiple solutions by gradually transforming a simple problem with given conditions into the target deformed problem of interest. The transformation is accomplished by adjusting the scale of the constraints in the loss function during NN training: we consider two constraints in the loss function, one term that minimizes the distance between the NN approximation and the given observations, and another that enforces the approximation's conformity with the governing equations. We first ensure that the NN approximation satisfies both constraints. As training proceeds, we adjust the magnitude of a homotopy tracking parameter to gradually focus more on the first constraint. As the homotopy tracking parameter changes, both DEs with multiple solutions and the corresponding inverse problems can be solved. This strategy makes it possible to explore the solution space more effectively and identify various local minima corresponding to different solutions of the DEs.
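To make the weighting idea concrete, below is a minimal Python sketch with a hypothetical linear schedule for the homotopy tracking parameter; the precise schedule and loss formulation used by HomPINNs are developed in the methodology section, so everything here (names, weights, schedule) is illustrative only.

```python
def homotopy_weights(step: int, total_steps: int):
    """Hypothetical linear homotopy schedule: the tracking parameter t moves from 0 to 1,
    shifting emphasis from satisfying both constraints equally toward fitting the
    observations (the first constraint)."""
    t = min(step / total_steps, 1.0)
    w_obs = 1.0 + t        # weight on the observation-fitting term grows
    w_de = 1.0 - 0.9 * t   # weight on the DE-residual term shrinks (kept positive)
    return w_obs, w_de

# Sketch of use inside a training loop:
# for step in range(total_steps):
#     w_obs, w_de = homotopy_weights(step, total_steps)
#     loss = w_obs * observation_loss + w_de * residual_loss
#     loss.backward(); optimizer.step(); optimizer.zero_grad()
```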
In contrast to traditional methods used for solving inverse problems of DEs with multiple solutions, our proposed method is advantageous over previous methods in the following aspects:
1. The proposed method provides both adaptable and flexible solutions for inverse problems.
2. The proposed method is ideal for inverse problems that are high-dimensional or computationally intense.
3. The proposed method is able to solve inverse problems even when only a limited number of DE solutions are available in the observations.
The remainder of this paper is structured as follows: The problem description of the inverse problems and the proposed HomPINN method are presented in Section 2, along with a comprehensive explanation of the construction
details. The effectiveness of our proposed HomPINNs is demonstrated through several numerical examples in Section 3. The concluding remarks and discussion of our proposed method are presented in Section 4.
## 2 The proposed framework
In this section, we first define the inverse problems associated with DEs. Next, we detail the technique we propose to identify the correct parameters within DEs whose solution is not unique, based on observations \(\{\mathbf{x}_{i},u_{i}\}_{i=1}^{N_{o}}\). As the observations contain different solutions of DEs and are unclassified, it is imperative to discuss how to classify those observations into different solutions of DEs. We finally delve further into inferring unknown parameters in DEs from the given observations.
### Problem description
We consider the following nonlinear DE:
\[\begin{cases}\mathcal{L}u\left(\mathbf{x}\right)=f\left(u,\mathbf{x};\mathbf{\lambda} \right),\ \ \mathbf{x}\in\Omega,\\ \mathcal{B}u\left(\mathbf{x}\right)=b\left(\mathbf{x}\right),\ \ \mathbf{x}\in\partial \Omega.\end{cases} \tag{1}\]
Here \(\mathcal{L}\) is a linear differential operator, \(f(u,\mathbf{x};\mathbf{\lambda})\) is a nonlinear forcing term with an unknown parameter vector \(\mathbf{\lambda}\) and solution \(u\), and \(\Omega\subset\mathbb{R}^{d}\) (\(d\) denotes the dimension of the input \(\mathbf{x}\)) is the domain of interest. \(\mathcal{B}\) is the operator for the boundary condition, and \(b\left(\mathbf{x}\right)\) is the forcing function on the boundary. Even under the homogeneous boundary condition, i.e., \(b(\mathbf{x})=0\), it is still possible that the solution of (1) is not unique and that the exact number of solutions is not known. Given the observation set, \(\{\mathbf{x}_{i},u_{i}\}_{i=1}^{N_{o}}\), where \(u_{i}\) is randomly observed from one of the solutions satisfying (1) and \(N_{o}\) is the total number of observations, the present study aims to determine the value of \(\mathbf{\lambda}\) in (1).
### Neural networks to solve differential equations with multiple solutions
By leveraging the observations \(\{\mathbf{x}_{i},u_{i}\}_{i=1}^{N_{o}}\), collocation points \(\{\mathbf{x}_{j}\}_{j=1}^{N_{c}}\) (\(N_{c}\) is the total number of collocation points), and the governing equation (1), we use a PINN to find the multiple solutions of (1). A general framework of the NN can be found in Fig. 1. Following [8], we construct a NN that yields an approximation \(\hat{\mathbf{u}}(\mathbf{x}_{i};\mathbf{\theta},\mathbf{\lambda})\) from the input \(\mathbf{x}_{i}\), the network parameters \(\mathbf{\theta}\), and an unknown parameter vector \(\mathbf{\lambda}\). Note that \(\hat{\mathbf{u}}(\mathbf{x}_{i};\mathbf{\theta},\mathbf{\lambda})\) is a concatenation of \(\hat{u}_{m}(\mathbf{x}_{i};\mathbf{\theta},\mathbf{\lambda})\) (\(1\leq m\leq M\)), where each \(\hat{u}_{m}(\mathbf{x}_{i};\mathbf{\theta},\mathbf{\lambda})\) approximates one of the solutions of (1), and \(M\) is an estimated total number of solutions of (1).
**Remark 1**.: _One concern is that the estimated \(M\) may not match the true number of solutions of (1). We explore how to identify its correct value in the experiments (see Sections 3.1 and 3.2)._
Figure 1: Schematic of one PINN for the inverse problems: given the observations \(\{\mathbf{x}_{i},u_{i}\}_{i=1}^{N_{o}}\) and collocation points \(\{\mathbf{x}_{j}\}_{j=1}^{N_{c}}\), the PINN makes a prediction \(\hat{\mathbf{u}}\), where \(\hat{\mathbf{u}}\) is a concatenation of \(\hat{u}_{m}\) (\(m=1,2,\cdots,M\)). Here \(M\) is an estimated total number of solutions of (1), and \(\hat{u}_{m}(\mathbf{x}_{i};\mathbf{\theta},\mathbf{\lambda})\) approximates one of the solutions of (1). Two constraints (purple rectangular boxes) constrain the PINN during the optimization.
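To make the construction concrete, the following is a minimal PyTorch sketch of such a multi-output network (PyTorch is our assumption; the class name, layer sizes, and initialization are illustrative, with the widths chosen to match the 1-30-30-30-\(M\) network used later in Section 3.1). The unknown DE parameter is registered as a trainable parameter so that it is optimized jointly with \(\mathbf{\theta}\):

```python
import torch
import torch.nn as nn

class MultiSolutionPINN(nn.Module):
    """Fully connected NN with M outputs, one per candidate solution u_m.

    The unknown DE parameter lambda is a trainable scalar optimized
    jointly with the network weights theta (illustrative minimal setup).
    """

    def __init__(self, dim_in=1, width=30, depth=3, M=2, lam_init=1.0):
        super().__init__()
        layers, d = [], dim_in
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]
            d = width
        layers.append(nn.Linear(d, M))
        self.net = nn.Sequential(*layers)
        self.lam = nn.Parameter(torch.tensor(lam_init))  # unknown lambda

    def forward(self, x):
        # x: (N, dim_in) -> (N, M); column m approximates solution u_m
        return self.net(x)
```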
For DEs with multiple solutions on the same domain, a follow-up question is how to design an appropriate loss function to classify observations into individual solutions. Merely decreasing the distance (measured by traditional loss functions such as mean squared error or mean absolute error) between the NN predictions and the observations is insufficient, as the observations from the different solutions of (1) are unclassified; it could lead to poor generalization of the NNs. A better strategy is to classify the observations while reducing the errors between the NN predictions and the observations, namely,
\[L(\mathbf{\theta},\mathbf{\lambda})=\sum_{i=1}^{N_{o}}\min_{1\leq m\leq M}\|\hat{u}_{m}(\mathbf{x}_{i};\mathbf{\theta},\mathbf{\lambda})-u_{i}\|^{2}+\alpha\sum_{j=1}^{N_{c}}\sum_{m=1}^{M}\big{(}\mathcal{L}\hat{u}_{m}(\mathbf{x}_{j};\mathbf{\theta},\mathbf{\lambda})-f(\hat{u}_{m}(\mathbf{x}_{j};\mathbf{\theta},\mathbf{\lambda}),\mathbf{x}_{j};\mathbf{\lambda})\big{)}^{2}. \tag{2}\]
Here \(\alpha\) is a Lagrange multiplier balancing the two loss terms. Assuming that the observations come from \(M\) or fewer smooth and continuous solutions of (1), each \(\hat{u}_{m}(\cdot)\) is an approximation to one of the solutions. The first term in (2) quantifies the distance between \(\hat{u}_{m}(\cdot)\) and the observations and determines, at a specific observation point \(\mathbf{x}_{i}\), which \(\hat{u}_{m}(\mathbf{x}_{i})\) is closest to the observation at \(\mathbf{x}_{i}\). In the optimization process, we optimize both \(\mathbf{\theta}\) (the NN parameters) and \(\mathbf{\lambda}\) (the DE parameters); this enables the NN to improve its ability to classify observations into the different \(\hat{u}_{m}(\cdot)\). One concern is that incorrectly assigning an observation to a specific \(\hat{u}_{m}(\cdot)\) at the beginning can cause nonsmooth and discontinuous function approximations, leading to a buildup of errors that is hard to correct later. A remedy is the second loss term in (2), which evaluates the residual of (1) using \(\hat{u}_{m}(\cdot)\) at the collocation points. By properly weighting the two terms in (2), the observations can be correctly classified.
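A sketch of the loss (2), assuming the multi-output network above: the data term takes, at each observation point, the minimum squared error over the \(M\) heads, and `residual_fn` (a hypothetical callback encoding \(\mathcal{L}\hat{u}_{m}-f\) for the DE at hand) is evaluated at the collocation points. Averaging over points follows the normalization used later in (3):

```python
import torch

def hompinn_loss(model, x_obs, u_obs, x_col, alpha, residual_fn):
    """Loss (2): per-point minimum over the M heads plus PDE residual.

    x_obs: (N_o, d), u_obs: (N_o, 1), x_col: (N_c, d).
    residual_fn(u_m, x, lam) returns L u_m - f(u_m, x; lam) for one head,
    using autograd for the derivatives.
    """
    # data term: each observation is assigned to its closest head
    u_hat = model(x_obs)                               # (N_o, M)
    data_term = ((u_hat - u_obs) ** 2).min(dim=1).values.mean()

    # residual term: enforce the DE for every head at collocation points
    x = x_col.clone().requires_grad_(True)
    u = model(x)                                       # (N_c, M)
    res = sum((residual_fn(u[:, m], x, model.lam) ** 2).mean()
              for m in range(u.shape[1]))
    return data_term + alpha * res / u.shape[1]
```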
### Physics-informed neural networks with homotopy process
While a well-trained PINN may work for some simple inverse problems, recovering unknown parameters \(\mathbf{\lambda}\) in (1) with multiple solutions is still challenging: a bad initial guess of \(\mathbf{\lambda}\) can cause the network training to fail. Leveraging the idea of homotopy continuation [24], which starts with a simpler, related problem with known solutions and tracks it to the deformed problem of interest, we develop HomPINNs to solve the inverse problem of DEs with multiple solutions and finally identify the unknown parameters in (1). The magnitude of \(\alpha\) in the loss function (2) is used to trace from the constraint satisfying both observations and DEs to the constraint for observations only. During the homotopy process, the unknown parameters \(\mathbf{\lambda}\) are adjusted to gradually match the observations until the correct \(\mathbf{\lambda}\) makes the NN approximation fit the observations well. Fig. 2 illustrates the general workflow of HomPINNs. HomPINNs feature a shared network structure such that the parameters obtained in the previous homotopy step become the initial guess for the next step. In step \(k\) (\(1\leq k\leq K\)), the loss function becomes
\[L_{k}(\mathbf{\theta},\mathbf{\lambda})=\frac{1}{N_{o}}\sum_{i=1}^{N_{o}}\min_{1\leq m\leq M}\|\hat{u}_{m}(\mathbf{x}_{i};\mathbf{\theta},\mathbf{\lambda})-u_{i}\|^{2}+\frac{\alpha_{k}}{MN_{c}}\sum_{j=1}^{N_{c}}\sum_{m=1}^{M}\big{(}\mathcal{L}\hat{u}_{m}(\mathbf{x}_{j};\mathbf{\theta},\mathbf{\lambda})-f(\hat{u}_{m}(\mathbf{x}_{j};\mathbf{\theta},\mathbf{\lambda}),\mathbf{x}_{j};\mathbf{\lambda})\big{)}^{2}, \tag{3}\]
Figure 2: Schematic of HomPINNs: the NN at step \(k\) (\(k=1,2,\cdots,K\)) is constructed as in Fig. 1, with its network parameters initialized by the well-trained NN from step \(k-1\). The observations and collocation points are used to train the NN at each homotopy step. The homotopy process starts at step \(1\) and ends at step \(K\), finally yielding the unknown parameters \(\mathbf{\lambda}^{*}\), the optimal network parameters \(\mathbf{\theta}^{*}\), and the solutions of (1).
where \(\alpha_{k}\) is the homotopy tracking parameter in step \(k\). The value of \(\alpha_{k}\) decreases monotonically, i.e., \(\alpha_{0}>\alpha_{1}>\cdots>\alpha_{k}>\cdots>\alpha_{K-1}>\alpha_{K}\). As \(\alpha_{k}\) decreases, the NN tracks the given observations more closely, and \(\mathbf{\lambda}\) is adjusted to better fit the observations. In this study, \(\alpha_{k}\) is set to decrease exponentially:
\[\alpha_{k}=\alpha_{0}r^{k-1}. \tag{4}\]
Here \(\alpha_{0}\) serves as the initial value of the process, and \(r\) is the decay rate of the homotopy process. The training procedure of HomPINNs is summarized in Fig. 2 and Algorithm 1. Given the observations, collocation points, and parameter initializations, HomPINNs consist of \(K\) homotopy steps, inside each of which the optimization is performed. The present study uses the Adam optimizer [29] with a learning rate of \(10^{-3}\) and beta values of \(0.9\) and \(0.99\) in each homotopy step. Upon finishing the \(K\) homotopy steps, the optimized parameters are produced as the final output.
```
1:Input Observations \(\{\mathbf{x}_{i},\mathbf{u}_{i}\}_{i=1}^{N_{o}}\).
2:Input Collocations \(\{\mathbf{x}_{j}\}_{j=1}^{N_{c}}\).
3:Initialize DE parameter \(\mathbf{\lambda}_{0}\).
4:Initialize PINN parameters \(\mathbf{\theta}_{0}\).
5:Initialize Number of outputs \(M\).
6:Initialize Homotopy tracking parameters \(\alpha_{0}\), \(r\)
7:for\(k=1,2,\cdots,\)\(K\)do
8: Update homotopy tracking parameter \(\alpha_{k}\) from (4).
9: Initialize \(\mathbf{\theta}=\mathbf{\theta}_{k-1}\), \(\mathbf{\lambda}=\mathbf{\lambda}_{k-1}\),
10: Optimize parameters \([\mathbf{\theta}_{k},\mathbf{\lambda}_{k}]=\operatorname*{argmin}_{\mathbf{\theta},\mathbf{ \lambda}}L_{k}(\mathbf{\theta},\mathbf{\lambda})\), where \(L_{k}(\mathbf{\theta},\mathbf{\lambda})\) is computed from (3).
11:endfor
12:Output DE parameters \(\mathbf{\lambda}_{K}\).
13:Output PINN parameters \(\mathbf{\theta}_{K}\).
```
**Algorithm 1** The training process of the proposed HomPINNs.
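A compact sketch of Algorithm 1 built on the pieces above; the schedule follows (4) and the optimizer settings follow the text, while `alpha0`, `r`, `K`, and the epochs per step are illustrative placeholders rather than the paper's exact settings:

```python
import torch

def train_hompinn(model, x_obs, u_obs, x_col, residual_fn,
                  alpha0=1.0, r=0.5, K=11, epochs_per_step=2000):
    """Sketch of Algorithm 1: K homotopy steps with decaying alpha_k.

    Reusing the same model object across steps makes the parameters
    of step k the initial guess of step k+1.
    """
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.99))
    for k in range(1, K + 1):
        alpha_k = alpha0 * r ** (k - 1)                # schedule (4)
        for _ in range(epochs_per_step):
            opt.zero_grad()
            loss = hompinn_loss(model, x_obs, u_obs, x_col, alpha_k,
                                residual_fn)
            loss.backward()
            opt.step()
    return model.lam.detach(), model
```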
**Remark 2**.: _In certain situations, particularly when the number of solutions of (1) is large or the parameter space of \(\mathbf{\lambda}\) is high-dimensional, HomPINNs may output values \(\mathbf{\lambda}^{*}\) that are not accurate enough. Our strategy is to treat \(\mathbf{\lambda}^{*}\) as a hyper-parameter of the homotopy physics-informed neural network for the forward problem (referred to as forward HomPINNs in the following) [30] and train it to obtain the optimized network parameters \(\mathbf{\theta}^{*}\). Once this training is completed, the optimized parameters \(\mathbf{\lambda}^{*}\) and \(\mathbf{\theta}^{*}\) initialize the NN for the inverse problem (\(\mathbf{\lambda}_{0}=\mathbf{\lambda}^{*}\) and \(\mathbf{\theta}_{0}=\mathbf{\theta}^{*}\)). We then run the proposed HomPINNs again to identify the correct \(\mathbf{\lambda}\)._
## 3 Numerical examples
In this section, we conduct numerical experiments to verify the proposed methodology. We begin with one-dimensional problems to study the effect of hyper-parameters on the algorithm's performance and to provide general rules of thumb for implementing the method. Building on these experiments, we proceed to solve a two-dimensional elliptic DE problem. All the tests are run on a desktop with the following specifications: Intel Core i9-10920X CPU, RTX 3090 GPU, and 64 GB DDR4 RAM. The number of collocation points is chosen large enough that it does not influence the model performance.
### 1D example with two solutions
The first one-dimensional DE considered is from [25] and can be expressed as:
\[\begin{cases}\dfrac{\partial^{2}u(x)}{\partial x^{2}}=-\lambda- \lambda u^{4},\ \ x\in(0,1)\\ \dfrac{\partial u(x)}{\partial x}\bigg{|}_{x=0}=u(x)|_{x=1}=0.\end{cases} \tag{5}\]
With \(\lambda=1.20\), (5) has two solutions, and 80 observations are randomly collected from these two solutions. Since there are two solution values \(\{u_{1}(x_{i}),u_{2}(x_{i})\}\) at a specific observation location \(x_{i}\), the observation \(u_{i}\) at \(x_{i}\) is randomly selected from \(\{u_{1}(x_{i}),u_{2}(x_{i})\}\). All the observation locations \(\{x_{i}\}_{i=1}^{N_{o}}\) are randomly selected in the domain of interest. This strategy of generating observations is applied in all the tests in the present study.
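For this example, the residual callback consumed by the loss sketch in Section 2.2 would encode \(u''+\lambda+\lambda u^{4}\) via automatic differentiation (a hypothetical helper; the boundary conditions of (5) would be enforced with an additional penalty term, omitted here):

```python
import torch

def residual_eq5(u_m, x, lam):
    """Residual of (5) for one head: d2u/dx2 + lam + lam * u**4.

    u_m: (N_c,) head values computed from x with requires_grad=True.
    """
    du = torch.autograd.grad(u_m.sum(), x, create_graph=True)[0]   # (N_c, 1)
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]   # (N_c, 1)
    return d2u[:, 0] + lam + lam * u_m ** 4
```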
We first discuss how to determine \(M\) in HomPINN. The NN used in this experiment is a fully connected network with 1, 30, 30, 30, and \(M\) neurons in its layers, respectively. The network is initialized following the strategy in [31] to start the training in the first homotopy step. Table 1 summarizes the results with different \(M\). We observe sudden changes in the identified value of \(\lambda\), the training loss, and the testing loss when \(M\) changes from 1 to 2. This behavior suggests that the exact number of solutions is not one. Additionally, when \(M\) increases from 2 to 3 or 4, the identified values of \(\lambda\) from HomPINN do not change, which implies that there are two solutions, i.e., \(M=2\). Moreover, the results in Table 1 also indicate that even when \(M\) is set to a value greater than the exact number of solutions, HomPINN can still accurately identify \(\lambda\). In light of the above results, the correct value of \(M\) can be estimated by trial and error.
Moreover, the predictions of the optimized HomPINNs with \(M=2\) are illustrated in Fig. 3, along with the observations from the two different solutions of (5).
The change of the training loss during the homotopy process for different \(M\) is depicted in Fig. 4. When using \(M=1\), the training loss stays at about \(2\cdot 10^{-2}\) throughout the homotopy process, which also illustrates that \(M=1\) is not a suitable choice for this example. On the other hand, when \(M\) is set to 2, 3, or 4, the training losses of HomPINNs decrease as the homotopy steps proceed and converge at nearly a sub-linear rate.
Our next task is to examine the effect of the number of observations (\(N_{o}\)). Based on the prior results, we use \(M=2\) and decrease \(N_{o}\) from 80 to 60, 40, and 20; the results are shown in Table 2. A smaller number of observations produces larger errors in the identified \(\lambda\), the training loss, and the testing loss, yet they remain on a similar level. These results indicate that the proposed method can still be effective even when the number of observations is limited. This property is favorable when computing resources are constrained. Increasing the number of observations helps HomPINNs achieve a higher level of accuracy.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(M\) & PARAM value & PARAM error & Train loss & Test loss \\ \hline
1 & 1.3258 & 1.26E-1 & 2.04E-2 & 1.18E-1 \\ \hline
2 & 1.2000 & 3.90E-6 & 1.89E-9 & 2.13E-6 \\ \hline
3 & 1.2000 & 4.60E-6 & 4.63E-9 & 2.07E-6 \\ \hline
4 & 1.2000 & 1.50E-6 & 2.98E-9 & 1.98E-6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Values of the identified \(\lambda\) from HomPINN, errors of the identified \(\lambda\) from HomPINN, training losses, and testing losses, using different input \(M\) to HomPINN. The correct value of \(\lambda\) is 1.20.
Figure 3: Observations sampled from the two solutions of (5) and predictions made by HomPINNs. Here the x-axis is \(x\), and the y-axis is \(u\). The black circles are the sampled observations. The red solid line with squares and the blue solid line with stars are the predicted \(u\) values from HomPINNs. The correct parameter value is \(\lambda=1.20\), and with the correct estimate \(M=2\), the parameter identified by HomPINNs is \(\lambda=1.200\).
### 1D example with multiple solutions
We delve into a more intricate example having 7 solutions [26]:
\[\left\{\begin{aligned} &\frac{\partial^{2}u(x)}{\partial x^{2}}=u^{4}-pu^{2},\ \ x\in(0,1),\\ &\left.\frac{\partial u(x)}{\partial x}\right|_{x=0}=\left.u(x)\right|_{x=1}=0.\end{aligned}\right. \tag{6}\]
The proposed HomPINN aims to identify the value of \(p\) given 210 observations (\(N_{o}=210\)). The observations \(\{x_{i},u_{i}\}_{i=1}^{210}\) are randomly selected from the 7 solutions of (6) with the exact value \(p=18.00\). However, we find that a single process of HomPINN is unable to give a sufficiently accurate output of \(p\), because the current example is more complex. To alleviate this inaccuracy, we implement the strategy in Remark 2. Our test shows that two processes of HomPINN are sufficient for this example.
Table 3 shows the outputs of the two processes of HomPINN with different \(M\). We observe the same jump of errors when we move from \(M=6\) to \(M=7\), implying that (6) should have 7 solutions. More insights can be found in Fig.
5. Fig. 5(a) displays the predictions from one complete process of HomPINN, for which the output of \(p\) is 17.3516. We then use \(p=17.3516\) as an input to train the forward HomPINNs, whose output of \(u\) is depicted in Fig. 5(b). Finally, we use \(p=17.3516\) and the NN parameters of the trained forward HomPINNs to start the second
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(N_{o}\) & PARAM value & PARAM error & Train loss & Test loss \\ \hline
20 & 1.2000 & 3.94E-5 & 6.35E-9 & 9.49E-6 \\ \hline
40 & 1.2000 & 2.99E-5 & 5.81E-9 & 8.60E-6 \\ \hline
60 & 1.2000 & 1.13E-5 & 2.26E-9 & 7.66E-6 \\ \hline
80 & 1.2000 & 0.39E-5 & 1.89E-9 & 2.13E-6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Values of the identified parameter from HomPINNs, errors of the identified parameter from HomPINN, training loss, and testing loss when different numbers of observations are used in HomPINNs. The correct value of \(\lambda\) is 1.20.
Figure 4: Training loss during the homotopy steps with \(M=1,2,3,4\). The x-axis indicates the number of epochs during the homotopy process, where \(\alpha_{k}\) (\(k=1,2,\cdots,11\)) marks the index of each homotopy step, and the y-axis is the training loss on a logarithmic scale.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(M\) & 1st PARAM & 2nd PARAM & PARAM error & Train loss \\ \hline
5 & 17.3050 & 17.2868 & 7.13E-1 & 1.51E-1 \\ \hline
6 & 17.4538 & 17.6210 & 3.79E-1 & 3.65E-2 \\ \hline
7 & 17.3516 & 17.9982 & 1.84E-3 & 2.15E-5 \\ \hline
8 & 17.2918 & 17.9987 & 1.26E-3 & 1.79E-5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Values of the identified parameter in the first and second homotopy process of HomPINN, and training losses. The correct value of \(p\) is 18.00.
homotopy process of HomPINN, and its results are shown in Fig. 5(c). As shown in Fig. 5(b), even though running one homotopy process of HomPINN only outputs a value of \(p\) that is not very accurate, the forward HomPINN provides a better prediction of \(u\) given this value of \(p\). In other words, the NN parameters (\(\mathbf{\theta}\)) in the trained forward HomPINN are closer to their global optimum. As a result, using these NN parameters to restart the second homotopy process of HomPINNs helps to find a more accurate value of \(p\). Once \(p\) is identified by HomPINNs, we use another forward HomPINN to obtain all the solutions of (6), as shown in Fig. 5(c).
In the next test, the observations \(\{x_{i},u_{i}\}_{i=1}^{180}\) are randomly selected from three of the seven solutions (see black circles in Fig. 6(a)). Under this configuration, we found that the trained HomPINNs after one homotopy process can recover \(p\) well. We subsequently apply forward HomPINNs with the recovered value of \(p\) to predict \(u\), as shown in Fig. 6(b). The value and error of \(p\) from the forward HomPINNs and training losses are also shown in Table 4.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(M\) & PARAM value & PARAM error & Training loss \\ \hline
1 & 6.1780 & 7.13E-1 & 8.86E0 \\ \hline
2 & 15.9029 & 3.79E-1 & 1.46E0 \\ \hline
3 & 17.9983 & 1.84E-3 & 5.77E-5 \\ \hline
4 & 17.9989 & 1.26E-3 & 1.62E-5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Values of the identified parameter (\(p\)) from HomPINN, errors of the identified parameter, and final losses. The observations are sampled from three of the seven solutions (only partial information is given). The correct value of \(p\) is 18.
Figure 5: Results from two processes of HomPINN. Here the x-axis is \(x\), and the y-axis is \(u\). The black circles are observations sampled from the 7 solutions, and seven solid lines represent the predicted solutions of (6). (a): the results after the first homotopy process of HomPINNs whose output of \(p\) is 17.3516. (b): the results from the forward HomPINNs given parameter \(p=17.3516\). (c): the second homotopy process of HomPINNs whose output of \(p\) is 17.9982.
It is encouraging to note that the proposed HomPINNs are able to identify the unknown parameters accurately even when the observations do not contain all the solutions of a DE. In fact, observing fewer solutions can simplify HomPINN training, since overlapping or intersecting solutions can make the training more challenging, in particular when the DE is defined on a high-dimensional domain. Once the unknown parameter is identified (\(p=17.9983\)), the forward HomPINN is utilized to find all the solutions of (6) (see Fig. 6(b)). The training loss of HomPINN when identifying \(p\) is shown in Fig. 7.
### Gray-Scott simulations
The Gray-Scott model mathematically represents the reaction-diffusion process of two chemical species interacting with each other and their environment. Its origins date back to the 1980s when Gray and Scott [32] first introduced it. Since then, it has been used to simulate a wide range of phenomena, such as pattern formation [28], chemical waves [33], and chemical turbulence [34]. The model describes the changes in concentration of the two species, \(A(x,y,t)\) and \(S(x,y,t)\). The present study considers the steady state of the Gray-Scott model with the zero-flux boundary condition:
\[\begin{cases}D_{A}\Delta A+SA^{2}-(\mu+\rho)A=0,\\ D_{S}\Delta S-SA^{2}+\rho(1-S)=0,\\ \mathbf{n}\cdot\nabla A|_{\partial\Omega}=\mathbf{n}\cdot\nabla S|_{\partial \Omega}=0.\end{cases} \tag{7}\]
Here \(\mathbf{n}\) denotes the outward normal at the domain boundary, and \(D_{A}\) and \(D_{S}\) are the diffusion coefficients of the two species, respectively. The parameter \(\rho\) denotes the constant supply of species \(S\) into the system, and \(\mu\) represents the decay rate of species \(A\).
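As a sketch, the two steady-state residuals of (7) for one solution branch can be assembled with automatic differentiation as follows (assuming a PyTorch setup as in the earlier sketches; the function name and the separate handling of the zero-flux boundary terms are illustrative):

```python
import torch

def gray_scott_residual(A, S, xy, DA, DS, rho, mu):
    """Steady-state residuals of (7) for one solution branch.

    A, S: (N,) network outputs at interior points xy (requires_grad=True),
    xy: (N, 2).  The zero-flux boundary terms are handled separately.
    """
    def laplacian(u):
        g = torch.autograd.grad(u.sum(), xy, create_graph=True)[0]  # (N, 2)
        uxx = torch.autograd.grad(g[:, 0].sum(), xy, create_graph=True)[0][:, 0]
        uyy = torch.autograd.grad(g[:, 1].sum(), xy, create_graph=True)[0][:, 1]
        return uxx + uyy

    r_A = DA * laplacian(A) + S * A ** 2 - (mu + rho) * A
    r_S = DS * laplacian(S) - S * A ** 2 + rho * (1.0 - S)
    return r_A, r_S
```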
Figure 6: The results after one homotopy process of HomPINNs and the predictions made by the forward HomPINNs after the parameter is recovered. Here x-axis is \(x\), and the y-axis is \(u\). The black circles are observations sampled from three of the seven solutions, and solid lines represent the predicted solutions of (6). Left: the trained HomPINNs after one homotopy process. The recovered parameter: \(p=17.9983\). Right: all the potential solutions of (6) predicted by the forward HomPINNs given \(p=17.9983\) from the proposed HomPINN.
Figure 7: The training loss in HomPINN for the example in Section 3.2. Here \(\alpha_{k}\) on the x-axis indexes the homotopy steps, and the y-axis is the training loss (logarithmic scale). The observations for this training are sampled from three of the seven solutions, and the black solid line is the training loss in HomPINN when \(M=3\). The training loss decreases within each homotopy step, and the output value of \(p\) is 17.9983.
To generate observations, we use \(D_{A}=2.5\times 10^{-4}\), \(D_{S}=5\times 10^{-4}\), \(\rho=4.0\times 10^{-2}\), and \(\mu=6.5\times 10^{-2}\). The observations do not contain all the solutions of (7) with the chosen parameters. Our goal is to utilize HomPINNs to identify the values of the parameters and then retrieve all the solutions of (7) given the parameters from HomPINNs. To tackle this complicated task, we first employ trial and error to determine the number of observations (\(N_{o}\)) and the NN structure (the number of neurons in each layer and the number of layers) that are most effective for HomPINNs.
We first randomly generate a total of 10,000 observations \(\{\mathbf{x}_{i},\mathbf{u}_{i}\}_{i=1}^{10,000}\) from the four known solutions (see black circles in Fig. 8). Then we randomly select 1,000 and 5,000 samples from \(\{\mathbf{x}_{i},\mathbf{u}_{i}\}_{i=1}^{10,000}\) as the observations input to HomPINNs.
We also investigate the network structure by altering the number of neurons in each layer (width) and the number of layers. Table 5 summarizes the values of the identified parameters from HomPINNs and the training losses for different numbers of observations, network widths, and network depths. We discover that HomPINNs can fail for several reasons, such as limited observations, inadequate network width, or insufficient network layers. HomPINNs become more effective when the number of observations reaches 5,000, the network width is at least 128, and the number of layers is at least 6. When the number of given observations is too small, HomPINNs may not be able to distinguish the different patterns (solutions) in the observations accurately; this can lead to overfitting to the observations, which hurts generalization. The numbers of neurons and layers in a NN determine the width and depth of the network; if either is too small, HomPINNs may not capture the complexity of the observations accurately or generalize the patterns of the solutions well. The performance of HomPINNs can thus be enhanced by increasing the number of observations, the network width, and the number of network layers to capture complex patterns in the observations and provide more accurate predictions.
Based on the above analysis, we opted to select 5,000 observations randomly from the four known solutions (see Fig. 9) and configure the NN with 6 hidden layers, each of which has 128 neurons.
After the first process of HomPINN, the identified parameter values are \(D_{A}=2.020\times 10^{-4}\), \(D_{S}=4.849\times 10^{-4}\), \(\rho=3.678\times 10^{-2}\), and \(\mu=6.882\times 10^{-2}\) (see Fig. 10 and Table 5), indicating that HomPINNs recover the unknown parameters to some extent. Nevertheless, the first process is unable to fully discern the different solutions associated with the observations: as shown in Fig. 10, the results produced by HomPINN may mix observations from multiple solutions, yet the predicted solutions still satisfy the governing equation (7).
We implement the strategy in Remark 2 to improve the accuracy. Using the parameters identified by the first homotopy process of HomPINN as inputs, we train the forward HomPINNs following the process outlined in [30]. Fig. 11 depicts the forward HomPINN predictions of \(A(\cdot)\) and \(S(\cdot)\), which are closer to the exact solutions illustrated in Fig. 8, even though the input parameters to the forward HomPINNs are not very accurate. Subsequently, the
Figure 8: The four known solutions of the Gray–Scott model with the chosen parameters, used to generate the observations input to HomPINN. These contour plots show the distribution of the components \(A(x,y)\) (top four figures) and \(S(x,y)\) (bottom four figures) on a domain of \([0,1]\times[0,1]\), with the horizontal and vertical axes representing x and y, respectively. The parameters used here: \(D_{A}=2.5\times 10^{-4}\), \(D_{S}=5\times 10^{-4}\), \(\rho=4.0\times 10^{-2}\), and \(\mu=6.5\times 10^{-2}\).
Figure 10: Predictions of the Gray–Scott model after the first homotopy process of HomPINNs. The contour plots are for component \(A(x,y)\). The horizontal and vertical axes represent x and y, respectively. The domain is \([0,1]\times[0,1]\). Learned parameters from HomPINN: \(D_{A}=2.020\times 10^{-4}\), \(D_{S}=4.849\times 10^{-4}\), \(\rho=3.678\times 10^{-2}\), and \(\mu=6.882\times 10^{-2}\).
Figure 9: Sampled observations of the Gray–Scott model in 2D. The contour plots are for the component \(A(x,y)\). The horizontal and vertical axes represent x and y, respectively, and the domain is \([0,1]\times[0,1]\). The black circles are sampled observations.
NN parameters of the trained forward HomPINNs, along with the DE parameters identified in the first homotopy process of HomPINNs, are used to initialize the second homotopy process of HomPINNs. With the DE parameters identified in the second homotopy process being closer to the correct ones, the predictions of \(A(\cdot)\) and \(S(\cdot)\) are improved (see Fig. 12). Tables 6 and 7 summarize the training outcomes of the two homotopy processes of HomPINNs, including the DE parameters obtained in each homotopy step, the parameter errors, and the training losses. The results demonstrate that the DE parameters can be correctly determined by HomPINNs. Once the correct DE parameters are identified, they are input to the forward HomPINNs to discover all the solutions of the governing equation (7), as depicted in Fig. 13.
Figure 13: All the solutions of (7) from the forward HomPINNs given the parameters after the second homotopy process of HomPINN. The contour plots are for component \(A(x,y)\). The horizontal and vertical axes represent x and y, respectively. The parameters used here for the HomPINN are: \(D_{A}=2.409\times 10^{-4}\), \(D_{S}=5.064\times 10^{-4}\), \(\rho=4.076\times 10^{-2}\), and \(\mu=6.432\times 10^{-2}\).
## 4 Conclusion and Discussion
To tackle the complex behavior arising from non-uniqueness, symmetry, and bifurcations in the solution space, we proposed a novel HomPINN method for solving inverse problems in nonlinear DEs with multiple solutions. The proposed method combines homotopy continuation and NNs in the training process to estimate unknown parameters in the DEs and is able to produce precise predictions. We evaluated the effectiveness of the proposed method on different tasks. For DEs in one dimension, we explored the effect of various factors on the model's performance, such as the estimated number of solutions, the number of observations, and observations that do not include all the DE solutions. Next, the two-dimensional Gray-Scott problem was investigated to determine the influence of the network structure. Through the tests conducted, we determined a suitable number of observations and network structure. With these hyper-parameters, HomPINN successfully identified the values of the four parameters in the Gray-Scott problem, even though the input observations contained only 4 of the solutions. All the solutions of the Gray-Scott problem were finally recovered with the forward HomPINN, using the parameters identified by the proposed HomPINN.
Our experimental results indicate the following advantages of the proposed model: 1. HomPINN offers a flexible and adaptable solution for inverse problems with large-scale datasets; 2. HomPINN is able to tackle both one- and two-dimensional problems; 3. suitable numbers of observations and network structures were identified in our tests; 4. HomPINN is able to solve inverse problems even if we do not know all the solutions of a DE; 5. it is possible to discover solutions of a DE that are not available in the observations.
In the future, we will focus on creating techniques to automatically identify the number of solutions or investigating more efficient homotopy continuation methods. In addition, we aspire to evaluate our method on a wider range of problems, including those with more intricate geometries and boundary conditions, to thoroughly assess its performance and versatility.
## 5 Acknowledgements
G.L. and H.Z. gratefully acknowledge the support of the National Science Foundation (DMS-1555072, DMS-2053746, and DMS-2134209), Brookhaven National Laboratory Subcontract 382247, and U.S. Department of Energy (DOE) Office of Science Advanced Scientific Computing Research program DE-SC0021142 and DE-SC0023161.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{\(k\)} & \multicolumn{4}{c}{PARAM value} & \multicolumn{4}{c}{PARAM error} & \multirow{2}{*}{Train Loss} \\ \cline{2-5} \cline{6-9} & \(D_{A}\) & \(D_{S}\) & \(\rho\) & \(\mu\) & \(D_{A}\) & \(D_{S}\) & \(\rho\) & \(\mu\) & \\ \hline & 1.000E-5 & 2.000E-5 & 1.000E-3 & 1.000E-3 & 2.400E-4 & 4.800E-4 & 3.900E-2 & 6.400E-2 & 6.157E-3 \\
1 & 1.620E-4 & 4.709E-4 & 3.820E-2 & 8.931E-2 & 8.800E-5 & 2.911E-5 & 1.800E-3 & 2.431E-2 & 5.051E-4 \\
2 & 1.875E-4 & 4.816E-4 & 3.710E-2 & 7.155E-2 & 6.252E-5 & 1.840E-5 & 2.904E-3 & 6.552E-3 & 2.296E-5 \\
3 & 1.948E-4 & 4.821E-4 & 3.688E-2 & 6.954E-2 & 5.521E-5 & 1.795E-5 & 3.121E-3 & 4.537E-3 & 2.643E-5 \\
4 & 1.970E-4 & 4.847E-4 & 3.688E-2 & 6.909E-2 & 5.297E-5 & 1.526E-5 & 3.120E-3 & 4.088E-3 & 7.933E-6 \\
5 & 1.989E-4 & 4.836E-4 & 3.678E-2 & 6.897E-2 & 5.110E-5 & 1.639E-5 & 3.221E-3 & 3.965E-3 & 4.712E-6 \\
6 & 1.999E-4 & 4.840E-4 & 3.675E-2 & 6.884E-2 & 5.010E-5 & 1.602E-5 & 3.248E-3 & 3.842E-3 & 2.091E-6 \\
7 & 2.008E-4 & 4.841E-4 & 3.677E-2 & 6.884E-2 & 4.917E-5 & 1.586E-5 & 3.234E-3 & 3.835E-3 & 2.057E-6 \\
8 & 2.013E-4 & 4.842E-4 & 3.676E-2 & 6.885E-2 & 4.872E-5 & 1.579E-5 & 3.236E-3 & 3.848E-3 & 1.307E-6 \\
9 & 2.016E-4 & 4.848E-4 & 3.677E-2 & 6.883E-2 & 4.837E-5 & 1.522E-5 & 3.230E-3 & 3.829E-3 & 7.243E-7 \\
10 & 2.020E-4 & 4.849E-4 & 3.678E-2 & 6.882E-2 & 4.798E-5 & 1.507E-5 & 3.224E-3 & 3.816E-3 & 6.376E-7 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Values and errors of the identified parameters from the first homotopy process of HomPINNs, and training losses during the first homotopy process of HomPINNs. The correct parameters are \(D_{A}=2.5\times 10^{-4}\), \(D_{S}=5\times 10^{-4}\), \(\rho=0.04\), and \(\mu=0.065\).
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{\(k\)} & \multicolumn{4}{c}{PARAM value} & \multicolumn{4}{c}{PARAM error} & \multirow{2}{*}{Train Loss} \\ \cline{2-5} \cline{6-9} & \(D_{A}\) & \(D_{S}\) & \(\rho\) & \(\mu\) & \(D_{A}\) & \(D_{S}\) & \(\rho\) & \(\mu\) & \\ \hline & 2.020E-4 & 4.849E-4 & 3.678E-2 & 6.882E-2 & 4.798E-5 & 1.507E-5 & 3.224E-3 & 3.816E-3 & 1.339E-5 \\
1 & 2.384E-4 & 5.057E-4 & 4.072E-2 & 6.424E-2 & 1.159E-5 & 5.699E-6 & 7.212E-4 & 7.581E-4 & 9.054E-6 \\
2 & 2.397E-4 & 5.059E-4 & 4.073E-2 & 6.428E-2 & 1.028E-5 & 5.889E-6 & 7.286E-4 & 7.199E-4 & 5.097E-6 \\
3 & 2.404E-4 & 5.061E-4 & 4.074E-2 & 6.429E-2 & 9.638E-6 & 6.098E-6 & 7.407E-4 & 7.072E-4 & 3.194E-6 \\
4 & 2.406E-4 & 5.063E-4 & 4.075E-2 & 6.430E-2 & 9.430E-6 & 6.298E-6 & 7.539E-4 & 7.009E-4 & 1.891E-6 \\
5 & 2.407E-4 & 5.064E-4 & 4.076E-2 & 6.430E-2 & 9.268E-6 & 6.360E-6 & 7.585E-4 & 7.009E-4 & 1.129E-6 \\
6 & 2.408E-4 & 5.063E-4 & 4.076E-2 & 6.430E-2 & 9.222E-6 & 6.339E-6 & 7.610E-4 & 7.009E-4 & 7.102E-7 \\
7 & 2.409E-4 & 5.064E-4 & 4.076E-2 & 6.431E-2 & 9.129E-6 & 6.360E-6 & 7.589E-4 & 6.945E-4 & 4.132E-7 \\
8 & 2.409E-4 & 5.064E-4 & 4.076E-2 & 6.431E-2 & 9.106E-6 & 6.396E-6 & 7.610E-4 & 6.945E-4 & 2.435E-7 \\
9 & 2.409E-4 & 5.063E-4 & 4.076E-2 & 6.432E-2 & 9.083E-6 & 6.350E-6 & 7.585E-4 & 6.818E-4 & 1.642E-7 \\
10 & 2.409E-4 & 5.064E-4 & 4.076E-2 & 6.432E-2 & 9.106E-6 & 6.360E-6 & 7.585E-4 & 6.818E-4 & 1.249E-7 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Values and errors of the identified parameters from the second homotopy process of HomPINN, and training losses during the second homotopy process of HomPINNs. The correct parameters are \(D_{A}=2.5\times 10^{-4}\), \(D_{S}=5\times 10^{-4}\), \(\rho=0.04\), and \(\mu=0.065\).
fields. | Ilyes Batatia, Lars L. Schaaf, Huajie Chen, Gábor Csányi, Christoph Ortner, Felix A. Faber | 2023-10-16T14:17:00Z | http://arxiv.org/abs/2310.10434v2 | # Equivariant Matrix Function Neural Networks
###### Abstract
Graph Neural Networks (GNNs), especially message-passing neural networks (MPNNs), have emerged as powerful architectures for learning on graphs in diverse applications. However, MPNNs face challenges when modeling non-local interactions in systems such as large conjugated molecules, metals, or amorphous materials. Although Spectral GNNs and traditional neural networks such as recurrent neural networks and transformers mitigate these challenges, they often lack extensivity, adaptability, generalizability, computational efficiency, or fail to capture detailed structural relationships or symmetries in the data. To address these concerns, we introduce Matrix Function Neural Networks (MFNs), a novel architecture that parameterizes non-local interactions through analytic matrix equivariant functions. Employing resolvent expansions offers a straightforward implementation and the potential for linear scaling with system size. The MFN architecture achieves state-of-the-art performance in standard graph benchmarks, such as the ZINC and TU datasets, and is able to capture intricate non-local interactions in quantum systems, paving the way to new state-of-the-art force fields. The code and the datasets will be made public.
## 1 Introduction
Graph Neural Networks (GNNs) have proven to be powerful architectures for learning on graphs on a wide range of applications. Various GNN architectures have been proposed, including message passing neural networks (MPNN) (Gilmer et al., 2017; Battaglia et al., 2018; Kipf & Welling, 2017a; Velickovic et al., 2018; Wu et al., 2020; Senior et al., 2018; Hu et al., 2019; Batzner et al., 2022) and higher-order equivariant MPNNs (Batatia et al., 2022b).
MPNNs struggle to model **non-local** interactions effectively due to computational constraints and over-smoothing (Di Giovanni et al., 2023). Spectral Graph Neural Networks attempt to address limitations of this kind by encoding the global structure of a graph using eigenvectors and eigenvalues of a suitable operator. These approaches predominantly focus on Laplacian matrices, exploiting the graph's inherent spectral features. Many spectral GNNs apply polynomial or rational filters (Bianchi et al., 2021; Gasteiger et al., 2018; Wang & Zhang, 2022; He et al., 2021; Defferrard et al., 2016; Zhu et al., 2021; Kreuzer et al., 2021) to eigenvalues of graph structures, reaching state-of-the-art accuracy on pure graph tasks. However, these methods often exhibit rigid architectures that require extensive feature engineering, potentially limiting their adaptability to various types of graphs. Moreover, they have been restricted to non-geometric graphs, making them unsuitable for molecules and materials.
Traditional neural network architectures, such as recurrent neural networks (Elman, 1990; Hochreiter & Schmidhuber, 1997; Cho et al., 2014; Graves, 2013) and transformers (Vaswani et al., 2017) also face challenges when modeling non-local interactions. While transformers can capture non-local dependencies through their self-attention mechanisms, they come at a significant computational cost due to their quadratic complexity with respect to the input sequence length. Furthermore, transformers lack inherent structural relationships or positional information within input data, necessitating the use of additional techniques, such as positional encodings (Shaw et al., 2018).
In chemistry and material science tasks, some models incorporate **long-range** interactions through electrostatics (Grisafi & Ceriotti, 2019; Behler, 2021; Unke & Meuwly, 2019), dispersion, or reciprocal space (Gao & Remsing, 2022; Huguenin-Dumittan et al., 2023; Kosmala et al., 2023). However, no existing architecture effectively addresses **non-local** interactions, where effects can propagate
over extensive distances through electronic delocalization, spin coupling, or other many-body non-local mechanisms. This is particularly problematic in systems such as large conjugated molecules, amorphous materials, or metals.
Consequently, there is a need for new neural network architectures that can efficiently and accurately model complex non-local many-body interactions, while addressing the limitations of current approaches. We propose **Matrix Function Networks** (MFN) as a possible solution to this challenge. Concretely, we make the following contributions.
* We introduce Matrix Function Networks (MFNs), a new graph neural network architecture able to model non-local interactions in a structured, systematic way and with potentially linear scaling with the size of the input.
* We introduce the resolvent expansion as a convenient and efficient mechanism to learn a general matrix function.
* We demonstrate the ability of our architecture to learn non-local interactions on a dataset of challenging non-local quantum systems.
* We show that MFNs achieve state-of-the-art performance on ZINC and TU graph datasets.
## 2 Related Work
**Overlap matrix fingerprints** Zhu et al. (2016) introduced overlap matrix fingerprints (OMFPs), a vector of spectral features of an atomic environment or, more generally, a point cloud. Given a point cloud, an overlap operator (the identity projected onto an atomic orbital basis) is constructed, and its ordered eigenvalues (or other invariants) are taken as the features of that point cloud. Although a theoretical understanding of OMFPs is still lacking, computational experiments have shown excellent properties as a distance measure (Zhu et al., 2016; Parsaeifard et al., 2021).
**Spectral Graph Neural Networks** Spectral GNNs (Wu et al., 2021) are GNNs that use spectral filters operating on the Fourier decomposition of the Laplacian operator of the graph. Spectral GNNs are categorized by the type of filters they apply to the spectrum of the Laplacian: ChebyNet (Defferrard et al., 2016a) approximates polynomial functions of the Laplacian using Chebyshev expansions, GPRGNN (Chien et al., 2021) directly fits the coefficients of a fixed polynomial, while ARMA (Bianchi et al., 2021) uses rational filters.
**Equivariant Neural Networks** Equivariant neural networks are the general class of neural networks that respect certain group symmetries (Bronstein et al., 2021). Notably, convolutional neural networks (CNNs) (LeCun et al., 1989) are equivariant to translations, while \(G\)-convolutions (Cohen and Welling, 2016; Cohen et al., 2018; Kondor and Trivedi, 2018) generalized CNNs to equivariance under compact groups. Lately, equivariant message passing neural networks (Anderson et al., 2019; Satorras et al., 2021; Brandstetter et al., 2022; Batzner et al., 2022; Batatia et al., 2022b;a) have emerged as a powerful architecture for learning on geometric point clouds. Most of these architectures have been shown to lie in a common design space (Batatia et al., 2022a, 2023).
**Hamiltonian Learning** A natural application of equivariant neural network architectures is machine learning of (coarse-grained) Hamiltonian operators arising in electronic structure theory. The task of parameterizing the mapping from atomic structures to Hamiltonian operators is currently receiving increasing interest because of the potential extension in accessible observables over purely mechanistic models. The recent works of Nigam et al. (2022) and Zhang et al. (2022) introduce such parameterizations in terms of a modified equivariant Atomic Cluster Expansion (ACE) (Drautz, 2020), a precursor to the architecture we employ in the present work. Alternative approaches include (Hegde and Bowen, 2017; Schutt et al., 2019; Unke et al., 2021a; Gu et al., 2023).
## 3 Background
### Spectral Graph Neural Networks
We briefly review spectral graph neural networks and explain their limitations that our work will overcome. Consider a graph \(\mathcal{G}=(X,\mathcal{E})\) with node set \(X\) and edge set \(\mathcal{E}\). A graph defined purely by
its topology (connectivity) is called a pure graph. Let \(n\) denote the number of nodes in the graph, and let \(\mathbf{A}\in\mathbb{R}^{n\times n}\) be its adjacency matrix. Define a vector of ones as \(\mathbf{1}_{n}\in\mathbb{R}^{n}\). The degree matrix of the graph is \(\mathbf{D}=\text{diag}(\mathbf{A}\mathbf{1}_{n})\), and the Laplacian matrix is \(\mathbf{L}=\mathbf{D}-\mathbf{A}\). The Laplacian is a symmetric positive semidefinite matrix and admits a spectral decomposition, \(\mathbf{L}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\), where \(\mathbf{U}\) is an orthogonal matrix of eigenvectors and \(\mathbf{\Lambda}\) is a diagonal matrix of eigenvalues.
A popular approach to learning functions on graphs is to use convolutional neural networks on the graph. Spectral graph convolutional networks (SGCNs) are a class of graph convolutional networks that use the spectral decomposition of the Laplacian matrix to define convolutional filters. Let \(s\) be a function of a graph \(\mathcal{G}\) and \(t\) be a convolutional filter. SGCNs take advantage of the spectral decomposition of the Laplacian matrix of graph \(\mathcal{G}\) to compute the convolution of \(s\) and \(t\):
\[s^{\prime}=t\star_{\mathcal{G}}s=\mathbf{U}((\mathbf{U}^{T}t)\odot(\mathbf{U} ^{T}s))=f_{\theta}(\mathbf{L})s, \tag{1}\]
where \(f_{\theta}\) is a matrix function of the Laplacian matrix \(\mathbf{L}\) of the graph \(\mathcal{G}\) and \(\odot\) the Hadamard product. Various works have proposed different matrix functions \(f_{\theta}\), such as polynomial functions (Defferrard et al., 2016), rational functions (Bianchi et al., 2021), or neural networks (Wu et al., 2021). In a recent work, Yang et al. (2023) proposed to extend filter functions to matrix functions of other **fixed** matrix representations of the graph. Let \(\mathcal{H}(\mathcal{G})\) denote the space of all graph operators on graph \(\mathcal{G}\), i.e., the space of linear self-adjoint and permutation equivariant operators from nodes to nodes. For any **fixed**\(\mathbf{H}\in\mathcal{H}(\mathcal{G})\), generalized spectral GNNs determine filters based on \(\mathbf{H}\) with a matrix function:
\[s^{\prime}=f_{\theta}(\mathbf{H})s, \tag{2}\]
where \(f_{\theta}\) is a matrix function of \(\mathbf{H}\). Two non-isomorphic graphs can share the same spectrum of their Laplacian operators, while they have different spectra for other graph operators (Johnson and Newman, 1980). Therefore, the use of other graph operators as a basis for matrix functions can be beneficial for learning functions on graphs that are strictly more expressive than Laplacian SGCNs. However, this approach has two main **limitations**.
* **Expressiveness:** Performing only convolutions with a **fixed** graph operator \(\mathbf{H}\) limits the expressivity of the model. Choosing the most expressive graph operator requires problem-dependent feature engineering.
* **Lie-group symmetries:** The approaches proposed so far have been restricted to **pure** graphs and in particular do not use additional symmetries for graphs embedded in a vector space. For example, graphs embedded in \(\mathbb{R}^{3}\) often lead to \(E(3)\)-equivariant learning tasks.
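To make the filtering operation of Equations 1 and 2 concrete, the following is a dense PyTorch sketch of a polynomial filter applied through the eigendecomposition of a fixed graph operator (an illustrative reference implementation, not a scalable one; the function name and the choice of the Laplacian as the operator are assumptions):

```python
import torch

def spectral_filter(H, s, coeffs):
    """Apply f_theta(H) s for a polynomial f_theta(x) = sum_a coeffs[a] x^a.

    H: (n, n) self-adjoint graph operator (e.g. the Laplacian L = D - A);
    s: (n, d) node signal.  Dense reference implementation of Eq. (2).
    """
    lam, U = torch.linalg.eigh(H)                    # H = U diag(lam) U^T
    f_lam = sum(c * lam ** a for a, c in enumerate(coeffs))
    return U @ (f_lam.unsqueeze(-1) * (U.T @ s))     # U f(Lambda) U^T s
```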
To overcome these limitations, we propose MFNs in Section 4, which allow parameterization of both the graph operator \(\mathbf{H}\) and the matrix function \(f_{\theta}\), and can be formulated to preserve the equivariance under all known group actions. Since a learnable operator \(\mathbf{H}\) prevents the precomputation of diagonalization of \(\mathbf{H}\) during training, we also introduce a method that avoids diagonalization and allows (in principle) linear scaling with the number of nodes.
### Equivariant Message Passing Neural Network
Equivariant Message Passing Neural Networks (MPNNs) (Batzner et al., 2022; Batatia et al., 2022) are graph neural networks that operate on graphs \(\mathcal{G}=(X,\mathcal{E})\) embedded in a vector space \(V\). The nodes \(x_{i}\in X\) are no longer only a list of indices, but belong to a configuration space \(\Omega\) that extends the vector space \(V\). For example, in an atomistic point cloud, \(x_{i}=(i,\mathbf{r}_{i},\theta_{i})\in\Omega:=\mathbb{N}\times\mathbb{R}^{3}\times\mathbb{Z}\) describes the position and chemical species of each atom, through which the graph is embedded into \(V:=\mathbb{R}^{3}\). The case of a **pure** graph is recovered by setting \(\Omega=\mathbb{N}\). We are interested in learning graph maps of the form
\[\Phi\colon\mathcal{G}\to Z \tag{3}\]
where \(Z\) is an abstract target space, usually a vector space. As the input is a graph, we impose the mapping to be permutation invariant (invariant under relabeling of the nodes). In many applications, the target properties satisfy additional symmetries: When a group \(G\) acts on both \(\Omega\) (and, therefore, on \(\mathcal{G}\)) and \(Z\), we say that \(\Phi\) is \(G\)-equivariant if,
\[\Phi\circ g=\rho(g)\Phi\qquad\forall g\in G, \tag{4}\]
where \(\rho\) is a representation of the group on the vector space \(Z\). A typical strategy is then to embed the nodes \(x_{i}\in X\) into a feature space, where a suitable representation of the group is available.
We represent the state of each node \(\sigma_{i}\) in the layer \(t\) of the MPNN by a tuple,
\[\sigma_{i}^{(t)}=(x_{i},\mathbf{h}_{i}^{(t)}), \tag{5}\]
where \(x_{i}\) defines the collection of node attributes of the graph as defined previously and \(\mathbf{h}_{i}^{(t)}\) are its learnable features. A forward pass of the network consists of multiple _message construction_, _update_, and _readout_ steps. During message construction, a message \(\mathbf{m}_{i}^{(t)}\) is created for each node by pooling over its neighbors,
\[\mathbf{m}_{i}^{(t)}=\bigoplus_{j\in\mathcal{N}(i)}M_{t}\big{(}\sigma_{i}^{(t)}, \sigma_{j}^{(t)}\big{)},\quad\mathbf{h}_{i}^{(t+1)}=U_{t}\big{(}\sigma_{i}^{(t)}, \mathbf{m}_{i}^{(t)}\big{)},\quad\Phi(\mathcal{G})=\phi_{\text{out}}\bigg{(}\Big{\{} \big{\{}\mathcal{R}_{t}\big{(}\sigma_{i}^{(t)}\big{)}\big{\}}_{i}\Big{\}}_{t} \bigg{)}. \tag{6}\]
where the individual operations have the following meaning:
* \(M_{t}\) is a learnable message function and \(\bigoplus_{j\in\mathcal{N}(i)}\) is a learnable, permutation invariant pooling operation over the neighbors of atom \(i\) (e.g., a sum);
* \(U_{t}\) is a learnable update function, transforming the message \(\mathbf{m}_{i}^{(t)}\) into new features \(\mathbf{h}_{i}^{(t+1)}\);
* \(\mathcal{R}_{t}\) is a learnable node readout, mapping node states \(\sigma_{i}^{(t)}\) to the per-node outputs;
* \(\phi_{\text{out}}\) is a global readout map, typically \(\phi_{\text{out}}\big{(}\{\{\mathcal{R}_{t}(\sigma_{i}^{(t)})\}_{i}\}_{t}\big{)}=\sum_{i,t}\mathcal{R}_{t}(\sigma_{i}^{(t)})\).
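A minimal sketch of one layer instantiating Equation 6 with two-body messages and sum pooling (a generic illustration in PyTorch, not any specific published architecture; the class name and MLP choices are assumptions):

```python
import torch
import torch.nn as nn

class SimpleMPNNLayer(nn.Module):
    """One step of Eq. 6: m_i = sum_{j in N(i)} M(h_i, h_j), then update."""

    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())

    def forward(self, h, edge_index):
        # h: (n, dim) node features; edge_index: (2, E) directed edges j -> i
        src, dst = edge_index
        m_ij = self.msg(torch.cat([h[dst], h[src]], dim=-1))
        m = torch.zeros_like(h).index_add_(0, dst, m_ij)   # sum pooling
        return self.upd(torch.cat([h, m], dim=-1))
```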
Equivariant MPNNs are widely used for learning properties of 3D point clouds, such as molecules and materials. However, there are several limitations to their expressiveness,
* **Non-locality:** MPNNs are restricted to a finite interaction range, dictated by the number of iterations \(T\). Many properties in physics are long-range, which would require \(T\to\infty\). Adding a large number of layers leads to high computational overhead and poor expressivity due to oversmoothing (Di Giovanni et al., 2023).
* **Correlation order:** Most MPNNs use two-body messages, which means that the pooling in Equation 6 is just a sum. The MACE architecture (Batatia et al., 2022b) extended the message construction to an arbitrary order, but increasing the order is still computationally demanding, especially for large \(T\).
## 4 Matrix Function Neural Networks
### General Matrix Function Neural Networks
Our MFN models act in the space of group equivariant matrix operators of the graph, \(\mathcal{H}(\mathcal{G})^{G}\), with \(G\) a reductive Lie group. In the same setting as in section 3.2, let \(\mathcal{G}\) be an undirected graph with \(n\) nodes, and the states \(\sigma_{i}\) are embedded in a configuration space \(\Omega\). The architecture operates in three stages at each layer, the **matrix construction**, the **matrix function**, and the **update**.
**Matrix construction** The space of matrix operators on graphs, \(\mathcal{H}(\mathcal{G})^{G}\), corresponds to the space of operators that are (1) **self-adjoint**, (2) **permutation equivariant**, and (3) **\(G\)-equivariant**. We consider operators expanded in a basis \(\{\phi_{m}\}_{m}\) with an available group action on each basis element. These operators correspond to square matrices \(\mathbf{H}\in\mathbb{C}^{bn\times bn}\) after basis truncation, where \(b\) is the basis size. Matrix blocks \(\mathbf{H}_{ij}\) have dimensions \(b\times b\), and the basis is indexed by a tuple \((m_{1},m_{2})\). An extra channel dimension, denoted \(c\), is introduced to learn multiple operators. Matrix entries in \(\mathbf{H}\) can in general be \(G\)-equivariant functions of the neighborhoods \(\mathcal{N}(i)\) and \(\mathcal{N}(j)\),
\[H^{(t)}_{cij,m_{1}m_{2}}=\sum_{\hat{c}}W^{(t)}_{\hat{c}c}\phi^{(t)}_{\hat{c}m_ {1}m_{2}}(\{\sigma_{k}^{(t)}\}_{k\in\mathcal{N}(i)\cup\mathcal{N}(j)}). \tag{7}\]
Figure 1: **Matrix function network architecture. Illustrating matrix construction and non-locality of matrix functions on a chain graph.**
The choice of basis \(\phi\) depends on the application (see the SI for detailed examples). In the case of the rotation group, any such function can be approximated with arbitrary accuracy using an ACE or MACE layer (Zhang et al., 2022; Batatia et al., 2022b). This result has been generalized to any reductive Lie group using the \(G\)-MACE architecture (Batatia et al., 2023). It is important to note that one could use any equivariant architecture to learn these matrix entries; however, more expressive approximators will result in better coverage of the operator space \(\mathcal{H}(\mathcal{G})^{G}\) and therefore better general expressivity. The full matrix inherits the equivariance of the basis,
\[\mathbf{H}_{ij}\circ g=\boldsymbol{\rho}(g)\mathbf{H}_{ij}\boldsymbol{\rho}^{ \ast}(g),\quad\forall g\in G \tag{8}\]
where \(\rho\) is an orthogonal matrix that denotes the group action of \(G\) on the basis element \(\phi\).
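For intuition, the following is a drastically simplified sketch of the construction in Equation 7 for the scalar case \(b=1\) with invariant node features, so that the group action in Equation 8 is trivial (the class name and plain linear feature maps are illustrative assumptions; a \(G\)-equivariant basis would replace them in the full architecture):

```python
import torch
import torch.nn as nn

class ScalarMatrixConstruction(nn.Module):
    """Simplified instance of Eq. (7): b = 1, invariant node features.

    Diagonal entries come from node features, off-diagonals from pairs
    of connected nodes; symmetrization enforces self-adjointness.
    """

    def __init__(self, dim, channels):
        super().__init__()
        self.diag = nn.Linear(dim, channels)
        self.offd = nn.Linear(2 * dim, channels)

    def forward(self, h, edge_index):
        n = h.shape[0]
        src, dst = edge_index
        H = torch.zeros(self.diag.out_features, n, n)
        idx = torch.arange(n)
        H[:, idx, idx] = self.diag(h).T                       # (c, n)
        H[:, dst, src] = self.offd(torch.cat([h[dst], h[src]], -1)).T
        return 0.5 * (H + H.transpose(-1, -2))                # H = H^T
```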
**Matrix function** The central operation in MFNs, which introduces long-range many-body effects, is the matrix function. Any continuous function \(f_{\theta}:\mathbb{R}\rightarrow\mathbb{R}\), with parameters \(\theta\), can be interpreted as acting on self-adjoint matrices \(\mathbf{H}\) via their spectral decomposition: if \(\mathbf{H}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T}\) with \(\mathbf{U}\) orthogonal and \(\mathbf{\Lambda}\) diagonal, then
\[f_{\theta}(\mathbf{H})=\mathbf{U}f_{\theta}(\mathbf{\Lambda})\mathbf{U}^{ \mathbf{T}}. \tag{9}\]
Alternatively, if the function is analytic, it can be extended to a matrix function by converting its power series into a formal matrix power series, \(f_{\theta}(\mathbf{H})=\sum_{a}c_{a}(\theta)\mathbf{H}^{a}\), where \(c_{a}(\theta)\) are the coefficients of the power series of \(f_{\theta}\). An essential observation is that **any** continuous matrix function \(f_{\theta}\) preserves equivariance,
\[f_{\theta}(\mathbf{H}\circ g)=\boldsymbol{\rho}(g)f_{\theta}(\mathbf{H}) \boldsymbol{\rho}^{\ast}(g),\quad\forall g\in G. \tag{10}\]
The matrix function can be related to a high-order many-body equivariant function via the Cayley-Hamilton theorem. The eigendecomposition in Equation 9 is responsible for the non-locality of our approach. In practice, computing matrix functions is expensive as it requires diagonalization, scaling as \(n^{3}\) with the number of nodes. Many approaches are available to approximate matrix functions, such as Chebyshev polynomials or rational approximations, which can leverage potentially cheaper evaluation schemes. Furthermore, the matrix \(\mathbf{H}\) is sparse in many applications, which can be further exploited to reduce the computational cost. To this end, we propose to employ a resolvent expansion to parameterize \(f_{\theta}\), detailed in Section 4.2. Similar approaches have been successfully applied to large matrices in other fields such as electronic structure calculations (Lin et al., 2009; 2013).
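A dense reference realization of Equation 9 combined with the diagonal update of Equation 11 introduced below (assumptions: per-channel matrices stacked along the leading dimension, and \(O(n^{3})\) diagonalization rather than the resolvent expansion of Section 4.2):

```python
import torch

def matrix_function_diag(H, f):
    """Diagonal of f(H) for a stack of self-adjoint matrices, via Eq. (9).

    H: (channels, n, n); f: elementwise function on eigenvalues,
    e.g. matrix_function_diag(H, torch.tanh).  Dense O(n^3) reference;
    selected inversion would avoid forming the full decomposition.
    """
    lam, U = torch.linalg.eigh(H)                     # (c, n), (c, n, n)
    # [f(H)]_ii = sum_k U_ik f(lam_k) U_ik, per channel
    diag = torch.einsum('cik,ck,cik->ci', U, f(lam), U)
    return diag.T                                     # (n, channels)
```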
**Update** Multiple updates can be defined from the matrix function. All of the following updates differ fundamentally from the standard spectral GNN update, which is a filtering operation.
1. The **diagonal update** updates the state of each node with non-local features extracted from the diagonal blocks of the matrix function, \[\boldsymbol{h}_{ic}^{(t+1)}=\sum_{\bar{c}}w_{c\bar{c}}^{(t)}f_{\theta}^{(t)}(\mathbf{H}_{\bar{c}}^{(t)})_{ii}\] (11) This method is the most computationally efficient since selected inversion techniques (Lin et al., 2009) can be employed to efficiently evaluate the diagonal blocks of a matrix function; cf. Section 4.2.
2. The **dense update** utilizes the entire matrix, including all **off-diagonal terms**, to update the next matrix function, \[\boldsymbol{h}_{ic}^{(t+1)}=\sum_{\bar{c}}w_{c\bar{c}}^{(t)}f_{\theta}^{(t)}( \mathbf{H}_{\bar{c}}^{(t)}+f_{\theta}^{(t-1)}(\mathbf{H}_{\bar{c}}^{(t-1)}))_ {ii}\] (12) This update can not be performed using selected inversion techniques, but it can offer additional expressiveness guarantees.
3. The **sparse update:** uses only the parts of the matrix corresponding to the connected nodes to update the nodes and edges of the graph to obtain the matrix function in the next layer, \[\boldsymbol{h}_{ic}^{(t+1)}=\sum_{\bar{c}}w_{c\bar{c}}^{(t)}f_{\theta}^{(t)}( \mathbf{H}_{\bar{c}}^{(t)}+\{f_{\theta}^{(t-1)}(\mathbf{H}_{\bar{c}}^{(t-1)})_ {ij}\}_{j\in\mathcal{N}(i)})_{ii}.\] (13)
**Readout** The readout phase is the same as the usual MPNN readout in Equation 6.
### Resolvent parameterization of matrix functions
The evaluation of the matrix function in Equation 9 is the practical bottleneck of our method. The cost of the evaluation depends on the choice of parameterization of the univariate function \(f_{\theta}\). For a general analytic \(\tilde{f}:\mathbb{C}\rightarrow\mathbb{C}\), resolvent calculus allows us to represent
\[\tilde{f}(\mathbf{H})=\oint_{\mathcal{C}}\tilde{f}(z)(z\mathbf{I}-\mathbf{H})^{- 1}\frac{dz}{2\pi i}, \tag{14}\]
where \(\mathcal{C}\) is a curve encircling the eigenvalues of \(\mathbf{H}\) and excluding any poles of \(\tilde{f}\). Approximating the contour integration with a quadrature rule with nodes \(z_{s}\) and weights \(\tilde{w}_{s}\) yields \(\tilde{f}(\mathbf{H})\approx\sum_{s}\tilde{w}_{s}\tilde{f}(z_{s})(z_{s}\mathbf{I}- \mathbf{H})^{-1}\), and merging \(w_{s}:=\tilde{w}_{s}\tilde{f}(z_{s})\) we arrive at the parameterization
\[f_{\theta}(\mathbf{H})=\sum_{s}w_{s}(z_{s}\mathbf{I}-\mathbf{H})^{-1}. \tag{15}\]
Pole expansions for evaluating matrix functions have a long and successful history, especially when the arguments are sparse matrices (Higham, 2008). The novelty in Equation 15 over the standard usage of pole expansions in computing matrix functions (Higham, 2008) is that both the _weights_ \(w_{s}\) and the _poles_ \(z_{s}\) are now learnable parameters.
The derivation shows that in the limit of infinitely many pole-weight pairs \((z_{s},w_{s})\) any analytic matrix function can be represented. Since analytic functions are dense in the space of continuous functions, all continuous matrix functions can be represented in that limit as well (at the same time letting the poles approach the spectrum). In practice, the poles should be chosen with non-zero imaginary parts in order to avoid the spectrum of \(\mathbf{H}\), which is real since \(\mathbf{H}\) is assumed to be self-adjoint. Therefore, we choose conjugate pole-weight pairs \((w_{s},z_{s})\) and \((w_{s}^{*},z_{s}^{*})\) to ensure that \(f_{\theta}\) is real when restricted to real arguments.
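A minimal sketch of this parameterization is given below, with learnable real parts of the poles, a fixed imaginary part \(\gamma\) (cf. Section 4.3), and conjugate pole-weight pairs so the output is real for real self-adjoint \(\mathbf{H}\). The dense `inv` call stands in for whatever resolvent solver is used in practice, and the parameter shapes are assumptions.

```python
# A minimal sketch of the pole expansion in Equation 15 with learnable
# weights and poles, using conjugate pairs so the result is real.
import torch

class ResolventMatrixFunction(torch.nn.Module):
    def __init__(self, num_poles: int = 4, gamma: float = 0.5):
        super().__init__()
        self.w_re = torch.nn.Parameter(torch.randn(num_poles))
        self.w_im = torch.nn.Parameter(torch.randn(num_poles))
        self.z_re = torch.nn.Parameter(torch.randn(num_poles))
        self.gamma = gamma  # fixed Im(z) controls locality, cf. Equation 16

    def forward(self, H: torch.Tensor) -> torch.Tensor:
        n = H.shape[-1]
        I = torch.eye(n, dtype=torch.cfloat)
        out = torch.zeros(n, n, dtype=torch.cfloat)
        for wr, wi, zr in zip(self.w_re, self.w_im, self.z_re):
            w = torch.complex(wr, wi)
            z = torch.complex(zr, torch.tensor(self.gamma))
            R = torch.linalg.inv(z * I - H.to(torch.cfloat))  # resolvent
            out = out + w * R + w.conj() * R.conj()           # conjugate pair
        return out.real  # imaginary parts cancel by construction
```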
**Linear scaling cost** The pole expansion framework is the first key ingredient in linear scaling electronic structure methods (Goedecker, 1999) such as PEXSI (Lin et al., 2009; 2013). The second ingredient is the selected inversion of the resolvent. Instead of computing the full inverse, \((z\mathbf{I}-\mathbf{H})^{-1}\), one first computes a sparse \(LDL^{*}\) factorization and then selectively computes only the diagonal entries of the resolvent. The bottleneck in this approach is the \(LDL^{*}\) factorization. For a full matrix, it scales as \(O(n^{3})\) operations and \(O(n^{2})\) memory. The complexity improves considerably for sparse matrices. Suppose that the sparsity pattern is \(d\)-dimensional, corresponding to a topologically \(d\)-dimensional graph; e.g., the cumulenes in Section 5.1 are topologically one-dimensional despite being embedded in \(\mathbb{R}^{3}\). Using nested dissection ordering to minimize the fill-in, the cost of the \(LDL^{*}\) factorization reduces to \(O(n)\) for \(d=1\) (e.g., natural language processing and quasi-1D molecular systems such as carbon nanotubes); \(O(n^{3/2})\) operations and \(O(n\log n)\) memory for \(d=2\) (e.g., image recognition); and \(O(n^{2})\) operations and \(O(n^{4/3})\) memory for \(d=3\). The final step to reduce the computational cost of our architecture to _linear scaling_ is to replace the \(LDL^{*}\) factorization with an incomplete factorization, as proposed by Etter (2020) in the context of electronic structure calculations. They demonstrated that the \(LDL^{*}\) factorization inherits the decay of the resolvents and, therefore, an incomplete factorization can be performed with controllable error. However, this scheme remains a theoretical idea, and incorporating it into our architecture results in a compromise between computational cost and non-locality that requires significant further investigation.
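To make the quantity computed by selected inversion concrete, the following reference-only sketch extracts the diagonal of a resolvent of a sparse \(\mathbf{H}\) via a sparse LU factorization. It performs \(n\) solves and is therefore not linear scaling; true selected inversion (e.g., PEXSI) extracts these entries directly from the factorization. All names are ours.

```python
# Reference-only: diagonal of (z I - H)^{-1} for sparse H via sparse LU.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def resolvent_diagonal(H: sp.csc_matrix, z: complex) -> np.ndarray:
    n = H.shape[0]
    A = (z * sp.identity(n, format='csc') - H).astype(complex).tocsc()
    lu = spla.splu(A)                    # sparse factorization
    diag = np.empty(n, dtype=complex)
    e = np.zeros(n, dtype=complex)
    for i in range(n):                   # O(n) solves; selected inversion
        e[i] = 1.0                       # would read these entries directly
        diag[i] = lu.solve(e)[i]
        e[i] = 0.0
    return diag
```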
### Expressivity of Matrix Function Networks
**Non-local interaction**
Purely formally, one may think of a matrix function \(\tilde{f}(\mathbf{H})\) as an infinite power series. This suggests that MFNs are inherently non-local and exhibit a convolution-like structure, similar to message-passing methods. This is of interest for modeling causal relationships or non-local interactions by proxy, such as in chemistry or natural language processing. In these cases, the propagation of local effects over long distances results in multiscale effects that are effectively captured by our method.
The degree of non-locality of the interaction can be precisely quantified. If \(\mathbf{H}\) has a finite interaction range, then the Combes-Thomas theorem (Combes & Thomas, 1973) implies that \(|(z\mathbf{I}-\mathbf{H})_{i,j}^{-1}|\leq Ce^{-\gamma_{z}d_{ij}}\), where \(\gamma_{z}=\mathrm{dist}(z,\sigma(\mathbf{H}))\) and \(d_{ij}\) is the length of the shortest path from node \(i\) to node \(j\). Since we have taken \(\mathbf{H}\) self-adjoint with real spectrum, an estimate for \(\gamma_{z}\) is \(\Im z\). As a result, if we constrain the poles in the parameterization of \(f_{\theta}\) to be at \(\Im z_{s}=\gamma\), then the resulting matrix function will satisfy
\[\big{|}[f_{\theta}(\mathbf{H})]_{i,j}\big{|}\leq Ce^{-\gamma d_{ij}}. \tag{16}\]
Therefore, the degree of locality can be controlled by constraining \(\Im z_{s}\) to be some fixed value. While Equation 16 only gives an upper bound, it is not difficult to see that it can in fact be attained for a suitable choice of matrix function. In practice, however, \(f_{\theta}\) can be seen as approximating an unknown target \(\tilde{f}\) with learnable weights \(w_{s}\). If \(\tilde{f}\) is analytic in a neighborhood of \(\sigma(\mathbf{H})\), then \(\big{|}[\tilde{f}(\mathbf{H})]_{i,j}\big{|}\leq Ce^{-\tilde{\gamma}d_{ij}}\), where \(\tilde{\gamma}\) measures how smooth \(\tilde{f}\) is (the distance of the poles of \(\tilde{f}\) to \(\sigma(\mathbf{H})\)). For our parameterization \(f_{\theta}\) we therefore expect the same behavior \(\gamma=\tilde{\gamma}\) in the pre-asymptotic regime of moderate \(d_{ij}\).
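This decay is easy to verify numerically. The snippet below, purely illustrative, builds the adjacency matrix of a path graph and checks that the entries of a single resolvent decay exponentially along the chain, with a faster rate for poles further from the real axis.

```python
# A small numerical check of the decay bound in Equation 16 on a 1D chain.
import numpy as np

n = 60
H = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # path graph
for gamma in (0.2, 1.0):
    R = np.linalg.inv(1j * gamma * np.eye(n) - H)   # pole at z = i*gamma
    decay = np.abs(R[0, :10])                        # |R_{0j}| along the chain
    # roughly constant log-slope, larger for larger gamma
    print(gamma, np.round(-np.diff(np.log(decay)), 2))
```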
**Geometric expressiveness**
The WL-test quantifies a network's ability to distinguish non-isomorphic graphs; its extension to graphs embedded in vector spaces is known as the geometric WL-test (Joshi et al., 2023). The expressivity in the following discussion adheres to these definitions, which are of direct practical relevance for graphs embedded in \(\mathbb{R}^{3}\). Due to the matrix function's product structure, a one-layer MFN is directly equivalent to an equivariant MPNN with linear updates and infinitely many layers. For a one-layer MFN with a single matrix channel and two-body matrix entries, the expressiveness matches that of a two-body equivariant MPNN with **infinite layers** and **linear updates**. When matrix entries incorporate features beyond two-body, the one-layer MFN becomes more expressive than its two-body counterpart.
### Matrix normalization
Batch normalization (Ioffe and Szegedy, 2015) and layer normalization (Ba et al., 2016) are two techniques widely used in deep learning to stabilize training, improve convergence, and provide some form of regularization.
In the context of MFNs, a similar normalization strategy is needed to ensure stable training and improved convergence. Instead of normalizing the features directly as conventional normalization layers do, we normalize the eigenvalues of a set of \(\mathbf{H}\) matrices with batch dimension \([\text{batch},\text{channels},n,n]\), where \(n\) is the size of the matrix. The aim is to adjust their mean and variance to 0 and 1, respectively, much like standardization in traditional batch or layer normalization. For the batch matrix norm, the mean is taken across the batch dimension, and for the layer matrix norm, the normalization is taken across the channel dimension. The mean and the variance of the eigenvalues are computed using the following formulas,
\[\mathbb{E}(\Lambda)=\frac{\text{tr}(\mathbf{H})}{n},\quad\text{Var}(\Lambda) =\frac{\text{tr}(\mathbf{H}^{2})}{n-1}-\frac{\text{tr}(\mathbf{H})^{2}}{n(n-1)} \tag{17}\]
This normalization of eigenvalues ensures that the spectral properties of the graph used for representation in the MFN are effectively standardized, contributing to better training stability and convergence.
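A minimal sketch of this normalization is shown below. It normalizes each matrix's spectrum individually using the trace identities of Equation 17; the batch and layer variants described above would instead reduce the mean and variance over the batch or channel dimension, which we omit here for brevity. Names and shapes are assumptions.

```python
# A minimal sketch of Equation 17: shift and scale H so its spectrum has
# mean 0 and variance 1, without diagonalizing (per-matrix simplification).
import torch

def matrix_norm(H: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # H: (batch, channels, n, n), self-adjoint
    n = H.shape[-1]
    tr = torch.diagonal(H, dim1=-2, dim2=-1).sum(-1)        # tr(H)
    tr2 = torch.diagonal(H @ H, dim1=-2, dim2=-1).sum(-1)   # tr(H^2)
    mean = tr / n
    var = tr2 / (n - 1) - tr ** 2 / (n * (n - 1))           # Equation 17
    I = torch.eye(n, device=H.device, dtype=H.dtype)
    return (H - mean[..., None, None] * I) / torch.sqrt(var.clamp_min(eps))[..., None, None]
```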
### Multivariate Matrix function
As MFNs extract features from multiple operators, it is useful to extend the univariate matrix function in Equation 9 to a multivariate matrix function. The space of multivariate functions \(\mathcal{F}(\mathcal{H}(\mathcal{G})\times\cdots\times\mathcal{H}(\mathcal{G}))\) is isomorphic to the closure of the span of the tensor product of functions of single operators, \(\mathcal{F}(\mathcal{H}(\mathcal{G}))^{\otimes n}\). We call the number of variables \(n\) the correlation order of the matrix function. The resolvent expansion can be generalized to the multivariate matrix function, using matrix products of resolvents,
\[f(\mathbf{H}_{1},...,\mathbf{H}_{n})=\frac{1}{\left(2\pi i\right)^{n}}\int \cdots\iint_{\partial D_{1}\times\cdots\times\partial D_{n}}\frac{f(z_{1}, \ldots,z_{n})}{\left(z_{1}\mathbf{I}-\mathbf{H}_{1}\right)\cdots\left(z_{n} \mathbf{I}-\mathbf{H}_{n}\right)}dz_{1}\cdots dz_{n} \tag{18}\]
Multivariate matrix functions create higher order features. A one-layer MFN with a matrix function of correlation order \(n\) is as expressive as an \((n+1)\)**-body** equivariant MPNN with **infinite layers** and **linear updates**. The space \(\mathcal{F}(\mathcal{H}(\mathcal{G}))^{\otimes n}\) is of very high dimension and it is possible to use standard compression techniques to find more efficient approximations, such as tensor decomposition (Darby et al., 2023). The use of linear combinations of matrices in Equation 7 approximates some of this space. We leave further exploration of these kinds of networks for future work.
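As an illustration, a correlation-order-2 matrix function can be discretized as a double pole expansion with products of resolvents, following Equation 18; the sketch below uses fixed poles and weights, whereas the full model would learn them. All names are assumptions.

```python
# A minimal sketch of a correlation-order-2 (bivariate) matrix function as a
# weighted sum over products of resolvents of two operators H1 and H2.
import torch

def bivariate_matrix_function(H1, H2, poles1, poles2, w):
    # w[s, t] couples pole poles1[s] of H1 with pole poles2[t] of H2
    n = H1.shape[-1]
    I = torch.eye(n, dtype=torch.cfloat)
    out = torch.zeros(n, n, dtype=torch.cfloat)
    for s, z1 in enumerate(poles1):
        R1 = torch.linalg.inv(z1 * I - H1.to(torch.cfloat))
        for t, z2 in enumerate(poles2):
            R2 = torch.linalg.inv(z2 * I - H2.to(torch.cfloat))
            out = out + w[s, t] * (R1 @ R2)   # product of resolvents
    return out
```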
### Interpretability of the MFN operators
In the case of the Euclidean group, the matrices learned in MFNs have the same symmetries as Euclidean operators. Euclidean operators play a crucial role in quantum mechanics. They are defined as self-adjoint operators (in the space of square-integrable functions) that are equivariant to the action of the group of rotations, translations, reflections, and permutations. When these operators are expanded in the basis of the irreducible representations of the rotation group, they exhibit a block structure (see Appendix A.1) in which each entry is labeled with a product of representations \((ss,sp,pp,dd,\ldots)\).
The most central of these operators is the Hamiltonian operator. The energy of a quantum system is related to the trace of a specific matrix function of the Hamiltonian,
\[E=\text{Tr}f(\mathbf{H}) \tag{19}\]
The Hamiltonian is usually computed as a fixed point of a self-consistent loop that minimizes the energy. This loop introduces many-body non-local effects, which motivates us to use many-body functions to parameterize our matrices and to learn the fixed point directly via a matrix function of many matrices. This is in contrast to tight-binding methods, which usually construct a single low-body Hamiltonian with physics-inspired functional forms but require several self-consistent iterations to converge.
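As a small worked example of Equation 19, the band energy of a tight-binding Hamiltonian can be written as the trace of a matrix function with \(f(\varepsilon)=\varepsilon\,n_{\beta}(\varepsilon)\), where \(n_{\beta}\) is the Fermi-Dirac occupation; the values of \(\mu\) and \(\beta\) below are arbitrary illustrative choices, not those of any experiment.

```python
# Illustration of E = Tr f(H) with f(eps) = eps * fermi(eps); mu and beta
# are example values for a chemical potential and inverse temperature.
import numpy as np

def energy(H: np.ndarray, mu: float = 0.0, beta: float = 10.0) -> float:
    evals = np.linalg.eigvalsh(H)                      # real spectrum of H
    occ = 1.0 / (1.0 + np.exp(beta * (evals - mu)))    # Fermi-Dirac occupations
    return float(np.sum(evals * occ))                  # Tr f(H), band energy
```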
## 5 Results
### Cumulenes: non-local 3D graphs
We compare the non-locality of MFNs to local MPNNs and global attention MPNNs using linear carbon chains, called cumulenes. Cumulenes are made up of double-bonded carbon atoms terminated with two hydrogen atoms at each end. Cumulenes exhibit pronounced non-local behavior as a result of strong electron delocalization. Small changes in chain length and relative angle between the terminating hydrogen atoms can result in large changes in the energy of the system, as visually represented in Figure 3. These structures are known to illustrate the limited expressivity of local models (Unke et al., 2021) and are similar to the k-chains introduced by Joshi et al. (2023) in the context of the geometric WL test. Frank et al. (2022) showed that global attention is capable of capturing the angular trends of cumulenes with fixed length. We go beyond and demonstrate that MFNs are capable of accurately extrapolating to longer chains, simultaneously capturing length and angle trends. In contrast, global attention models such as SpookyNet (Unke et al., 2021) are unable to extrapolate to longer chains, highlighting the benefit of the matrix function formalism. For all MFN models in this section, we use MACE layers (Batatia et al., 2022) to form the matrix at each layer. We refer to the model as MFN (MACE). Details of the training set and the specific choice of parameterization of the matrix entries are included in the Appendix A.2.
**Trends with chain length and rotation** The lowest energy structure of cumulenes alternates between 90- and 0-degree angles for odd and even carbon atom counts, respectively. Consequently, varying the number of carbon atoms at a fixed angle unveils a distinctive zigzag pattern in the ground truth energy (Figure 3, left panel). Although the local model, with two message passing layers, is trained on exactly these configurations, this system is beyond its expressivity, as can be seen from the linear trend for \(n_{c}>7\) (Figure 3, left panel). In contrast, the invariant and equivariant MFN models perfectly reproduce density functional theory (DFT) energies, thanks to their inherent non-locality.
Figure 2: **Block structure** of a Euclidean Operator, H, learnt in the MFNs. Each entry in H corresponds to a different product of representations of the group of rotations \((ss,sp,pp,...)\).
To demonstrate the necessity of equivariance, we train models on the energy of a fixed-size cumulene as a function of the dihedral angle between the hydrogen atoms. Figure 3 demonstrates that only the equivariant MFN (L=1) captures the angular trend.
**Guaranteed Non-Local dataset** Datasets designed to test non-local effects often yield unexpectedly high accuracy when evaluated with local models (Kovacs et al., 2023), complicating the assessment of model non-locality. The dataset introduced here is based on cumulenes, whose strong electronic delocalization results in a directly observable non-locality. The training set contains geometry-optimized cumulenes with 3-10 and 13, 14 carbon atoms, which are then rattled and rotated at various angles. The test set contains cumulenes created in a similar fashion with the same numbers of carbons (in-domain) and cumulenes of unseen length, not present in the dataset (out-domain: 11, 12 and 15, 16). Table 1 shows that the MFN architecture significantly outperforms both local and attention-based models (SpookyNet). Attention captures some non-locality, resulting in marginally lower errors on the train and in-domain test sets. However, the learned non-locality does not generalize to larger molecules, giving energies and forces worse than those obtained with a simple local model. The structured non-locality of MFN enables generalization to larger unseen system sizes.
### Performance on pure graphs
In this section, we evaluate the performance of our MFN models in graph-level prediction tasks using GCN layers for the matrix construction. Details of the datasets and hyperparameters of the models can be found in the Appendix A.2.
| **Dataset** | \(n_{C}\) | E: MACE | E: SpookyNet | E: MFN (MACE) | F: MACE | F: SpookyNet | F: MFN (MACE) |
|---|---|---|---|---|---|---|---|
| Train | 3-10, 13, 14 | 41.1 | 31.4 | **2.0** | 179.6 | 114.1 | **31.7** |
| Test (In Domain) | 3-10, 13, 14 | 41.8 | 30.8 | **2.6** | 205.6 | 162.3 | **34.0** |
| Test (Out Domain) | 11, 12 | 16.5 | 31.4 | **0.9** | 108.5 | 116.2 | **22.5** |
| Test (Out Domain) | 15, 16 | 12.0 | 23.4 | **2.6** | 72.1 | 87.6 | **37.7** |

Table 1: **Guaranteed non-local cumulene dataset** containing rattled cumulene chains, with various chain lengths (\(n_{C}\)) and hydrogen dihedral angles (\(\phi\)). The table compares energy (E, meV/atom) and force (F, meV/Å) RMSEs between local two-layer MPNNs (MACE), global attention MPNNs (SpookyNet), and equivariant MFNs. Train and in-domain test sets contain cumulenes of lengths 3-10 and 13, 14. Two out-domain test sets compare different levels of extrapolation to unseen cumulene lengths, containing cumulenes with 11, 12 and 15, 16 carbon atoms, respectively. Bold is best and underline second best.
Figure 3: **Visualizing MFN expressivity on cumulene chains.** The left panel depicts energy trends with respect to cumulene chain length at a fixed angle \(\phi=5^{\circ}\). The right panel shows the DFT (ground truth) and the predicted energy as a function of the dihedral angle \(\phi\) between the hydrogen atoms for a cumulene chain containing 12 carbon atoms. Local many-body equivariant models (MACE) are only able to capture average trends, even though test configurations are included in the training set. Invariant MFNs (\(L=0\)) capture only the trends with respect to length, while equivariant MFNs (\(L=1\)) capture both non-local trends. All models have a cutoff distance \(r_{c}\) of 3Å, corresponding to the nearest neighbors, with two message-passing layers. The cutoff distance as well as MACE’s receptive field for the first carbon atom is annotated in the left panel.
**ZINC.** We use the default dataset splits for ZINC, aligned with the leaderboard baseline settings, with a parameter budget of approximately 500K. Table 2 shows that MFN surpasses previous architectures, demonstrating the utility of learning various operators, even on pure graphs.
**TU Datasets.** We test our model on five TUDataset benchmarks, covering both bioinformatics datasets (MUTAG, ENZYMES, PTC_MR, and PROTEINS) and a social network dataset (IMDB-B). To ensure a fair comparison with baselines, we follow the standard 10-fold cross-validation and dataset splits; results are reported in Table 3.
## 6 Conclusion
We have introduced Matrix Function Networks (MFNs), an architecture designed to address the limitations of existing GNNs and MPNNs in modeling non-local many-body interactions. Utilizing a resolvent expansion, MFNs achieve potentially linear scaling with respect to system size, offering a computationally efficient solution. Our evaluations indicate state-of-the-art performance on ZINC and TU graph datasets without human-designed features to capture the global graph topology. We also demonstrate that our architecture is capable of modeling the complex non-local interactions of cumulene quantum systems. Future work could focus on extending MFNs to other complex systems, further validating their adaptability and efficiency, and exploring their interpretability.
### Reproducibility Statement
To ensure reproducibility and completeness, we include detailed descriptions of the models used, hyperparameters, and datasets in the Appendix. The ZINC and TU datasets are publicly available. We also attach our cumulene datasets in the supplementary material. The code will be made public.
#### Acknowledgments
CO's work was supported by the NSERC Discovery Grant IDGR019381 and the NFRF Exploration Grant GR022937. LLS acknowledges support from the EPSRC Syntech CDT with grant reference EP/S024220/1. Computational resources were provided by the Cambridge Service for Data Driven Discovery (CSD3), which was accessed through the University of Cambridge EPSRC Core Equipment Award EP/X034712/1.
| Method | GCN | GAT | MPNN | GT | SAN | Graphormer | PDF | MFN (GCN) |
|---|---|---|---|---|---|---|---|---|
| MAE | 0.367±0.011 | 0.384±0.007 | 0.145±0.007 | 0.226±0.014 | 0.139±0.006 | 0.122±0.006 | 0.066±0.004 | **0.063±0.002** |
| #para | 50% | 531k | 481k | NA | 50% | 48% | 472k | 512k |

Table 2: Results on ZINC with the MAE and number of parameters used, where the best results are in bold. Baselines are taken from (Yang et al., 2023) and model citations are in A.2.2.
| Method | MUTAG | ENZYMES | PTC_MR | PROTEINS | IMDB-B |
|---|---|---|---|---|---|
| GK | 81.52±2.11 | 32.70±1.20 | 55.65±0.5 | 71.39±0.3 | - |
| RW | 79.11±2.1 | 24.16±1.64 | 55.91±0.3 | 59.57±0.1 | - |
| PK | 76.0±2.7 | - | 59.5±2.4 | 73.68±0.7 | - |
| AWE | 87.87±9.76 | 35.77±5.93 | - | - | 74.45±5.80 |
| PSCN | 88.95±4.4 | - | 62.29±5.7 | 75±2.5 | 71±2.3 |
| ECC | 76.11 | 45.67 | - | - | - |
| DGK | 87.44±2.72 | 53.43±0.91 | 60.08±2.6 | 75.68±0.5 | 66.96±0.6 |
| GraphSAGE | 85.1±7.6 | 58.2±6.0 | - | - | 72.35±3.5 |
| CapsGNN | 88.67±6.88 | 54.67±5.67 | - | 76.2±3.6 | 73.14±1.8 |
| GIN | 89.4±5.6 | - | 64.6±7.0 | 76.2±2.8 | 75.1±5.1 |
| \(k\)-GNN | 86.1 | - | 60.9 | 75.5 | 74.2 |
| IGN | 83.89±12.95 | - | 58.53±6.86 | 76.58±5.49 | 72.0±5.54 |
| PPGN | 90.55±8.7 | - | 66.17±6.54 | **77.20±4.73** | 73.0±5.77 |
| GCN\(^{2}\) | 89.39±1.60 | - | 66.84±1.79 | 71.71±1.014 | 74.80±2.01 |
| PDF | 89.01±4.35 | **73.50±6.39** | 68.36±8.38 | 76.28±5.1 | **75.60±2.69** |
| MFN (GCN) | **91.5±7.35** | 72.9±7.55 | **68.9±8.09** | 76.18±4.07 | 74.1±1.04 |

Table 3: Results on TUDataset (higher is better). Bold is best, and underlined second best within ±0.5%. Baselines are taken from (Yang et al., 2023) and model citations are in A.2.3.
## Ethics Statement
FAF is employed by AstraZeneca at time of publication; however, none of the work presented in this manuscript was conducted at or influenced by this affiliation.
|
2307.01684 | Serving Graph Neural Networks With Distributed Fog Servers For Smart IoT
Services | Graph Neural Networks (GNNs) have gained growing interest in miscellaneous
applications owing to their outstanding ability in extracting latent
representation on graph structures. To render GNN-based service for IoT-driven
smart applications, traditional model serving paradigms usually resort to the
cloud by fully uploading geo-distributed input data to remote datacenters.
However, our empirical measurements reveal the significant communication
overhead of such cloud-based serving and highlight the profound potential in
applying the emerging fog computing. To maximize the architectural benefits
brought by fog computing, in this paper, we present Fograph, a novel
distributed real-time GNN inference framework that leverages diverse and
dynamic resources of multiple fog nodes in proximity to IoT data sources. By
introducing heterogeneity-aware execution planning and GNN-specific compression
techniques, Fograph tailors its design to well accommodate the unique
characteristics of GNN serving in fog environments. Prototype-based evaluation
and case study demonstrate that Fograph significantly outperforms the
state-of-the-art cloud serving and fog deployment by up to 5.39x execution
speedup and 6.84x throughput improvement. | Liekang Zeng, Xu Chen, Peng Huang, Ke Luo, Xiaoxi Zhang, Zhi Zhou | 2023-07-04T12:30:01Z | http://arxiv.org/abs/2307.01684v1 | # Serving Graph Neural Networks With Distributed Fog Servers For Smart IoT Services
###### Abstract
Graph Neural Networks (GNNs) have gained growing interest in miscellaneous applications owing to their outstanding ability in extracting latent representation on graph structures. To render GNN-based service for IoT-driven smart applications, traditional model serving paradigms usually resort to the cloud by fully uploading geo-distributed input data to remote datacenters. However, our empirical measurements reveal the significant communication overhead of such cloud-based serving and highlight the profound potential in applying the emerging fog computing. To maximize the architectural benefits brought by fog computing, in this paper, we present Fograph, a novel distributed real-time GNN inference framework that leverages diverse and dynamic resources of multiple fog nodes in proximity to IoT data sources. By introducing heterogeneity-aware execution planning and GNN-specific compression techniques, Fograph tailors its design to well accommodate the unique characteristics of GNN serving in fog environments. Prototype-based evaluation and case study demonstrate that Fograph significantly outperforms the state-of-the-art cloud serving and fog deployment by up to 5.39\(\times\) execution speedup and 6.84\(\times\) throughput improvement.
Fog computing, Graph Neural Networks, model serving, distributed processing
## I Introduction
Graphs are ubiquitous. Given their intuitive abstraction of relational structures, graphs drive the organization and computation of miscellaneous real-world data such as traffic sensory networks [1, 2], online social graphs [3, 4], and power grids [5, 6]. To facilitate deep learning on such data, recent advances in neural networks have extrapolated to the graph domain, resulting in a new stream of models called Graph Neural Networks (GNNs).
GNNs differ from traditional Deep Neural Networks (DNNs) by integrating graph embedding techniques with convolutions [7, 8, 9]. In essence, GNNs apply an iterative aggregation over an input graph and, through neural network operators, capture hierarchical patterns from subgraphs of variable sizes. This enables the model to learn the properties of specific vertices, edges, or the graph as a whole, and to generalize to unobserved graphs. Benefiting from such powerful expressiveness, GNNs achieve superior prediction performance in various graph-related tasks, and have emerged as a powerful data-driven tool for enabling a multitude of real-world IoT-driven smart applications, _e.g._, traffic flow forecasting [10, 11], location-based recommendation [12, 13], and vehicle trajectory prediction [14, 15].
To render smooth services for these applications, the _de facto_ standard methodology is to offload raw data and computation to central cloud servers [16, 17]. For instance, in Fig. 1, the massive sensory data from IoT devices are fully uploaded (in the physical domain) and their corresponding GNN input graph (in the data domain) is computed at a remote cloud. While this paradigm may work well for many CNN-based image processing tasks [18, 19, 20], it can yield suboptimal performance for GNN model serving due to its unique input characteristics. First, the input graph of a GNN typically spans geographically, with scattered data sources, _e.g._, IoT sensory devices, as vertices. Unlike an image or video from a single source, to obtain the complete input for one inference, GNN execution is obliged to wait until all correlated data points arrive, which considerably prolongs the total serving latency. Second, as the graph scales and the number of vertices increases, the input data size grows linearly and can become much larger than an ordinary CNN inference input, intensifying the communication stress. Worse still, the transmission cost is further magnified by the long transmission delay of the Wide Area Network (WAN) and potential network congestion. Specifically, as we will show later in §II-C, the data uploading phase can dominate the whole procedure by consuming >95% of the latency in a typical cloud-based GNN serving.
To tame such intractability, a promising solution is to exploit available computing resources in proximity to data sources
Fig. 1: An example scenario of GNN serving at the network edge. Instead of entirely offloading IoT sensory data to the cloud via the delay-significant Internet, Foggraph processes GNN workloads over fog nodes in proximity to enable real-time GNN serving.
with the emerging fog computing1 paradigm [21]. Concretely, as Fig. 1 illustrates, we can sink the GNN workload from the remote cloud into vicinal fog nodes2 (_e.g._, 5G fog servers) and manage data collection and computation within the Local Area Network (LAN). Consequently, the avoidance of unreliable WAN connections allows observably lower communication overhead, reducing data collection latency by up to 67% in our experimental measurements. In brief, fog computing exhibits promising potential for real-time GNN serving at the network edge.
Footnote 1: In the terminology of some literature, _fog computing_ also refers to _edge computing_. Since _edge_ already denotes the links in graphs, throughout this paper we use the term _fog_ to avoid ambiguity.
Footnote 2: We exclusively use _node_ to denote a fog server and leave _vertex_ for graphs.
Nevertheless, despite the advantages, efficient fog deployment still suffers from a set of challenges. First, different from the cloud, which takes computing resources as a whole, fog environments usually consist of loosely coupled nodes [22]. To adapt complex GNN processing to them, a distributed counterpart is required, where input data need to be judiciously placed and routed to respective fog nodes for distributed execution. Second, fog environments are inherently heterogeneous [23], _e.g._, with computing facilities ranging from small-size gateways [24] to powerful cloudlets [25]; their available bandwidths allocated for serving also vary. To exploit the maximum parallelization from this diversity, a heterogeneity-aware data placement strategy with effective load balancing is highly desired. Further complicating the problem are dynamic factors like fog nodes' load levels, network conditions, _etc._, which may dramatically degrade the performance of the whole pipeline. Unfortunately, existing GNN serving mechanisms cannot sufficiently meet these requirements.
To this end, we present Fograph, a novel distributed system that enables real-time GNN inference over multiple heterogeneous fog nodes. Fograph's contribution goes beyond merely applying fog computing to boost GNN serving; instead, it addresses the above challenges at four levels. First, from an execution perspective, a holistic distributed workflow is introduced for enabling fog nodes to collaboratively serve GNN inference. Second, to attain an efficient runtime, an inference execution planner is designed to optimize the data placement of the input graph, along with a GNN-oriented profiling methodology that allows accurately characterizing heterogeneous computing capabilities. Third, to alleviate the communication bottleneck and ameliorate the overall performance, a novel graph data packing technique is applied that leverages topological properties and compresses transferred data with minimal impact on accuracy. Finally, to adapt to dynamic changes such as load fluctuation, a dual-mode workload scheduler is developed that progressively adjusts the graph data placement in order to acquire the best-performing configuration. Extensive evaluation against multiple benchmarks demonstrates Fograph's superior performance gain over traditional cloud serving and a straw-man fog counterpart. In summary, this work makes the following key contributions.
* An empirical study on GNN serving latency with existing cloud and basic fog mechanisms. By breaking down the overheads of communication and execution, we observe a major cost reduction on the communication side, highlighting fog computing as a promising optimization opportunity (§II).
* A regularized workflow for fog-enabled GNN serving that covers the full lifecycle from offline configuration to online data collection and distributed runtime. Data parallelism is applied and retrofitted by leveraging the execution characteristics of GNN inference in order to coordinate multiple fog nodes (§III-E).
* A heterogeneity-aware distributed GNN inference system, Fograph, that enables real-time performance. Reflecting on the diverse and fluctuating fog resources, we design an inference execution planner with load balancing for maximum parallelization (§III-B, §III-C), and a dual-mode workload scheduler to accommodate dynamics (§III-F).
* A GNN-specific packing mechanism that exploits the reduced-precision resilience and sparsity of GNN workloads to minimize data uploading overhead. Our communication optimizer combines a lossless compressor and a degree-aware lossy quantizer, which exposes previously unattainable designs for distributed GNN inference, while not sacrificing the prediction accuracy of the system (§III-D).
* A comprehensive evaluation of Fograph using multiple benchmarks, demonstrating its superiority over state-of-the-art cloud serving and straw-man fog deployment by up to 5.39\(\times\) execution speedup and 6.84\(\times\) throughput improvement (§IV).
## II Background and Motivation
### _Graph Neural Networks_
Real-world graphs typically contain two kinds of data. One is the adjacency matrix, implying the global structural information; the other is the feature vectors that describe the physical properties of vertices and edges. GNNs take both as input and learn a representation vector, called _embedding_ or _activation_, for each vertex. The learned representations can be used for downstream tasks such as vertex clustering, link prediction, and graph classification [7].
In Fig. 2, we illustrate a two-layer GNN instance from the perspective of data flow during the inference process. Fig. 2(a) depicts the input graph, with vertex \(A\) and its one-hop and two-hop neighbors in different colors. Fig. 2(b) unfolds a layer's detailed operations. Essentially, each GNN layer collectively aggregates the neighbor vertices' activations from the previous
Fig. 2: A GNN inference instance of computing vertex \(A\)’s embedding through two GNN layers.
layer's output, and then updates the target vertex's activation using a neural network operator such as a convolution or multi-layer perceptron. Within the same layer, all vertices share the same weights in the Aggregate and Update functions, while different layers may use different weights. To compute embeddings through a \(K\)-layer GNN, vertices should retrieve information from their \(K\)-hop neighbors. Formally, the computation of the \(k\)-th GNN layer on vertex \(v\) can be described as:
\[a_{v}^{(k)} =\texttt{Aggregate}(\{h_{u}^{(k-1)}|u\in\mathcal{N}_{v}\}), \tag{1}\] \[h_{v}^{(k)} =\texttt{Update}(a_{v}^{(k)},h_{v}^{(k-1)}), \tag{2}\]
where \(h_{v}^{(k)}\) is the representation vector of vertex \(v\) at the \(k\)-th layer, \(h_{v}^{(0)}\) is initialized by the input features of \(v\), and \(\mathcal{N}_{v}\) denotes the vertices set of \(v\)'s direct neighbors.
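For concreteness, a minimal sketch of Eqs. (1)-(2) is given below, using a mean aggregator and a shared linear update as illustrative choices (the concrete functions depend on the GNN variant, cf. Table I); it assumes every vertex has at least one neighbor, and all names are ours.

```python
# A minimal sketch of one GNN layer: Aggregate (Eq. 1) as a neighbor mean,
# Update (Eq. 2) as a shared linear map over [aggregation, own state].
import torch

class GNNLayer(torch.nn.Module):
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.update = torch.nn.Linear(2 * dim_in, dim_out)

    def forward(self, h: torch.Tensor, neighbors: list) -> torch.Tensor:
        # h: (num_vertices, dim_in); neighbors[v]: indices of N_v (non-empty)
        agg = torch.stack([h[nv].mean(dim=0) for nv in neighbors])   # Eq. (1)
        return torch.relu(self.update(torch.cat([agg, h], dim=-1)))  # Eq. (2)
```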
**Examples.** Table I lists three popular GNN models to exemplify the above two functions. GCN [26] is one of the first graph learning models that bridge the gap between spectral transformations and spatial convolutions. Its Aggregate simply uses a summation, and its Update passes the weighted aggregation through an element-wise nonlinearity \(\sigma(\cdot)\). GAT [27] is representative of another GNN category that incorporates the attention mechanism into feature propagation. Its inference directly uses the learned attention parameters \(\alpha_{vu}^{(k)}\) to weight neighbors and passes the aggregation through the nonlinearity for output. GraphSAGE [28] is recognized as the classic inductive GNN variant. While its training adopts sampling-based techniques to trade accuracy for training speed, its inference fully collects the neighbor sets for aggregation and update. In Table I we formalize its mean-aggregator version.
### _Emerging Real-Time GNN Applications_
GNNs have permeated many scenarios with real-time responsiveness requirements, particularly many emerging IoT-enabled smart applications. In the following, we provide several motivating examples.
**Traffic flow forecasting.** Accurately forecasting the speed, volume, or density of road traffic is a fundamental problem in Intelligent Transportation Systems (ITS). To support such intelligence, GNNs construct spatial-temporal models to perform graph-level predictions. For instance, some models [10, 29, 30] consider traffic sensory networks as a spatial-temporal graph where the vertices are roadside detectors and the edges are roads. Each vertex is attached with a time-varying vector that records immediate properties such as traffic speed and occupancy. In such circumstances, timely prediction is of paramount importance given the rapidly changing traffic and its wide public impact, which requires real-time GNN processing.
**Location-based recommendation.** Recommending yet-unvisited Points of Interest (POIs) to potentially interested users has been a core function for many commercial mobile applications (_e.g._, Airbnb, TripAdvisor). To utilize the rich semantic information like geographical constraints and social influences, a number of works [12, 31, 13] have built upon GNN models. Several graphs are created in these systems, including a spatial graph of geo-distributed POIs, a social graph of users, and a bipartite graph connecting POIs and users based on historical consumption records. Such services typically exhibit a soft real-time requirement - if the recommendation comes late, the results can be out of date as the user may have moved to another location. In other words, low-latency GNN inference is demanded for rendering an effective user experience.
**Vehicles trajectory navigation.** Autonomous robotics has become a hot spot in recent years, and efficient and collision-free trajectory navigation is a key technology to ensure its mobility [32]. As an example, in precision agriculture [33, 34], a fleet of autonomous drones move along fields to measure and maintain the crops' health, spraying pesticides if there is an infestation. GNN-based methods [32, 35] enhance this procedure by mapping the vehicles as graphs and performing inference to help plan paths instantaneously. Each drone, as a vertex, captures sensory data (for flight height, ambient light intensity, _etc._) every few seconds as features. Any delay of the control may result in catastrophic crashes of the vehicles, for which fast inference is needed.
### _Examining GNN Serving Pipeline_
This subsection examines the serving latency of the _de facto_ standard cloud serving and a vanilla fog deployment to investigate how much performance improvement fog computing can offer.
**Methodology.** The measurement targets a location-based recommendation application [36] that runs GCN inference on the SIoT dataset [37] (dataset details in Table III and §IV-A). The used graph includes 16216 devices from Spain as vertices with 146117 social connections, and each vertex attaches a 52-dimensional feature that identifies its properties such as the device's type and brand. Initially, the data are randomly divided into equal parts and assigned to 8 Raspberry Pis. During the measurement, we launch the Pis to simultaneously send their respective graph data via 4G/5G/WiFi network, and then perform inference on the cloud/fog based on _PyTorch Geometric (PyG)_ [38] once the complete graph is received. The cloud server is an Aliyun instance (8 vCPUs | 32GB | Tesla V100 GPU × 1 | Ubuntu 16.04) located in the nearest region, and its geographical distance to the Pis is about 200km. The fog cluster consists of six heterogeneous servers (specifications in §IV-A) as computing nodes, all set on the same campus as the Pis. In particular, for single-fog serving, we select the most powerful one to execute; for multi-fog serving, we apply the state-of-the-art technique in [39] to place the input data among fog nodes and perform collaborative execution. During the runtime, each node maintains a local graph, computes GNN layers, and exchanges vertex data with the others when needed. The 4G/5G network employs commercial operator services, where the 5G network is provided by nearby 5G base stations operating in non-standalone (NSA) mode.
**Measurements reported.** Fig. 3 (left) shows the serving latency of the cloud, single-fog and multi-fog mechanisms under different networking settings, and Fig. 3 (right) breaks down stage-wise costs in terms of data collection and inference execution. Regarding the load distribution in the multi-fog serving, Fig. 4 visualizes the number of assigned vertices and the execution latency of each fog node.
**Key observations.** First, the fog approaches enjoy better performance than the cloud alternative, demonstrating their efficiency in vicinal serving. Quantitatively, for 4G, 5G, and WiFi, the single-fog approach achieves 1.65\(\times\), 1.73\(\times\), and 1.40\(\times\) speedups over cloud serving, respectively. The multi-fog counterpart attains even lower latency than the single-fog one. The weaker the network condition, the greater the advantage that fog serving reaps.
Second, we observe that the advantage of fog serving mainly comes from communication. As evidence, when switching from cloud to single-fog, the data collection latency is reduced by 64%, 67%, and 61% under 4G, 5G, and WiFi, respectively. Such a similar degree of reduction implies the consistent advantage gained by avoiding remote Internet data transfer. Surprisingly, multi-fog serving achieves lower data collection costs than single-fog. This is because employing more fog nodes provides more access points, which widens the available bandwidth and relieves network contention. Nonetheless, data collection still occupies >50% of the cost in both fog approaches, as in Fig. 3 (right), suggesting that communication remains the major cost factor in the serving pipeline.
Third, while fog data collection significantly saves overhead, execution can dramatically compromise the benefit. Nearly half of the cost is taken by execution in single-fog serving, while that in the cloud is <2%. Multi-fog serving alleviates this, but reduces the execution cost by only 33% over single-fog despite using five more fog nodes. Such inferior performance indicates poor resource utilization, which stems from the gap between equally assigned data placement and heterogeneous computing resources. This is clearer from the measurements in Fig. 4, where the existing data placement strategy merely yields an equilibrium in the number of assigned vertices but a severe imbalance in the actual load distribution.
### _Opportunities and Challenges with Fog Computing_
The advanced ability of GNNs has spread them to a wide range of end adoptions with real-time requirements. Fog computing, as evidenced by the above realistic measurements, is revealed to be a potentially effective solution, with both opportunities and challenges.
**Opportunities.** By relocating execution to proximate computing nodes close to the data sources, fog computing manages all data communication within a local network and thus avoids unreliable and delay-significant Internet connections. Such architectural wisdom, as evidenced by the above empirical measurements, translates effectively into reduced communication time. In addition, the multi-node fog cluster provides more room for parallel GNN processing, which can further accelerate the serving pipeline.
**Challenges.** In spite of the opportunities, simply adopting fog computing is not sufficient. First, to effectively exploit the fog resources, it is imperative to accurately characterize the fog nodes' heterogeneity, decide the graph data placement, and orchestrate the data flow during the runtime. Second, given the major contribution of transmission cost, a communication-friendly compression technique is desired for expedited graph data collection. Third, to enable resilient serving in real time, the system should be able to react dynamically to resource fluctuation.
## III Fograph System Design
As aforementioned, the objective is to fully unleash the architectural benefits of fog computing in rendering real-time GNN serving while adapting to fog nodes' heterogeneity and dynamic changes. To this end, we propose Fograph, a distributed GNN inference system. In what follows, we present Fograph modules following its workflow.
### _Workflow and Design Overview_
Fig. 5 and Fig. 6 show the high-level view of Fograph's workflow and system design, respectively, where the five modules in Fig. 6 work for the five steps in Fig. 5 correspondingly. In the setup phase, Fograph obtains a GNN model and uses the calibration dataset to sketch the computing capabilities of the heterogeneous fog nodes. This is accomplished by the offline profiler (Fig. 6), which builds latency estimation models for predicting the GNN performance. Moreover, it
Fig. 4: The number of vertices (left) and the inference execution latency (right) of each fog node in multi-fog serving.
Fig. 3: The latency measurements of three types of GNN serving (left) and their stage-wise breakdown (right).
records the static properties of the historical input, like the adjacency matrix, acting as the initial graph skeleton. A dedicated fog node is selected as the metadata server that is responsible for registering metadata from all available fog nodes (§III-B). Next, Fograph's execution planner (§III-C) applies the latency estimates and judiciously schedules a graph data placement to match the heterogeneity among fog nodes, aiming at maximum parallelization with an effective load balance guarantee.
In the runtime phase, the participating fog nodes individually collect their assigned data partitions in light of the execution plan. To speed up device-to-fog data transfer, a novel GNN-specific compression technique is employed that exploits feature sparsity and GNN's resilience to progressively reduce data uploading costs (§III-D). Once the input graph data completely arrive, Fograph's runtime orchestrates distributed inference execution, handling all data exchange between fog nodes (§III-E). Simultaneously, each fog node's online profiler monitors its resident execution across inferences, as well as the runtime conditions, and updates the offline performance profile, periodically transferring it to the metadata server for execution plan refinement (§III-E). In this way, the system can adapt to dynamically changing environments, reconfigure its execution, and maintain real-time serving.
### _Metadata Acquisition and Registration_
The aim of _metadata registration_ (Fig. 5) is to readily provision fundamental serving configurations and sensibly characterize the heterogeneity of fog nodes, providing the necessary materials for the subsequent execution planning. To achieve this, we design a dynamic profiler operating across the setup phase and the runtime phase.
**Setup phase.** Before deployment, the offline profiler (Fig. 6) performs two kinds of measurements, device-independent and device-specific. The former focuses on the static configurations stated by service providers. Concretely, it comprises 1) the available bandwidth of fogs, 2) the employed GNN model (trained in advance), and 3) the invariant metrics of the input graph. Here we identify the invariance as the adjacency matrix/list that depicts the graph topology, and the size of a feature vector, which is determined once a given GNN model is trained. For instance, in smart transportation applications [1, 29, 30], we can interpret them as the traffic monitoring sensors' logical topology (_e.g._, sensors as vertices and roads as edges) and the form of sensory records, both of which are known before runtime. These parameters are independent of the running platforms and thus can be profiled only once for a given model.
For the latter, we intend to establish latency estimation models that are specific to each fog node, aiming to quantify their heterogeneous computing capabilities. The performance of computing GNN inference, however, relies heavily on fundamental settings such as the underlying hardware and the DL framework used, where trivial estimations based on static configurations are rough and unfaithful. Therefore, to build performance models at a precise granularity, we employ a proxy-guided profiling process: First, we construct a calibration set by uniformly sampling subgraphs of varying cardinality from the initial graph. The cardinality, defined as \(\langle c\rangle=\langle|\mathcal{V}|,|\mathcal{N}_{\mathcal{V}}|\rangle\), shapes a subgraph's size from a GNN perspective with the number of vertices it includes and their one-hop neighbors. To preserve the actual degree distribution, for each cardinality axis we collect a group of 20 samples. Next, we measure the average execution latency for each fog node by passing the GNN through the calibration set, and build regression-based latency estimation models \(\omega\), _e.g._, the linear regression model in Eq. (3), where \(\beta\) and \(\varepsilon\) are regression parameters.
\[\text{latency}=\omega(\langle c\rangle)=\beta\cdot\langle|\mathcal{V}|,| \mathcal{N}_{\mathcal{V}}|\rangle+\varepsilon. \tag{3}\]
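As an illustration of this offline step, the latency model of Eq. (3) can be fitted by ordinary least squares over the measured calibration samples; the sketch below is a simplified stand-in for the actual profiler, and all names are our own.

```python
# A minimal sketch of fitting the linear latency model in Eq. (3) from
# measured (|V|, |N_V|, latency) calibration samples.
import numpy as np

def fit_latency_model(cards: np.ndarray, latencies: np.ndarray):
    # cards: (m, 2) array of cardinalities <|V|, |N_V|>; latencies: (m,)
    X = np.hstack([cards, np.ones((cards.shape[0], 1))])  # append intercept
    (b1, b2, eps), *_ = np.linalg.lstsq(X, latencies, rcond=None)
    return lambda num_v, num_nbrs: b1 * num_v + b2 * num_nbrs + eps  # omega
```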
**Runtime phase.** At run time, the profiler keeps tracking the execution time of each fog node to update the offline estimates and derives the balance indicators in order to gauge the global performance. To keep the profiler lightweight, instead of adopting a more accurate but prohibitively costly estimator, we employ a two-step linear estimation to predict the inference latency on the fly. In the first step, the profiler measures the actual execution time of the local \(c\)-cardinality graph, denoted by \(T_{\langle c\rangle}^{\text{real}}\), during each inference. Next, it calculates a load factor \(\eta\) as the ratio between the actual time and the offline
Fig. 5: Fograph workflow overview. In the setup phase, Fograph first profiles and registers metadata, _e.g._ model parameters and capability estimates, to the metadata server, and next decides a graph data placement among fog nodes through the execution planner. In the runtime phase, fog nodes collect graph data from devices, and perform collaborative execution for inference results. Simultaneously, Fograph monitors the load variation of fog nodes and dynamically adjusts the data placement to ensure resilient efficient serving.
Fig. 6: Fograph design overview, where the five modules correspondingly work for the five steps in Fig. 5.
latency estimate of cardinality \(c\), _i.e._, \(\eta=\frac{T_{\langle c\rangle}^{\text{real}}}{\omega(\langle c\rangle)}\). As the second step, the profiler treats the load factor as a reflection of the present load level, and uses it to predict the latency of all other cardinalities. Thus, the latency of a different cardinality \(c^{\prime}\) is estimated as \(\eta\cdot\omega(\langle c^{\prime}\rangle)\).
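The runtime correction then amounts to one division and one multiplication per estimate, as sketched below under the same assumed interface as the previous snippet.

```python
# A minimal sketch of the two-step online estimation: rescale the offline
# model omega by the measured load factor eta of the local cardinality.
def estimate_online(omega, local_card, measured_latency, target_card):
    eta = measured_latency / omega(*local_card)   # load factor on this fog
    return eta * omega(*target_card)              # scaled estimate for c'
```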
### _Inference Execution Planning_
Given a GNN model, Fograph exploits data parallelism to distribute the inference workloads over multiple fog nodes, where the input data need to be divided and distributed. To attain high-performing serving, an _inference execution planner_ (IEP, Fig. 6) is developed to schedule data placement ahead of runtime.
**Problem formulation.** Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) define a GNN input graph, where \(\mathcal{V}\) and \(\mathcal{E}\) are the sets of vertices and edges, respectively. Suppose a set \(\mathcal{F}\) of \(n\) fog nodes is available for serving, denoted by \(\langle f_{1},f_{2},\cdots,f_{n}\rangle\). Each vertex \(v_{i}\in\mathcal{V}\) is a data source point (_e.g._, a sensor), and its placement to a certain fog \(f_{j}\) is specified by a binary variable \(x_{ij}\in\{0,1\}\). While a fog admits multiple vertices, a vertex can only be placed to exactly one fog, _i.e._,
\[\sum_{j}x_{ij}=1,\ \forall v_{i}\in\mathcal{V}. \tag{4}\]
To reckon the cost of a fog's data collection process, we tally the transmission latency of all vertices placed to it, as in Eq. (5), where \(\varphi\) is the data size of a single vertex's feature vector and \(b_{j}\) indicates \(f_{j}\)'s available bandwidth for serving.
\[t_{j}^{\text{colle}}=\frac{\sum_{i}x_{ij}\varphi}{b_{j}},\ \forall f_{j}\in \mathcal{F}. \tag{5}\]
To calculate the inference execution latency on \(f_{j}\), we summarize its placed vertices as a subgraph \(\bigcup_{i}x_{ij}v_{i}\), and estimate its computing time through the performance model \(\omega(\cdot)\) from the metadata. Besides, for the complete execution runtime, there are additionally \(K\) synchronizations for cross-fog data exchange through a \(K\)-layer GNN, due to GNN's neighbor aggregation mechanism as discussed in §II-A. Assuming the cost of a synchronization is \(\delta\), we append \(K\delta\) to complement the total execution cost, as in Eq. (6).
\[t_{j}^{\text{exec}}=\omega_{j}(\bigcup_{i}x_{ij}v_{i})+K\delta,\ \forall f_{j}\in \mathcal{F}. \tag{6}\]
Putting both data collection and inference execution together, the objective of IEP is to find an efficient data placement strategy \(\pi=\{x_{ij}|\forall v_{i}\in\mathcal{V},\forall f_{j}\in\mathcal{F}\}\) such that the latency of the complete serving pipeline is minimized, formally formulated in problem \(\mathcal{P}\):
\[\mathcal{P}:\text{min}\ \text{max}_{j}(t_{j}^{\text{colle}}+t_{j}^{ \text{exec}}), \tag{7}\] \[\text{s.t.}\ (4),(5),(6).\]
**Theorem 1**.: _The IEP problem \(\mathcal{P}\) is NP-hard when the number of fog nodes \(n\geq 2\)._
The quality of this data placement matters in that 1) uneven assignment usually catalyzes the straggler effect [40], and 2) skewed load distribution commonly accompanies communication bottlenecks, and either of them can largely slow down the parallel performance, as measured in §II-C. However, yielding an optimal solution of \(\mathcal{P}\) is intractable when the number of fogs \(n\geq 2\), due to its NP-hardness stated in Theorem 1 (proof in Appendix A). Unfortunately, the combination of the unique computation pattern enforced by the GNN workload and the inherent heterogeneity of fog nodes makes existing data placement techniques hardly applicable.
**IEP data placement.** To enable real-time GNN serving among fogs, we alternatively leverage heuristics for efficient optimization. Specifically, we capitalize on two insights in IEP: 1) Co-locating connected vertices in a mutual data partition can not only save its computing time \(\omega_{j}(\bigcup_{i}x_{ij}v_{i})\) on a single fog but also alleviate the synchronization burden \(K\delta\) in Eq. (6). This benefit is attributed to GNN's neighbor aggregation mechanism, where computing inference over a data partition is essentially operating on a neighbor-augmented graph upon the vertices it contains. Maximizing the vertices' locality within a data partition can thus significantly decrease the number of neighbors \(|\mathcal{N}_{\mathcal{V}}|\), so that the computing time (refer to Eq. (3)) and the synchronization costs (for pulling neighbors from other fogs) are lessened simultaneously. 2) In light of the parallel nature of \(\mathcal{P}\), a serving-oriented load balance with regard to heterogeneous computing capability and diverse bandwidth can profitably promote the holistic performance. Particularly, the costs of data collection \(t_{j}^{\text{colle}}\) and inference execution \(t_{j}^{\text{exec}}\) should be jointly considered to maximize the utilization of available resources. Motivated by these two insights, we tackle \(\mathcal{P}\) via a two-step optimization that first preprocesses the input graph to generate locality-maximized partitions, and next maps them to fog nodes accounting for both computation and communication resources.
Fig. 7 depicts the overall flow of IEP and Algorithm 1 shows the pseudocode. First, we intend to generate data partitions over the input graph, aiming at both internal vertex locality and load balancing. Instead of searching by brute force, we remark that this task is partially related to Balanced Graph Partitioning (BGP), a family of problems that have been extensively studied [41, 42]. Particularly, maximizing the partitions' internal locality can be equivalently interpreted as minimizing inter-partition connections, _i.e._, edge-cuts. Therefore, as an initialization, we employ BGP solvers and attain a group of \(n\) min-cut data partitions, where \(n\) is the number of fogs (Line 2). These partitions, however, are merely statistically balanced in the number of vertices rather than the actual GNN workload, which may still induce uneven load distribution (as discussed in §II-C). To bridge this gap, in the second step, we build a resource-aware mapping between partitions and fogs. Specifically, a bipartite
Fig. 7: Illustration of Fograph’s data placement algorithm, which primarily comprises of a graph partitioning step and a resource-aware matching step.
graph \(\mathcal{B}\) is defined as in Fig. 7 (_Res.-aware mapping_ box), with partitions \(\langle P_{1},P_{2},\cdots,P_{n}\rangle\) and fogs \(\langle f_{1},f_{2},\cdots,f_{n}\rangle\) in separate columns. We associate every partition-fog pair with an edge, weighted by a composite cost \(\langle P_{k},f_{j}\rangle\) of uploading and executing partition \(P_{k}\) on fog \(f_{j}\):
\[\langle P_{k},f_{j}\rangle=\frac{|P_{k}|_{\mathcal{B}}}{b_{j}}+\omega_{j}(P_{k })+K\delta,\ k,j\in\{1,2,\cdots,n\}. \tag{8}\]
Finding a mapping in \(\mathcal{B}\) that satisfies \(\mathcal{P}\)'s min-max objective, however, entails a huge decision space of \(n!\) and differs from the traditional bipartite matching problem of maximizing a weighted sum [43]. We observe that it is a variant of the _Linear Bottleneck Assignment Problem (LBAP)_[44], so we can apply a threshold-based algorithm to obtain an optimal mapping. Specifically, it first instantiates a priority queue \(\mathcal{Q}\) to accommodate all edge weights in \(\mathcal{B}\), and then inspects the elements successively in iterations. In each iteration, it dequeues the front element of \(\mathcal{Q}\), _i.e._ the maximum weight in the queue, as the weight threshold \(\tau\), and filters the edges in \(\mathcal{B}\) with a weight smaller than \(\tau\) to construct an auxiliary bipartite graph \(\mathcal{B}^{\prime}\) (Line 8-10). Applying the Hungarian algorithm [45], we obtain a mapping \(M\) in \(\mathcal{B}^{\prime}\) and check whether it is a perfect matching with respect to the original bipartite graph \(\mathcal{B}\). If it is, we record the obtained mapping in \(M^{*}\) and move on to another iteration to attempt lower thresholds. Otherwise, no perfect matching exists in any further filtered bipartite graph, since the remaining to-be-examined thresholds in \(\mathcal{Q}\) are smaller; the recorded mapping \(M^{*}\) is thus the expected result that minimizes the maximum weight in \(\mathcal{B}\), and the iteration consequently terminates. Finally, the algorithm ends by returning \(M^{*}\)'s corresponding data placement \(\pi\).
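For concreteness, a minimal sketch of this mapping step is given below; it follows the binary-search variant mentioned in the discussion that follows, and substitutes a SciPy maximum-matching routine for the Hungarian algorithm as the perfect-matching check. The cost matrix is assumed to be precomputed from Eq. (8); this is an illustrative implementation, not Fograph's actual code.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def lbap_mapping(cost):
    """Min-max (bottleneck) assignment of n partitions to n fogs.

    cost[k, j] is the composite cost <P_k, f_j> of Eq. (8). We binary-search
    the threshold tau over the sorted edge weights and keep the smallest tau
    that still admits a perfect matching in the filtered bipartite graph B'.
    """
    weights = np.unique(cost)            # candidate thresholds, ascending
    lo, hi, best = 0, len(weights) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        tau = weights[mid]
        # auxiliary graph B': keep only edges with weight <= tau
        mask = csr_matrix((cost <= tau).astype(int))
        match = maximum_bipartite_matching(mask, perm_type='column')
        if (match >= 0).all():           # perfect matching exists under tau
            best, hi = match, mid - 1    # partition k -> fog best[k]
        else:
            lo = mid + 1
    return best
```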
**Discussion.** The expense of IEP's first step depends mainly on the selected BGP solver, and Fograph allows swapping in appropriate solvers to adapt to specific graphs and reduce the overhead. The second step takes \(O(n^{2})\) iterations of threshold descending, according to the \(O(n^{2})\) length of the priority queue built from the bipartite graph \(\mathcal{B}\) with \(n\) partitions and \(n\) fogs. However, we can use binary search to expedite the threshold searching, which significantly decreases the number of iterations to \(O(\log n)\). As each iteration's Hungarian algorithm invocation requires \(O(n^{3})\), the second step of IEP takes a total time complexity of \(O(n^{3}\log n)\). We note that such a complexity is affordable since the number of available fog nodes in a real deployment is usually small (_e.g._\(<\)100). Besides, the scheduling overhead of IEP data placement is an upfront cost before runtime and can be amortized across multiple inferences. In our implementation, we apply the widely-adopted _METIS_[46] as the BGP solver and binary search for the mapping step, which takes only seconds in total for SIoT.
To verify the effectiveness of the proposed IEP algorithm, we examine its performance against two straw-man approaches: 1) METIS+Random, a trivial version that first invokes METIS for balanced partitions and then assigns them to arbitrary fog nodes, and 2) METIS+Greedy, which replaces IEP's partition-fog mapping procedure with a greedy heuristic, _i.e._ iteratively finding, for each partition, the fog that minimizes the edge weight \(\langle P_{k},f_{j}\rangle\). Fig. 8 depicts the results in three heterogeneous environments, where we observe that IEP always surpasses the baselines with lower serving latency, demonstrating the superiority of our resource-aware algorithm design. Specifically, IEP outperforms METIS+Greedy with average latency reductions of 10.9%, 19.1%, and 19.5% for the three model configurations, respectively.
### _Degree-Aware Compressed Data Collection_
According to IEP's data placement, each fog collects its input data individually in the runtime phase (Fig. 5). As discussed in §II-C, the considerable cost of uploading graph data makes it a critical bottleneck for real-time serving. To alleviate this bottleneck, Fograph integrates a _communication optimizer_ (CO, Fig. 6), operating in two steps.
**Degree-aware quantization** (DAQ). In the first step, we exploit the resilience of GNNs to low-precision representations and lower the data precision in a differentiated way using topological information. Concretely, we use each vertex's
Fig. 8: Performance comparison of IEP against straw-man approaches in three environments. Given the three types of fog nodes listed in Table II, their hardware and network conditions can be represented as follows. E1: {1\(\times\)_A_, 4\(\times\)_B_, 1\(\times\)_C_, 4G}; E2: {1\(\times\)_A_, 4\(\times\)_B_, 1\(\times\)_C_, 5G}; E3: {1\(\times\)_A_, 2\(\times\)_B_, 1\(\times\)_C_, WiFi}.
degree as a knob to modulate the quantization intensity on its feature vector. The rationale is that a vertex with a higher degree assimilates more abundant information from its neighbors, and is more robust to low bitwidths since its quantization errors can be smoothed through successive aggregations [47].
In detail, DAQ maintains a triplet \(\langle D_{1},D_{2},D_{3}\rangle\) to divide the vertices' degrees into four intervals \([0,D_{1})\), \([D_{1},D_{2})\), \([D_{2},D_{3})\) and \([D_{3},+\infty)\), and assigns respective quantization bitwidths \(\langle q_{0},q_{1},q_{2},q_{3}\rangle\) to them. Next, for each vertex, we check its degree to obtain a target bitwidth according to the interval it lies in, and apply a linear quantization. In Fograph, we derive four equal-length intervals based on the input graph's degree distribution and instantiate the quantization bits as \(\langle 64,32,16,8\rangle\) by default, as illustrated in Fig. 9. Note that \(\langle D_{1},D_{2},D_{3}\rangle\) and \(\langle q_{0},q_{1},q_{2},q_{3}\rangle\) are tunable to accommodate specific graph topologies and customized accuracy-latency preferences. The exploration of these configurations is left for future work.
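A minimal sketch of the per-vertex quantization is shown below; the interval boundaries are illustrative assumptions (Fograph derives them from the input graph's degree distribution), and the integer storage choices are likewise simplified.

```python
import numpy as np

def degree_aware_quantize(x, degree, D=(4, 8, 16), q=(64, 32, 16, 8)):
    """Quantize one vertex's feature vector x according to its degree.

    D = <D1, D2, D3> splits degrees into four intervals, each mapped to a
    bitwidth in q = <q0, q1, q2, q3>; higher-degree vertices receive fewer
    bits since aggregation smooths their quantization errors.
    """
    bits = int(q[np.searchsorted(D, degree, side='right')])
    if bits >= 32:                        # treat as (near) full precision
        return x.astype(np.float64 if bits == 64 else np.float32), None
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / (2 ** bits - 1) or 1.0
    codes = np.round((x - lo) / scale)
    codes = codes.astype(np.uint16 if bits > 8 else np.uint8)
    return codes, (lo, scale)             # dequantize: lo + codes * scale
```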
**Theorem 2**.: _Given the input feature's bitwidth as \(Q\), DAQ with configurations \(\langle D_{1},D_{2},D_{3}\rangle\) and \(\langle q_{0},q_{1},q_{2},q_{3}\rangle\) renders a compression ratio of \(\frac{1}{Q}[q_{3}-\sum_{i}F_{D}(D_{i})(q_{i}-q_{i-1})],i\in\{1,2,3\}\), where \(F_{D}(\cdot)\) is the cumulative distribution function of the graph degree distribution._
Theorem 2 gives the theoretical compression ratio of DAQ, with proof in Appendix B. By discriminatively reducing the bitwidths of feature vectors, DAQ substantially lowers the transferred data size without significant impact on the inference accuracy (\(<\)1% drop in our experiments). Our scheme differs from both 1) uniform quantization [47], which ignores the vertices' different quantization sensitivities and degrades accuracy, and 2) all-layer quantization, which demands complicated techniques [48], such as quantization-aware training, to fine-tune the model's prediction performance.
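The ratio can also be evaluated empirically; below is a small sketch of Theorem 2's formula, assuming 64-bit original features.

```python
import numpy as np

def daq_compression_ratio(degrees, D=(4, 8, 16), q=(64, 32, 16, 8), Q=64):
    """Expected bits per feature element under DAQ divided by the
    original bitwidth Q, following Theorem 2 with the empirical CDF."""
    deg = np.asarray(degrees)
    F = lambda d: np.mean(deg < d)                     # empirical F_D
    bits = q[3] - sum(F(D[i]) * (q[i + 1] - q[i]) for i in range(3))
    return bits / Q
```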
**Sparsity elimination.** The second step exploits the observation that feature vectors are amenable to compression: a major fraction of them are sparse and highly compressible due to their encoding mechanism, and this sparsity is further magnified by the precision reduction in the quantization step above. Hence, we compress the sparse data using the _LZ4_ compressor [49] with bit shuffling.
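A possible packing routine is sketched below, assuming the `lz4` Python bindings; a byte-level transpose serves as a simple stand-in for the bit shuffling of [49].

```python
import numpy as np
import lz4.frame  # pip install lz4

def pack(codes):
    """Shuffle same-significance bytes together, then LZ4-compress;
    this typically improves the ratio on sparse, low-entropy features."""
    flat = np.ascontiguousarray(codes).view(np.uint8)
    flat = flat.reshape(-1, codes.dtype.itemsize)
    return lz4.frame.compress(np.ascontiguousarray(flat.T).tobytes())

def unpack(blob, shape, dtype):
    """Invert pack(): decompress, undo the byte shuffle, restore dtype."""
    dtype = np.dtype(dtype)
    raw = np.frombuffer(lz4.frame.decompress(blob), dtype=np.uint8)
    flat = np.ascontiguousarray(raw.reshape(dtype.itemsize, -1).T)
    return flat.view(dtype).reshape(shape)
```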
**Deployment of CO.** The lifecycle of CO comprises packing and unpacking procedures, where the former is deployed at the end devices that contribute data sources and the latter is installed on fog nodes. While its cost can be amortized by the communication savings, we develop several optimizations to further reduce the overhead. For packing, we mitigate the burden on end devices by 1) pre-calculating the target quantization bitwidths before deployment and using them throughout the runtime (as long as the graph topology is unchanged), and 2) quantizing the feature vector's elements in parallel for additional acceleration. Since each device uploads its local data individually, the data packing process is naturally parallelized from a global view and its cost is apportioned. On the fog side, the received data are first decompressed and then dequantized back to the original bitwidth before inference. Further, we launch a separate thread for the unpacking procedure to pipeline data recovery and inference execution.
### _Distributed Execution Runtime_
With the generated execution plan, Fograph's runtime _execution engine_ (Fig. 6) orchestrates distributed GNN inference upon multiple nodes. Specifically, when an inference query is launched, fog nodes collect the necessary data from nearby sensory devices as per the data placement policy, and then collaborate to conduct the GNN execution. For each GNN layer, cross-fog data exchanges are carried out when necessary, _i.e._ when neighboring vertices' data belong to different data partitions. Next, the inference functions (Aggregate and Update) are invoked by the fog nodes to compute the layer over the data partitions in parallel. Repeating the above process for all layers completes the whole execution and produces the expected embeddings. Note that the model has been loaded and stays resident throughout the runtime so that it can be called immediately on demand. To adapt to dynamic fluctuations of resources, _e.g._ background load and bandwidth changes, the metadata server periodically aggregates the metadata from fog nodes, replays IEP to yield an updated data placement, and deploys it to fogs at system idle time.
The iterative layer processing is implemented with the Bulk Synchronous Parallel (BSP) model [50], where a synchronization step is triggered whenever data exchange is needed. Although the total number of synchronizations depends on the number of GNN layers, which is usually very small (_e.g._ GCN typically stacks two or three layers), we apply the following optimizations for further acceleration. First, the adjacency matrix of each data partition can be constructed prior to the execution as soon as the data placement is determined, in order to lower the runtime occupancy. Second, the synchronization runs as a separate thread to enable pipelining of data preparation and inference execution. Third, we wrap the execution on top of the mature framework _PyG_[38], which allows it to directly benefit from all existing kernel-level optimizations.
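From one fog node's perspective, the per-layer BSP loop can be summarized as the following sketch; `partition`, `comm.exchange`, and the `aggregate`/`update` methods are hypothetical placeholders for the corresponding runtime components, not Fograph's actual interfaces.

```python
def run_inference(model, partition, comm):
    """Layer-by-layer BSP execution of a GNN over one data partition."""
    h = partition.features                     # local vertex embeddings
    for layer in model.layers:
        # BSP synchronization step: exchange boundary-vertex embeddings
        halo = comm.exchange(h[partition.boundary])
        msgs = layer.aggregate(h, halo, partition.adj)   # Aggregate(...)
        h = layer.update(h, msgs)                        # Update(...)
    return h                                   # final embeddings
```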
### _Adaptive Workload Scheduling_
Even with the offline-optimal graph data placement, the distributed inference performance can become suboptimal due to fluctuations of computing resources, _e.g._ caused by machine load variation. This is particularly relevant since fog nodes usually run diverse services simultaneously. To this end, the _adaptive workload scheduler_ (Fig. 6) is developed to refine
Fig. 9: Illustration of Fograph’s degree-aware quantization. Each feature vector is quantized to a target bitwidth according to the degree of the vertex it is associated with.
the data placement tailored to dynamic load levels. Unlike the offline IEP that performs meticulous yet expensive optimization, the adaptive scheduler works online and must stay agile and agnostic to the ongoing inference. Therefore, we employ a lightweight adjustment method and a dual-mode regulation to adapt the workload distribution.
**Load balance indicator.** To reflect how skewed the load distribution is, we first define an indicator \(\mu_{j}\) for each node \(f_{j}\) by inspecting the ratio between its actual execution time \(T_{j}^{\text{real}}\) and the mean value of all fogs' measurements \(\frac{1}{n}\sum_{k}T_{k}^{\text{real}}\):
\[\mu_{j}=\frac{T_{j}^{\text{real}}}{\frac{1}{n}\sum_{k}T_{k}^{\text{real}}}, \quad\forall f_{j}\in\mathcal{F}, \tag{9}\]
where \(T_{j}^{\text{real}}\) is the real measured execution time of \(f_{j}\) in the last time interval, obtained from the online profiler. We further introduce a slackness factor \(\lambda\) to tune the imbalance tolerance. If there is a node such that \(\mu_{j}>\lambda\), this node violates the imbalance tolerance \(\lambda\) and is presumed to suffer from a high background load. Note that \(\lambda\) must be no smaller than 1: \(\lambda=1\) requires exact balance, whereas \(\lambda>1\) relaxes the balance constraint. Additionally, we count the number of overloaded nodes, denoted as \(n^{+}\), to gauge the global load skewness and use it to select the scheduling mode described below.
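Computing the indicator and the overloaded set is straightforward; in the sketch below, the tolerance value is an assumed example.

```python
import numpy as np

def load_skew(T_real, lam=1.2):
    """Per-node indicator mu_j of Eq. (9) and the overloaded nodes
    violating the imbalance tolerance lambda (lambda >= 1)."""
    T = np.asarray(T_real, dtype=float)
    mu = T / T.mean()
    return mu, np.flatnonzero(mu > lam)   # indices of overloaded fogs
```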
**Diffusion-based adjustment.** The diffusion method aims at amending the graph data placement to align with the load levels at a low cost. With the latest profiling data, it first selects the two partitioned vertex sets with the highest and lowest execution time, and then progressively migrates vertices from the overloaded set to the underloaded set until an estimated local balance is achieved. Each migration picks the boundary vertex that shares the most neighbors with the other side. As an example, Fig. 10 illustrates this diffusion process across fog nodes \(f_{1}\) and \(f_{2}\), which are assumed to be moderate and weak in computing capability, respectively. Without external load burdens, a graph data placement is decided initially, separating four/two vertices as in Fig. 10(a). Assuming a load increase abruptly occurs on \(f_{1}\) such that the two nodes' effective capabilities become on a par, diffusion is then applied to migrate vertices for balancing their workload. The number of to-be-migrated vertices is 1, and the adjustment chooses vertex \(D\) as the moving candidate since it connects the most edge-cuts between the subgraphs on \(f_{1}\) and \(f_{2}\). This results in the updated balanced layout in Fig. 10(b). In a data placement with multiple partitions, the above pairwise diffusion process continues for all unevenly-loaded nodes until the overall estimated performance satisfies the imbalance tolerance \(\lambda\).
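One pairwise diffusion step can be sketched as follows, assuming partitions are vertex sets and `graph` maps a vertex to its neighbors; the actual scheduler additionally estimates the number of vertices to migrate from the profiled execution times.

```python
def diffuse(src, dst, graph, n_moves):
    """Migrate n_moves boundary vertices from the overloaded partition src
    to the underloaded partition dst, each time picking the vertex that
    shares the most neighbors with dst to keep edge-cuts low."""
    for _ in range(n_moves):
        candidates = [v for v in src if any(u in dst for u in graph[v])]
        if not candidates:
            break
        best = max(candidates, key=lambda v: sum(u in dst for u in graph[v]))
        src.remove(best)
        dst.add(best)
```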
```
Input: Current graph data placement \(\pi\)
       Load imbalance tolerance \(\lambda\)
       Skewness threshold \(\theta\)
       Performance estimation model \(\omega\)
Output: Updated graph data placement \(\pi^{\prime}\)
1  \(\omega^{\prime}\leftarrow\) UpdateTimings\((\omega,\langle T_{1}^{\text{real}},\cdots,T_{n}^{\text{real}}\rangle)\)
2  \(\langle\mu_{1},\cdots,\mu_{n}\rangle\leftarrow\) CalculateSkew\((\langle T_{1}^{\text{real}},\cdots,T_{n}^{\text{real}}\rangle)\)
3  if \(\exists f_{j}\in\mathcal{F},\mu_{j}>\lambda\) then
4      \(n^{+}\gets n_{\{\mu_{j}>\lambda\}}\)
5      if \(n^{+}/n\leq\theta\) then
6          /* -- Lightweight adjustment -- */
7          \(\pi^{\prime}\leftarrow\) DiffAdjust\((\pi,\omega^{\prime},\langle\mu_{1},\cdots,\mu_{n}\rangle)\)
8      else
9          /* -- Global rescheduling -- */
10         \(\pi^{\prime}\leftarrow\text{IEP}(\mathcal{G},\omega^{\prime})\)
11     end if
12 else
13     \(\pi^{\prime}\leftarrow\pi\)
14 end if
15 return \(\pi^{\prime}\)
```
**Algorithm 2** Flow of adaptive workload scheduler
We refer to this method as _diffusion_ in that a flow of vertices continuously moves from overloaded regions to underloaded regions. The method is lightweight since it only operates on a small part of the graph and migrates a few vertices, yet it is effective as it consistently makes incremental improvements to the graph layout. However, when the load distribution is dramatically skewed, this incremental nature comes at a high cost. Therefore, we introduce a dual-mode scheduler that integrates the lightweight adjustment with global partitioning.
**Dual-mode scheduler.** The workload scheduler considers dual regulations: the lightweight diffusion-based adjustment discussed above and the heavy global partitioning that invokes the offline IEP. Algorithm 2 presents the scheduler's processing flow. As a first step, the scheduler uses the recorded execution times to update the performance estimation models and calculate the skewness indicators (Line 1-2). If there is a node such that \(\mu_{j}>\lambda\), the imbalance tolerance \(\lambda\) is violated, which subsequently triggers adjustments on the existing graph layout. To decide which mode to apply, we count the number of overloaded nodes, denoted as \(n^{+}\), and compare it with a user-specified skewness threshold \(\theta\) (Line 4-5). \(\theta\) is a positive decimal and is set to 0.5 by default. If \(n^{+}/n\leq\theta\), the skewness is still tolerable and the lightweight diffusion is applied. Otherwise, the fraction of overloaded nodes exceeds \(\theta\) and we pass the entire graph \(\mathcal{G}\) and the updated performance estimates \(\omega^{\prime}\) to IEP to yield a new data placement. Note that all layout modifications are first operated virtually in Algorithm 2 and are executed physically only when the final result is determined and the system is in an idle period.
Fig. 10: An instance of diffusion-based graph data placement adjustment, where vertex \(D\) is selected for migration from the overloaded fog node \(f_{1}\) to fog node \(f_{2}\).
## IV Evaluation
This section demonstrates Fograph's effectiveness in significantly improving GNN inference serving performance by examining its core components and comparing it against the current standard cloud implementation and straw-man fog inference systems.
### _Experimental Setup_
**Prototype and methodology.** We implement the Fograph prototype with three types of computing nodes, whose specifications are listed in Table II. We label them as Type \(A\), \(B\), and \(C\), respectively representing fog nodes with weak, moderate, and powerful capabilities. This categorization depends on the processors' power and available memory: Type-\(C\) fogs possess the highest computing power with the largest memory space, whereas Type \(A\) is on the opposite side and Type \(B\) is in between. Although fogs of Type \(A\) and \(B\) own the same processor, the former performs much worse than the latter on the used SIoT and Yelp datasets, reporting an average of 37.8% longer inference latency with GCN. Fograph is built on top of _PyG_[38], though its design is agnostic to the backend and can be conveniently switched to other engines like _DGL_[51]. We compare Fograph with the _de-facto_ standard cloud serving and a straw-man multi-fog deployment (multi-fog serving without any proposed optimization), all running the same workload. The configurations of the cloud and the emulation of distributed graph data uploading follow the methodology in §II-C. To ensure a fair comparison, the straw-man fog approach adopts the data placement strategy in state-of-the-art distributed GNN processing [39], which first calls _METIS_[46] to partition the data graph and then maps the partitions to fog nodes stochastically. According to this placement, the fog approach's runtime directly collects graph data without communication optimization, and launches collaborative inference upon the same distributed framework as Fograph. Without loss of generality, Fograph selects an arbitrary fog node as the metadata server among the used ones. We measure the end-to-end latency from data collection to GNN inference completion. All background loads are disabled during the runtime and each benchmark is evaluated 50 times to calculate the average results.
**GNN models and datasets.** Four GNN models are employed: GCN [26], GAT [27], GraphSAGE [28] and ASTGCN [29]. The first three are representative GNN models that have been widely used across GNN-based services [52], and their formulas are listed in Table I. ASTGCN is a spatial-temporal model tailored to traffic flow forecasting. All the models are implemented using the instances from the _PyG_ model zoo [53] or the original code from the model authors [54], and are trained prior to deployment.
The speedup of Fograph over cloud grows on average from 4.67\(\times\) to 5.39\(\times\) on SIoT when switching from WiFi to 4G. Vertically varying the datasets, a larger source graph enhances Fograph's superiority in saving costs, _e.g._ an average latency reduction of 80.63% and 70.21% over cloud for SIoT (larger) and Yelp (smaller), respectively.
An interesting observation from Fig. 11 is that the serving latency remains relatively stable for a given dataset and network condition, notwithstanding the model. This is because the communication overhead dominates the total cost for GCN, GAT and GraphSAGE inferences. Although execution would weigh more heavily if larger GNN models were applied, this fact highlights the importance of communication optimization in end deployment, which is exactly what Fograph emphasizes. In Fig. 12, the superiority of Fograph is more evident in contrast with the baselines, validating the effectiveness of optimizing bandwidth utilization and pipelining execution.
**Accuracy.** While Fograph profitably exploits compression techniques to reduce data collection overhead, its practicality hinges on preserving inference accuracy. To investigate this, we configure the communication optimizer with default settings and assess the approaches under WiFi connections3. Table IV shows the inference accuracy for SIoT and Yelp upon three models. The cloud and fog approaches maintain the same accuracy as they both preserve full-precision features. Fograph drops \(<\)0.1% for both SIoT and Yelp, which causes no substantial usability issues on the numerical results. The accuracy loss is minimal in SIoT because its features are simply one-hot encoded, where the benefit of quantization and compression is maximized without side effects.
Footnote 3: The inference accuracy is independent of the network conditions. Data corruption caused by unstable transmission, which may decrease the accuracy, is not considered in this paper.
### _Case Study: Traffic Flow Forecasting_
This subsection uses a realistic case to complement the performance comparison in demonstrating Fograph's superiority. We emulate a traffic flow forecasting application [29] by running inferences to predict future flow over the PeMS sensor network in an expected time window. In this setting, four nodes are employed: 1\(\times\)_A_, 2\(\times\)_B_, and 1\(\times\)_C_. The reason for configuring such a smaller, weaker cluster (minus two Type-_B_ fogs from the testbed in §IV-B) is to match the computing resources with the much smaller graph dataset targeted in this experiment (compared to SIoT and Yelp). To fit the application, we select ASTGCN, a representative spatial-temporal model with GCN as a building block.
**IEP result.** Fig. 13(a) visualizes the sensors' spatial distribution in PeMS and the data placement results. Each vertex is colored to indicate the fog node it is placed on. From a global view, the sensors, as the graph vertices, exhibit a clustered placement pattern, demonstrating the locality preservation of IEP. Moreover, the differing number of vertices in each partition reflects IEP's heterogeneity awareness. This is clearer in Fig. 13(b), which counts the number of assigned vertices and the execution latency of each fog node. The execution latencies among fog nodes are close even though the nodes hold entirely different numbers of vertices. In particular, the fourth fog node (Type _C_) accommodates the most vertices (the yellow points in Fig. 13(a)), yet takes the lowest execution time. This is because it possesses the highest computing capability among the nodes (the others being Type \(A\) and \(B\)), and Fograph thus assigns it more workload (vertices). This load balance is attributable to the heterogeneity-aware IEP, which effectively aligns the execution workload with the diverse computing resources and ensures maximum resource utilization for parallel and distributed inference.
**Latency and throughput.** Fig. 13(c) shows the inference latency and throughput of the approaches under 4G, 5G, and WiFi. Analogous to the comparison in Fig. 11, Fograph surpasses the baselines with the lowest costs. Specifically, with varying channels, Fograph consistently attains the lowest serving latency, achieving speedups of up to 2.79\(\times\) and 1.43\(\times\)
Fig. 11: Achieved latency of three GNN models for SIoT and Yelp under varying network conditions.
Fig. 12: Achieved throughput of three GNN models for SIoT and Yelp under varying network conditions.
over cloud and fog, respectively. The latency of traffic flow forecasting is higher than for the SIoT and Yelp tasks because it predicts a window of 12 flow values, one for every 5 minutes in the next hour, which puts a higher workload on execution. Fig. 13(d) presents the corresponding throughput results, where Fograph outperforms all other approaches, showing its superiority.
**Forecasting performance.** Table V records three commonly used metrics for traffic flow forecasting: Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE). We compare Fograph over two time horizons against cloud, straw-man fog, and an additional compression method that uniformly compresses all feature vectors into 8-bit precision. Similar to the results in Table IV, Fograph induces a minimal error expansion of around 0.1 for all metrics compared with the full-precision versions (cloud and fog). In contrast, uniformly quantizing all features to 8 bits results in an evident error gap to Fograph, which may significantly sacrifice the serviceability of the forecasting results. This prediction advantage of Fograph originates from the degree-aware quantization, which exploits the data sensitivity in GNN inference and enables both latency reduction and accuracy preservation.
### _Optimization Implication_
This subsection investigates the performance boost of each individual optimization technique introduced in §III, using the SIoT dataset and the cluster described in §IV-C.
**Metadata profiler.** We first show the profiler's effectiveness by comparing the execution time estimated by the profiler with the real execution time measured in actual inferences. Fig. 14 depicts the profiling and prediction results for different GNN models and datasets, collected on a fog node of Type \(B\). The solid line indicates exact equivalence between measurement and prediction, while the dashed lines mark a relative difference of \(\pm 10\)% between them. All data points are encompassed by the dashed lines, which implies a small and bounded variance between actual execution latency and estimates, demonstrating the effectiveness of the proxy-guided profiling methodology. Moreover, the figure shows that if one graph has a higher execution latency than another, their latency estimates preserve this ordering. This demonstrates that the profiler's latency prediction is an appropriate tool to guide IEP's data placement optimization.
**Inference execution planner and communication optimizer.** Fig. 15 normalizes the latency of four approaches: fog, Fograph, and Fograph's ablated variants without the inference execution planner (IEP) or the communication optimizer (CO). Specifically, Fograph without IEP replaces its data placement strategy with the one in the straw-man fog approach, and Fograph without CO simply applies direct data transmission between data sources and fog nodes. All other configurations remain unaltered for these two ablated counterparts except for their respectively targeted modules. From a global view, we observe that both modules contribute to performance improvement, while their combination yields higher speedups. IEP and CO have similar effects but different focuses, as indicated by their execution statistics in Fig. 15 (right). The former essentially solves for a graph layout that maximizes parallelization and thus saves overall latency on the execution side, yielding a smaller execution ratio, whereas the latter centers on the data uploading aspect and effectively reduces transmission costs, which lowers the proportion of the communication side. These almost orthogonal optimization directions allow their combination to fully deliver the performance gains.
**Workload scheduler.** To appraise the responsiveness of Fograph in adapting to dynamic load fluctuation, we use production workload traces from Alibaba [57]. The trace contains background CPU load variations running on clusters, and we select a snapshot of 1000 timestamps to exert pressure on the fog nodes, as shown in Fig. 16 (top). The associated behaviors of Fograph with and without the workload scheduler are recorded in Fig. 16 (bottom). Note that the scheduler-less version is a non-dynamic counterpart implemented by deactivating the workload scheduler module in Fograph and maintaining the original IEP data placement throughout. Given the superiority of IEP, it represents a fair baseline for non-dynamic load-balancing methods such as a fixed division of computing pressure according to computing capability and
Fig. 13: Case study results, where (a) visualizes the PeMS sensors’ spatial distribution and placement on fog nodes; (b) shows the number of assigned vertices and inference latency measurements of fog nodes; (c) and (d) present the latency and throughput results, respectively.
communication distance. When the fog nodes share similar load levels (timestamps [0, 150]), the two versions report close inference latency. As node 4's background load increases, the latency of the scheduler-less version rapidly climbs while the integrated Fograph stays relatively steady. In particular, the serving latency can exceed 1s in the extreme situation without the scheduler, while Fograph remains below 0.9s. At the other end, when node 4's load diminishes, its computing capability is released so that Fograph is able to further lessen the costs and achieve up to 18.79% latency reduction over the ablated copy. Overall, we observe that the lack of a scheduler leads to a latency trajectory that follows the overloaded node's load trace and magnifies load fluctuations in serving costs. Instead, by employing the workload scheduler, Fograph adjusts to resource dynamics and provides continuously stable serving. Moreover, we observe that Fograph reacts agilely to load dynamics for low-latency service provisioning. With a very mild communication delay of \(\sim\)0.2s between the metadata server and fog nodes, our measurements report an average of 4.32s from imbalance detection to load migration.
### _Scalability_
This subsection investigates the scalability of Fograph using much larger synthetic RMAT datasets, with a varying number of Type-\(B\) fog nodes. Fig. 17 shows Fograph's serving costs over the models and datasets as the number of fogs increases, where we observe that the latency effectively shrinks as computing resources are added. Employing multiple fogs (\(>\)2) performs much better than using a single fog for graphs of every size, demonstrating Fograph's efficient resource utilization through parallel and distributed execution. Upon larger graphs (_e.g._ RMAT-100K), Fograph gains clear performance improvements when additional fog nodes are appended, suggesting its capability to handle heavier serving workloads. We remark that the serving costs converge for all graphs when there are ample fog nodes, which implies that Fograph can readily afford million-edge graphs with just six moderate fog nodes.
### _GPU Enhancement_
This subsection explores how GPUs enhance Fograph's serving performance. We equip each Type-\(B\) fog node with an Nvidia GTX 1050 GPU, and run GCN inference over the synthetic RMAT-100K dataset. Fig. 18 visualizes the achieved latency of the straw-man fog method and Fograph, with and without GPU. For a single fog, both fog and Fograph fail with GPU and encounter the out-of-memory (OOM) error, due to the limited GPU memory. By extending to multiple fogs, however, the GPU clearly shows its advantage over the CPU, and makes progressive improvements as the number of fogs grows, demonstrating Fograph's enhanced performance with hardware accelerators. We also find that serving with a small number of fog nodes (_e.g._ 2 fog nodes) exhibits a broader performance gap between the solutions with and without GPU. This implies GPUs are particularly expedient when the available fog resources are inadequate for the targeted GNN services. Moreover, Fograph without GPU can reach even lower latency than the straw-man fog with GPU (with \(>\)3 fog nodes), demonstrating the substantial performance advance from the proposed set of optimizations.
## V Related Work
As an intersection of GNN processing and fog infrastructure, Fograph provides a group of GNN-generic designs for maximizing the architectural advantages of fog computing. Next, we discuss these two lines of work.
**Accelerating GNN processing.** A growing body of recent works from both the research [58, 59, 60, 61, 62, 63] and industry
Fig. 16: Fograph’s behaviour on real load trace, where the top subfigure depicts the load fluctuation of four fog nodes and the bottom subfigure shows the inference latency variation of Fograph and its counterpart without adaptive workload scheduler.
Fig. 14: Profiling measurements and the prediction results of varying GNN models and datasets. The solid line indicates the estimated execution time exactly equals the actual execution time, whereas the dashed lines draw a relative difference of \(\pm\)10% between them.
Fig. 15: Performance comparison of ablated Fograph counterparts in terms of inference execution planner (IEP) and communication optimizer (CO).
communities [64, 65, 66, 17, 67] have focused on achieving high-performance GNN processing by optimizing different execution levels. From a hardware perspective, a number of domain-specific processors [68, 64, 69, 70] have been designed, either customizing the microarchitecture to the GNN execution semantics or altering hardware-software interfaces for graph data reading/writing. The aggregate and update functions are optimized separately according to their access patterns, data reusability, and computation intensities. Fograph is orthogonal to these fundamental efforts and can directly enjoy their foundational improvements.
From a library perspective, PyG [38] and DGL [51] are representative efforts that provide GNN-tailored API support atop neural network execution engines like PyTorch and TensorFlow. The message-passing model is exploited for unified programmability and the matrix operations are specifically optimized with specialized kernels. Fograph utilizes these libraries as the backend so as to fully benefit from their underlying optimizations.
From a system perspective, GNN execution is treated and optimized as an integrated production process considering its global lifecycle, from graph storage [16, 17], data loading [71, 72], and memory management [73, 58, 74] to multi-GPU execution [75, 64]. Various systems have been proposed to address some of these aspects, _e.g._, \(P^{3}\)[65] for scalable processing and Dorylus [61] for affordable training. Nonetheless, the majority of these systems focus only on training rather than inference, and all target single powerful machines or cloud environments, ignoring the inherent data uploading overhead in the serving pipeline. Instead, Fograph capitalizes on the more realistic and practical scenario of GNN serving, emphasizing and implementing the unique potential of fog computing to yield high performance.
**Fog-enabled DNN inference.** To achieve efficient intelligent services close to the end, a line of works [76, 77, 78, 79, 80, 81] has explored collaborative execution with the assistance of fog computing. By splitting and mapping the input workloads or model parameters, these works parallelize CNN or RNN inference over a cluster of fog nodes to meet dedicated service-level objectives. A few constraints are usually added in accordance with application requirements such as execution deadlines [81, 82] and energy efficiency [80]. Fograph also lies in this category of work, being the first to bring GNN workloads into fog-enabled intelligence. Further, Fograph tailors its design to bridge the gap between the unique characteristics of GNN execution and the distributed, heterogeneous environments of fog, which notably improves the overall performance beyond cloud and vanilla fog deployments.
## VI Discussion and Future Work
Fograph is a pilot effort in bringing fog computing to GNN processing systems and yet has certain limitations.
**Exploiting inference context.** Fograph has explored leveraging the unique characteristics of GNNs and graphs, _e.g._ degree-aware quantization, to boost serving performance. As a result, Fograph's performance largely relies on the size and complexity of the input data, where input graphs with larger vector sizes and sparser features further highlight its superiority over cloud solutions. However, the rich semantics of other inference contexts remain to be exploited, including input graph properties, application workflow patterns, and the specialties of inference functions. Recent works on GNN performance characterization [83, 84] can provide useful insights to enhance Fograph and guide parameter tuning and module selection.
**Complex heterogeneity and dynamics.** The heterogeneity and dynamics considered in Fograph are mainly on the execution side, _i.e._ the diverse and fluctuating computing capabilities. The allocated bandwidth is implicitly assumed to be relatively stable for all fog nodes. Although this is realistic in many fog computing cases (_e.g._ in a closed factory or campus [85]), real-world serving could confront more complicated situations where the connections between sensors and fog nodes vary and may even fail [86]. In these cases, we may ignore the few vertices with ultra-high transmission delay during data collection in order to stabilize the overall serving latency. Consequently, the long tail of the data collection time distribution is squeezed, and cloud serving could act as an efficient complement to Fograph for more robust serving performance.
**Scalability and fog-cloud collaboration.** Fograph's scheduling employs a centralized metadata server to attain fair resource utilization, which in principle tunes the communication-computation tradeoff for the distributed runtime. While it works well for the typical moderate-scale deployments in fog scenarios, it may fall short in scaling up to very large graphs and massive numbers of fog nodes due to the single-point scheduler. To address this, one potential solution is to apply a lightweight, decentralized data placement strategy for inference execution planning, such as hashing [65, 71]. Alternatively, we may utilize the abundant cloud resources as a supplement to accomplish scalable GNN processing, _e.g._ by designing a fog-cloud collaboration mechanism that uses
Fig. 17: The serving latency of Fograph with varying size of datasets and varying number of fog nodes.
Fig. 18: The serving latency of varying number of fog nodes with and without GPU. OOM indicates the out-of-memory error.
fog nodes to collect and compress data and cloud servers to accommodate full-batch GNN processing at scale.
**Other optimizing objectives.** The proposed system concentrates on rendering GNN model serving in a real-time manner, whereas end deployments may need to satisfy additional Service-Level Agreements (SLAs) such as server costs and memory footprint [82, 87]. A deadline can be integrated as a constraint in the workload scheduler, while the memory issue can be accounted for by redesigning the problem formulation in IEP. Composite SLAs require supplementary scheduling to jointly optimize the system behavior across multiple objectives. Our future work intends to enhance Fograph to meet additional types of objectives, _e.g._ energy consumption and privacy preservation.
## VII Conclusion
In this paper, we present Fograph, a distributed GNN inference system that addresses real-time GNN serving with fog computing. Fograph introduces a new serving pipeline that exploits the diverse resources of distributed fog servers for resilient service rendering. Through a heterogeneity-aware inference execution planner and an adaptive workload scheduler that effectively map the input graph over multiple fog nodes, Fograph maximizes parallelization while simultaneously adapting to resource fluctuation. By employing a GNN-specific communication optimizer, Fograph delivers higher performance than state-of-the-art cloud serving and basic fog deployment, without sacrificing the overall system's accuracy and validity. Since GNNs have been widely adopted in a broad range of IoT and fog scenarios, the proposed system and its workflow can serve as a basis for future analysis and optimization of specific GNN-based services.
|
2303.10256 | PINNSim: A Simulator for Power System Dynamics based on Physics-Informed
Neural Networks | The dynamic behaviour of a power system can be described by a system of
differential-algebraic equations. Time-domain simulations are used to simulate
the evolution of these dynamics. They often require the use of small time step
sizes and therefore become computationally expensive. To accelerate these
simulations, we propose a simulator - PINNSim - that allows to take
significantly larger time steps. It is based on Physics-Informed Neural
Networks (PINNs) for the solution of the dynamics of single components in the
power system. To resolve their interaction we employ a scalable root-finding
algorithm. We demonstrate PINNSim on a 9-bus system and show the increased time
step size compared to a trapezoidal integration rule. We discuss key
characteristics of PINNSim and important steps for developing PINNSim into a
fully fledged simulator. As such, it could offer the opportunity for
significantly increasing time step sizes and thereby accelerating time-domain
simulations. | Jochen Stiasny, Baosen Zhang, Spyros Chatzivasileiadis | 2023-03-17T21:42:58Z | http://arxiv.org/abs/2303.10256v3 | Solving Differential-Algebraic Equations in Power Systems Dynamics with Neural Networks and Spatial Decomposition
###### Abstract
The dynamics of the power system are described by a system of differential-algebraic equations. Time-domain simulations are used to understand the evolution of these dynamics. They can be computationally expensive due to the stiffness of the system, which requires the use of finely discretized time-steps. By increasing the allowable time-step size, we aim to accelerate such simulations. In this paper, we use the observation that even though the individual components are described using both algebraic and differential equations, their coupling only involves algebraic equations. Following this observation, we use Neural Networks (NNs) to approximate the components' state evolution, leading to fast, accurate, and numerically stable approximators, which enable larger time-steps. To account for the effects of the network on the components and vice-versa, the NNs take the temporal evolution of the coupling algebraic variables as an input for their prediction. We initially estimate this temporal evolution and then update it in an iterative fashion using the Newton-Raphson algorithm. The involved Jacobian matrix is calculated with Automatic Differentiation and its size depends only on the network size but not on the component dynamics. We demonstrate this NN-based simulator on the IEEE 9-bus test case with 3 generators.
## I Introduction
Solving Differential-Algebraic Equations (DAEs) constitutes one of the indispensable computational tasks for analyzing power system dynamics, e.g., for the analysis of transient stability phenomena. The size and complexity of the power system make solving these DAEs computationally demanding. In addition, the ongoing transition of the power system towards more distributed and heavily inverter-based generation increases this burden even further. Unsurprisingly, the acceleration of DAE simulations has been a research topic for many years. The fundamental problem setup as described in [1] has undergone little change in the past decades. The main lines of work, as [2] reports, have been on 1) simplifying models, 2) decomposing the problem (spatially, temporally, and across the numerical schemes; see also [3]) to allow for parallelization, and 3) using pre-computed analytical approximations for faster online evaluation. Additionally, the use of parallel processing offers potential for acceleration [4].
In recent years, using Machine Learning (ML) to solve differential equations has emerged as an active area of research. ML-based methods can be evaluated in real-time and have the capacity to approximate complicated functions, including ones that usually cannot be expressed in closed form. Physics-Informed Neural Networks (PINNs), as reviewed in [5], are used for approximating the solution of partial differential equations. The ideas of domain decomposition are used in [6, 7] and can extend beyond PINNs.
In the area of power system dynamics, works in [8, 9, 10] have used ML-based methods to predict transient behaviors of these systems. They effectively provide approximate DAE-solvers. However, it is unclear whether these approaches are scalable with respect to the system size and different network configurations without becoming prohibitively expensive to train. A potential alternative to address this problem of scalability is the development of hybrid methods, that combine ML-based approaches with conventional numerical DAE-solution schemes to leverage their respective advantages. In [11], the authors train a NN to approximate the unknown dynamics of a system component which can then be integrated into a DAE-solver. But this approach does not fundamentally overcome the scaling challenges in solving DAEs.
In order to enable larger time-step sizes and hence to accelerate the simulation, we focus in this paper on the use of NNs as approximators for the state evolution of single components; we thereby essentially apply a spatial decomposition to the system. In contrast to other explicit approximators of a component's state evolution, NNs are fast, highly accurate, and numerically stable. However, it is nontrivial to model the interactions of these components in the network. The solution approach we present is structurally similar to the Semi-Analytical Solution (SAS)-based method in [12].
The SAS-based methods make use of time-power series [12] or Adomian Decomposition [13] to approximate the state evolution. An essential part in these approaches is to model the interactions between various components (e.g., how generators interact with each other through power flow along the lines). Since the components are coupled with each other through algebraic equations, one way to "anticipate" their interaction is to predict or estimate the evolution of the algebraic variables as a function of time. For example, finding an approximate power series expansion in the time variable can lead to larger time-steps and therefore accelerate the simulations [2, 12]. Hence the quality of these approximations is of critical importance.
In this paper, we propose a NN-based simulator for power system dynamics. The simulator consists of a NN for each
dynamic system component and a network model that governs their interaction. To model these interactions, the NNs use the temporal evolution of the network algebraic variables as an input for their prediction. As the temporal evolution of the network algebraic variables is initially unknown - when it is known, we "solved" the problem - we first estimate it by means of a power series with respect to time. We then formulate an optimization problem to update this estimate by matching it with the prediction of the NNs.
Section II introduces the problem formulation and Section III describes the proposed solution approach. Section IV introduces the case study and Section V the results. Section VI discusses the results and Section VII concludes.
## II Problem formulation
We consider an autonomous, semi-explicit system of DAEs
\[\frac{d}{dt}\mathbf{x} =\mathbf{f}(\mathbf{x}(t),\mathbf{y}(t),\mathbf{u}) \tag{1a}\] \[\mathbf{0} =\mathbf{g}(\mathbf{x}(t),\mathbf{y}(t)) \tag{1b}\]
with differential variables \(\mathbf{x}(t)\in\mathbb{R}^{m}\), algebraic variables \(\mathbf{y}(t)\in\mathbb{R}^{n}\), control inputs \(\mathbf{u}\in\mathbb{R}^{l}\), and the functions \(\mathbf{f}:\mathbb{R}^{m+n+l+1}\mapsto\mathbb{R}^{m}\) and \(\mathbf{g}:\mathbb{R}^{m+n+1}\mapsto\mathbb{R}^{n}\). We furthermore assume that \(\mathbf{g}\) is continuously differentiable with respect to \(\mathbf{y}\), hence, this system of DAEs is of Hessenberg index-1 form [14]. Given the initial state \(\mathbf{x}_{0}=\mathbf{x}(t_{0})\) of the system, our goal is to evaluate the trajectory of the dynamic variables \(\mathbf{x}(t)\) and of the algebraic variables \(\mathbf{y}(t)\) for the time interval \(t\in[t_{0},t_{0}+\Delta t]\).
The DAE in (1) is nontrivial to solve and a great deal of work has been dedicated to solving such systems in many applications [15, 16]. In this paper, we consider solving (1) in the context of power systems, where \(\mathbf{y}(t)\) describes bus terminal voltages and \(\mathbf{x}(t)\) includes variables related to dynamics within the power system components. For example, \(\mathbf{x}(t)\) includes the internal states of the generators (see [17] and the references within). A particular challenge of the DAEs that describe power system dynamics is that they can be stiff, and a number of techniques have been developed to address this challenge [1, 18, 19, 20]. These techniques, however, tend to be computationally intensive and are difficult to use in real-time or when a large number of simulations need to be performed in a short time (e.g., for systems with high levels of renewable resources).
To overcome these computational challenges, we leverage the underlying sparsity in power systems. Namely, we use the fact that the differential variables associated with each bus evolve based only on the variables at that bus, except for a coupling constraint on the algebraic variables.1 This coupling constraint is the current balance, derived from Kirchhoff's and Ohm's laws, and constitutes (1b):
Footnote 1: In this paper, we do not consider the very fast electromagnetic transients on the transmission lines.
\[\mathbf{0}=\bar{Y}_{N}\bar{\mathbf{V}}(t)-\bar{\mathbf{I}}(\mathbf{x}(t),\bar{\mathbf{V}}(t)). \tag{2}\]
\(\bar{\mathbf{V}}\in\mathbb{C}^{n}\) contains the complex voltages at the \(n\) buses in the system and \(\bar{Y}_{N}\in\mathbb{C}^{n\times n}\) denotes the complex admittance matrix of the network. The complex current injections at the \(n\) buses, i.e., from components such as synchronous generators, converters, or loads, are collected in the vector \(\bar{\mathbf{I}}\in\mathbb{C}^{n}\). We assume \(m\) components to be connected to the system with the differential states \(\mathbf{x}_{i}\in\mathbb{R}^{p}\) and control inputs \(\mathbf{u}_{i}\in\mathbb{R}^{q}\) for the \(i\)-th component.2 The \(i\)-th component is connected at the \(i\)-th bus and therefore depends only on the complex voltage \(\bar{V}_{i}\in\mathbb{C}^{1}\). Each component is governed by a set of differential equations \(\mathbf{f}_{i}:\mathbb{R}^{p+2+1+q}\mapsto\mathbb{R}^{p}\) and a function \(h_{i}:\mathbb{R}^{p+2}\mapsto\mathbb{C}^{1}\) that determines the current injection \(\bar{I}_{i}\)
Footnote 2: All the results in the paper hold if \(p\) is different for each bus \(i\), but we avoid this notation to reduce clutter in the variables.
\[\frac{d}{dt}\mathbf{x}_{i} =\mathbf{f}_{i}(\mathbf{x}_{i}(t),\bar{V}_{i}(t),\mathbf{u}_{i}), i=1,\ldots,m \tag{3a}\] \[\bar{I}_{i} =h_{i}(\mathbf{x}_{i}(t),\bar{V}_{i}(t)), i=1,\ldots,m \tag{3b}\]
The concatenation of all differential variables is of dimension \(m\cdot p\), i.e., \(\mathbf{x}\in\mathbb{R}^{m\cdot p}\).
## III Methodology
We approach the solution of (2) and (3) by finding an explicit expression for \(\bar{\mathbf{V}}(t)\) for \(t\in[t_{0},t_{0}+\Delta t]\). Given \(\bar{\mathbf{V}}(t)\), the solution of (3a) can be computed quickly, as we now only need to solve \(m\) much smaller and simpler problems instead of a single large and difficult one. We start by initially estimating the temporal evolution of the algebraic variables \(\hat{\bar{\mathbf{V}}}(t)\) (Section III-A). Using this estimate, we employ a NN for each component \(i\) to predict the trajectory \(\hat{\mathbf{x}}_{i}(t,\mathbf{x}_{0},\hat{\bar{\mathbf{V}}}(t))\) (Section III-B). We can then evaluate (2) to assess the quality of the estimate \(\hat{\bar{\mathbf{V}}}(t)\) and update it iteratively based on the mismatch of the currents in (2) (Section III-C). As soon as this update scheme has converged, we have solved the problem for the present time-step and can repeat the procedure for the next time-step. We are thereby able to simulate trajectories of arbitrary length (Section III-D).
This approach allows us, firstly, to incorporate NNs, which are powerful explicit function approximators, into solution algorithms for DAEs. NNs can approximate \(\hat{\mathbf{x}}(t)\) well for large time-steps even of complicated dynamics, they do not experience numerical instability, and they are fast to evaluate. The second benefit relates to the fact that the proposed solution algorithm is indifferent to the dimensionality of the differential variable vector, i.e., \(\dim(\mathbf{x})\). This addresses a major challenge for solving DAEs in very large power systems as \(\dim(\mathbf{x})\) rapidly grows when many complicated components are present.
### _Parametrization of the voltage estimate \(\hat{\bar{\mathbf{V}}}(t)\)_
Based on the initial state \(\mathbf{x}_{0}\), we can compute the initial voltages \(\bar{\mathbf{V}}\) that satisfy the current balance. We now take an initial estimate for the evolution of the complex voltage in its polar form at each bus for the time interval \([t_{0},t_{0}+\Delta t]\). We express this estimate as a power-series with respect to
time for the voltage magnitude \(V_{i}\) and the voltage angle \(\theta_{i}\)
\[\bar{V}_{i}(t) =V_{i}(t)e^{j\theta_{i}(t)} \tag{4}\] \[\approx\left(\sum_{k=0}^{r}V_{k,i}(t-t_{0})^{k}\right)e^{j\left( \sum_{k=0}^{r}\theta_{k,i}(t-t_{0})^{k}\right)} \tag{5}\]
up to power \(r\). The coefficients \(V_{0,i},V_{1,i},\ldots,V_{r,i}\) and \(\theta_{0,i},\theta_{1,i},\ldots,\theta_{r,i}\) form the parameters which we will later update to improve the initial estimate. We subsequently refer to this estimate as \(\hat{\bar{V}}_{i}(t,\mathbf{\Xi}_{i})\). The vector \(\mathbf{\Xi}_{i}\) collects all parameters at bus \(i\)
\[\mathbf{\Xi}_{i}=\begin{bmatrix}V_{0,i}&\theta_{0,i}&\ldots&V_{r,i}&\theta_{r,i}\end{bmatrix}\in\mathbb{R}^{1\times 2r}. \tag{6}\]
This parametrization is repeated for all \(n\) buses in the system. We collect all \(\mathbf{\Xi}_{i}\) in the vector \(\mathbf{\Xi}\in\mathbb{R}^{2rn}\).
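A minimal sketch for evaluating this voltage estimate at a given time, assuming the interleaved coefficient layout of (6):

```python
import numpy as np

def voltage_estimate(t, Xi, t0=0.0):
    """Evaluate the complex bus-voltage estimate of Eq. (5) at time t.

    Xi = [V_0, theta_0, ..., V_r, theta_r] holds the power-series
    coefficients of one bus, interleaved as in Eq. (6)."""
    V_coef, th_coef = Xi[0::2], Xi[1::2]
    powers = (t - t0) ** np.arange(len(V_coef))
    return (V_coef @ powers) * np.exp(1j * (th_coef @ powers))
```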
### _Explicit approximation of \(\hat{\mathbf{x}}_{i}(t)\) by a Neural Network (NN)_
The exact solution for the evolution of the differential variables \(\mathbf{x}_{i}(t)\) of machine \(i\) can be obtained by integration of (3a)
\[\mathbf{x}_{i}(t)=\mathbf{x}_{0,i}+\int_{t_{0}}^{t}\mathbf{f}_{i}\left(\mathbf{x}_{i}(\tau), \bar{V}_{i}(\tau),\mathbf{u}_{i}\right)d\tau. \tag{7}\]
As there usually exists no explicit analytical solution to (7), we seek to approximate the solution. In a first step, we replace \(\bar{V}_{i}(\tau)\) by the estimate for the bus voltage \(\hat{\bar{V}}_{i}(\tau)\)
\[\mathbf{x}_{i}(t)\approx\mathbf{x}_{0,i}+\int_{t_{0}}^{t}\mathbf{f}_{i}\left( \mathbf{x}_{i}(\tau),\hat{\bar{V}}_{i}(\tau),\mathbf{u}_{i}\right)d\tau. \tag{8}\]
Because an explicit analytical solution is usually still not attainable, we now approximate (8) by a NN to obtain
\[\hat{\mathbf{x}}_{i}^{NN}(t)=NN(t,\mathbf{x}_{0,i},\hat{\bar{V}}_{i}(t), \mathbf{u}_{i}). \tag{9}\]
As we have parameterized the estimate \(\hat{\bar{V}}_{i}(t)\) in terms of the parameters \(\mathbf{\Xi}_{i}\), the input \(\mathbf{z}_{0}\) to the NN becomes
\[\mathbf{z}_{0}=[t,\mathbf{x}_{0,i},\mathbf{\Xi}_{i},\mathbf{u}_{i}],\quad\mathbf{z}_{0}\in\mathbb{ R}^{1+p+2r+q} \tag{10}\]
We use a multi-layer perceptron NN which is a sequence of linear transformations parameterized by weight matrices \(\mathbf{W}_{k}\) and bias vectors \(\mathbf{b}_{k}\) and the application of a non-linear function \(\sigma\) in each of the \(K\) hidden layers
\[\mathbf{z}_{k+1} =\sigma\left(\mathbf{W}_{k+1}\mathbf{z}_{k}+\mathbf{b}_{k+1}\right),\,\forall k =0,1,\ldots,K-1 \tag{11a}\] \[\hat{\mathbf{x}}_{i}^{NN} =\mathbf{x}_{0,i}+(\mathbf{W}_{K}\mathbf{z}_{K}+\mathbf{b}_{K}). \tag{11b}\]
In our simulation, we use \(\sigma(\cdot)=\tanh(\cdot)\) and three layers, i.e., \(K=3\). As a baseline, we consider an explicit Runge-Kutta (RK)-based approximation scheme for (8)
\[\hat{\mathbf{x}}_{i}^{RK}(\Delta t,\mathbf{x}_{0,i},\mathbf{\Xi}_{i})=\mathbf{x}_ {0,i}+\Delta t\sum_{j=1}^{\nu}b_{j}\mathbf{k}^{(j)} \tag{12}\] \[\mathbf{k}^{(j)}=\mathbf{f}\left(t_{0}+c_{j}\Delta t,\mathbf{x}_{0,i}+\Delta t \sum_{l=1}^{\nu}a_{jl}\mathbf{k}^{(l)},\mathbf{\Xi}_{i},\mathbf{u}_{i}\right), \tag{13}\]
where the matrix of coefficients \(a_{jl}\) is strictly lower triangular, i.e., the scheme is explicit. Other choices such as time-power series [12] or Adomian decomposition methods [13] are equally possible. In either case, the state evaluation depends explicitly on \(\mathbf{\Xi}_{i}\), which together with \(t\) accounts for the evolution of the network algebraic variables.
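For concreteness, a generic explicit RK step of the form (12)-(13) can be sketched in a few lines of Python; the strictly lower-triangular tableau \(A\) is what makes the scheme explicit. This is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def explicit_rk_step(f, t0, x0, dt, A, b, c):
    """One explicit Runge-Kutta step, cf. Eqs. (12)-(13).

    A is assumed strictly lower triangular (explicit scheme); b and c are
    the usual Butcher-tableau weights and nodes. f(t, x) is the right-hand
    side with the voltage parameters Xi already bound into it.
    """
    k = []
    for j in range(len(b)):
        x_stage = x0 + dt * sum(A[j][l] * k[l] for l in range(j))
        k.append(f(t0 + c[j] * dt, x_stage))
    return x0 + dt * sum(bj * kj for bj, kj in zip(b, k))

# classical RK4 tableau as an example
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b, c = [1/6, 1/3, 1/3, 1/6], [0, 0.5, 0.5, 1]
```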
### _Update scheme for voltage parametrization_
Independent of the approximation method used for \(\hat{\mathbf{x}}_{i}\), we can now compute each component's current injection \(\hat{\bar{I}}_{i}(\hat{\mathbf{x}}_{i},\hat{\bar{V}}_{i})\). We then define the current mismatch \(\mathbf{\varepsilon}_{I}^{(k)}\in\mathbb{R}^{2n}\) in iteration \(k\)
\[\mathbf{\varepsilon}_{I}^{(k)}:=\begin{bmatrix}\Re\left(\bar{Y}_{N}\hat{\mathbf{V}}(t, \mathbf{\Xi}^{(k)})-\hat{\bar{\mathbf{I}}}(t,\mathbf{x}_{0},\mathbf{\Xi}^{(k)})\right)\\ \Im\left(\bar{Y}_{N}\hat{\mathbf{V}}(t,\mathbf{\Xi}^{(k)})-\hat{\bar{\mathbf{I}}}(t,\mathbf{x} _{0},\mathbf{\Xi}^{(k)})\right)\end{bmatrix}. \tag{14}\]
We deliberately use the notation \(\hat{\bar{\mathbf{V}}}(t,\mathbf{\Xi}^{(k)})\) and \(\hat{\bar{\mathbf{I}}}(t,\mathbf{x}_{0},\mathbf{\Xi}^{(k)})\) to make clear that \(\mathbf{\varepsilon}_{I}^{(k)}\) is a function of \(t\), \(\mathbf{x}_{0}\), and \(\mathbf{\Xi}^{(k)}\) only.
Figure 1 illustrates the real part of the current injection of the component, \(\bar{I}_{1}\), and the respective network current \((\bar{Y}_{N}\hat{\bar{\mathbf{V}}})_{1}\) at bus 1. The current mismatch \(\Re(\mathbf{\varepsilon}_{I,1}^{(k)})\) corresponds to the area between the two curves. By updating the parameters \(\mathbf{\Xi}\), we reduce the mismatch, as seen in each of the columns of Fig. 1. If \(\mathbf{\varepsilon}_{I}=\mathbf{0}\) across the entire time-step \([t_{0},t_{0}+\Delta t]\) and \(\hat{\mathbf{x}}\) accurately solves (3a), then we have found a solution to the system of DAEs. In practice, we evaluate (14) on \(s\) collocation points within the time interval, \(\mathbf{T}=[t_{1},\ldots,t_{s}]\), which we can choose freely. The two columns in Fig. 1 show two possible choices, \(s=2\) and \(s=5\). Formally, the objective in order to solve the system of DAEs becomes
\[\min_{\mathbf{\Xi}}\left\|\begin{bmatrix}\mathbf{\varepsilon}_{I}(t_{1}, \mathbf{x}_{0},\mathbf{\Xi})\\ \vdots\\ \mathbf{\varepsilon}_{I}(t_{s},\mathbf{x}_{0},\mathbf{\Xi})\end{bmatrix}\right\|_{2} \tag{15}\]
and we solve this optimization problem by applying an iterative procedure, summarized in Algorithm 1, to determine \(\mathbf{\Xi}\) based on the Newton-Raphson algorithm. The update value \(\Delta\mathbf{\Xi}^{(k)}\) in Step 9 of Algorithm 1 is determined by the least-squares problem
Fig. 1: Current with \(s\) collocation points at iterations 0 and 5.
\[\min_{\Delta\mathbf{\Xi}^{(k)}}\ \left\|A\Delta\mathbf{\Xi}^{(k)}-B\right\|_{2} \tag{16a}\] \[A=\begin{bmatrix}\nicefrac{{\partial\mathbf{\varepsilon}_{I}(t_{1},\mathbf{x}_{0},\mathbf{\Xi}^{(k)})}}{{\partial\mathbf{\Xi}^{(k)}}}\\ \vdots\\ \nicefrac{{\partial\mathbf{\varepsilon}_{I}(t_{s},\mathbf{x}_{0},\mathbf{\Xi}^{(k)})}}{{\partial\mathbf{\Xi}^{(k)}}}\end{bmatrix}\quad B=\begin{bmatrix}\mathbf{\varepsilon}_{I}(t_{1},\mathbf{x}_{0},\mathbf{\Xi}^{(k)})\\ \vdots\\ \mathbf{\varepsilon}_{I}(t_{s},\mathbf{x}_{0},\mathbf{\Xi}^{(k)})\end{bmatrix} \tag{16b}\]
which we solve repeatedly until we reach the defined tolerance \(\Delta\mathbf{\Xi}^{\max}\) or the maximum number of iterations \(k^{\max}\). The problem in (16) can be under-determined, over-determined, or have a unique solution, primarily depending on \(s\) and the number of voltage parameters \(|\mathbf{\Xi}|\). As we are free to choose the number of collocation points, we impose \(2\cdot n\cdot s\geq|\mathbf{\Xi}|\), which rules out under-determined settings unless the Jacobian matrices are not of full rank.3
Footnote 3: The Jacobian of the network current \(\bar{Y}_{N}\hat{\bar{\mathbf{V}}}\) is actually rank-deficient by \(1\) due to the symmetry around the voltage angles; however, as long as the Jacobian of the component’s current injection is non-zero, i.e., there is some dependency on the voltage parameters, the problem is well posed.
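For illustration, the update step of (16) can be computed with a standard least-squares routine; the following sketch (function and variable names are ours) assumes the residuals and Jacobians have already been evaluated at the \(s\) collocation points.

```python
import numpy as np

def voltage_parameter_update(jacobians, residuals):
    """Least-squares solve of Eq. (16) for the update direction Delta Xi.

    jacobians: list of d eps_I / d Xi matrices at the s collocation points.
    residuals: list of the corresponding eps_I(t_j, x0, Xi) vectors.
    The sign convention is assumed such that Xi + Delta Xi reduces the
    current mismatch, i.e., a standard Newton-Raphson step.
    """
    A = np.vstack(jacobians)       # stacked Jacobian blocks
    B = np.concatenate(residuals)  # stacked current mismatches
    dXi, *_ = np.linalg.lstsq(A, -B, rcond=None)
    return dXi
```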
```
0:\(\mathbf{x}_{0}\), \(T\)
0:\(\mathbf{\Xi}^{(0)}\), \(\Delta\mathbf{\Xi}^{\max}\), \(k^{\max}\)
1:while\(\Delta\mathbf{\Xi}^{(k)}>\Delta\mathbf{\Xi}^{\max}\) and \(k<k^{\max}\)do
2:for component \(i=1,\ldots,m\)do
3: Calculate current injections \(\bar{I}_{i}^{(k)}=h_{i}(\mathbf{T},\mathbf{x}_{0,i},\mathbf{\Xi}_{i}^{(k)})\)
4: Calculate Jacobian \(\nicefrac{{\partial\bar{I}_{i}^{(k)}}}{{\partial\mathbf{\Xi}_{i}^{(k)}}}\)
5:end
6: Calculate network injections \(\bar{Y}_{N}\hat{\mathbf{V}}(\mathbf{T},\mathbf{\Xi}^{(k)})\)
7: Calculate network Jacobian \(\nicefrac{{\partial Y_{N}\hat{\mathbf{V}}(\mathbf{T},\mathbf{\Xi}^{(k)})}}{{\partial\mathbf{ \Xi}^{(k)}}}\)
8: Calculate current balance error \(\varepsilon_{I}^{(k)}\)
9: Calculate voltage parameter update \(\Delta\mathbf{\Xi}^{(k)}\)
10: Update iteration \(\mathbf{\Xi}^{(k+1)}=\mathbf{\Xi}^{(k)}+\Delta\mathbf{\Xi}^{(k)}\), \(k=k+1\)
11:end
12:return\(\mathbf{\Xi}\)
```
**Algorithm 1** Voltage parametrization update
Usually, \(\nicefrac{{\partial\hat{\bar{\mathbf{I}}}}}{{\partial\mathbf{\Xi}^{(k)}}}\) is not straightforward to derive; however, the use of Automatic Differentiation (AD) allows an efficient computation of the derivatives [21]. AD constructs a computational graph for the calculation of \(\hat{\mathbf{x}}\) and \(\hat{\bar{\mathbf{I}}}\) and then applies the chain rule to obtain the derivatives \(\nicefrac{{\partial\hat{\mathbf{x}}}}{{\partial\mathbf{\Xi}^{(k)}}}\) and \(\nicefrac{{\partial\hat{\bar{\mathbf{I}}}}}{{\partial\mathbf{\Xi}^{(k)}}}\). As we restricted \(\hat{\mathbf{x}}(t,\mathbf{x}_{0},\mathbf{\Xi})\) to be an explicit integration scheme for (8), the associated computational graph is automatically constructed and of relatively small size, which results in fast calculations. Furthermore, the calculation does not directly depend on the dimension of \(\mathbf{x}\).4
Footnote 4: \(\dim(\mathbf{x})\) might affect the necessary size of the computational graph.
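As an example of how such derivatives can be obtained via AD, the following PyTorch sketch computes the Jacobian of a stand-in mismatch function with respect to the voltage parameters; the mismatch shown is a toy placeholder for (14), not the paper's actual implementation.

```python
import torch

def current_mismatch(Xi):
    # toy stand-in for eps_I(t, x0, Xi) of Eq. (14); in the simulator this
    # would evaluate Y_N V_hat(t, Xi) minus the components' injections
    return torch.stack([Xi[0] * Xi[1] - 1.0, Xi[0] + torch.sin(Xi[1])])

Xi = torch.tensor([1.0, 0.5], requires_grad=True)
J = torch.autograd.functional.jacobian(current_mismatch, Xi)  # d eps / d Xi
```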
### _Multi-step simulator for power system dynamics_
The sections above yield a state trajectory for a single time-step, and its accuracy depends on the approximation quality of \(\hat{\mathbf{x}}(t)\) and \(\hat{\bar{\mathbf{V}}}(t)\). Requirements on the resulting tolerance therefore limit the suitable time-step size \(\Delta t\). By repeatedly applying the above algorithm, we obtain a multi-step scheme that allows the simulation of dynamics beyond \(\Delta t\). Algorithm 2 summarizes the simplest form of such a multi-step simulator, in which we evaluate the trajectory at the points \(T_{\text{eval}}\). We show in Section V that Algorithm 2 yields accurate trajectories and that the use of NNs allows larger time-steps, hence faster simulations, while being more accurate than the RK-based approximators.
```
0:\(\mathbf{x}_{0}\), \(t_{0}\), \(t_{\max}\), \(\Delta t\)
0:\(\mathbf{x}^{\prime}_{0}=\mathbf{x}_{0}\), \(t^{\prime}=t_{0}\), \(T_{\text{eval}}\)
1:while\(t^{\prime}\leq t_{\max}\)do
2: Define collocation points \(\mathbf{T}(t^{\prime},\Delta t)\)
3: Calculate voltage parametrization \(\mathbf{\Xi}(\mathbf{T},\mathbf{x}^{\prime}_{0})\)
4: Store trajectory \([T_{\text{eval}},\hat{\mathbf{x}}(\mathbf{x}^{\prime}_{0},T_{\text{eval}},\mathbf{\Xi})]\)
5: Update \(\mathbf{x}^{\prime}_{0}\leftarrow\hat{\mathbf{x}}(\mathbf{x}^{\prime}_{0},\Delta t,\mathbf{ \Xi})\) and \(t^{\prime}\gets t^{\prime}+\Delta t\)
6:end
7:return Trajectory
```
**Algorithm 2** Multi-step simulator
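A minimal Python sketch of the multi-step loop of Algorithm 2 might look as follows; `fit_voltage_params` stands in for Algorithm 1 and `predict_state` for the state approximator \(\hat{\mathbf{x}}\), and both names are placeholders of ours.

```python
def simulate(x0, t0, t_max, dt, fit_voltage_params, predict_state,
             collocation=(0.3, 0.7)):
    """Minimal sketch of Algorithm 2 (multi-step simulator)."""
    t, x = t0, x0
    trajectory = [(t0, x0)]
    while t < t_max:
        T = [t + c * dt for c in collocation]  # collocation points in [t, t+dt]
        Xi = fit_voltage_params(T, x)          # Algorithm 1: voltage parametrization
        x = predict_state(x, dt, Xi)           # advance the states by one time-step
        t += dt
        trajectory.append((t, x))
    return trajectory
```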
## IV Case Study
This section describes the modeling of the power system dynamics and the implementation of the different elements of the simulator.
### _Power system modeling_
As an example of a dynamic component, we consider the two-axis generator model of [19]
\[\begin{bmatrix}T^{\prime}_{d0}\\ T^{\prime}_{q0}\\ 1\\ 2H\end{bmatrix}\ \frac{d}{dt}\begin{bmatrix}E^{\prime}_{q}\\ E^{\prime}_{d}\\ \delta\\ \Delta\omega\end{bmatrix}=\begin{bmatrix}-E^{\prime}_{q}-(X_{d}-X^{\prime}_{d})I_{d}+E_{fd}\\ -E^{\prime}_{d}+(X_{q}-X^{\prime}_{q})I_{q}\\ \omega_{s}\Delta\omega\\ P_{m}-E^{\prime}_{d}I_{d}-E^{\prime}_{q}I_{q}-(X^{\prime}_{q}-X^{\prime}_{d})I_{d}I_{q}-D\Delta\omega\end{bmatrix} \tag{17a}\] \[\begin{bmatrix}I_{d}\\ I_{q}\end{bmatrix}=\begin{bmatrix}R_{s}&-X^{\prime}_{q}\\ X^{\prime}_{d}&R_{s}\end{bmatrix}^{-1}\begin{bmatrix}E^{\prime}_{d}-V\sin{(\delta-\theta)}\\ E^{\prime}_{q}-V\cos{(\delta-\theta)}\end{bmatrix}\] (17b) \[\bar{I}=(I_{D}+jI_{Q})=(I_{d}+jI_{q})e^{j(\delta-\pi/2)}. \tag{17c}\]
Equation (17a) corresponds to (3a), and (17b) and (17c) correspond to (3b). For this study, we simplify the above model to a classical model by setting the reactances to \(X^{\prime}_{q}=X^{\prime}_{d}\) and \(X_{q}=X^{\prime}_{d}\) and then finding the integral manifold such that the internal voltages \(E^{\prime}_{q}\) and \(E^{\prime}_{d}\) remain constant at \(E^{\prime}_{q0}\) and \(E^{\prime}_{d0}=0\). For more details, we refer to [19]. Now, the rotor angle \(\delta\) and the frequency deviation \(\Delta\omega\) form the state \(\mathbf{x}_{i}\); the magnitude \(V\) and angle \(\theta\) of the terminal voltage form \(\mathbf{y}_{i}\); and the mechanical power \(P_{m}\) and the excitation voltage \(E_{fd}\) form the control input \(\mathbf{u}_{i}\). More detailed models could include higher-order electro-mechanical modes, governor dynamics for \(P_{m}\), and exciter dynamics for \(E_{fd}\). Similarly, inverter-based resources or voltage-dependent loads could be included. The results presented in Section V-A are based on the parameters of generator 1 in Table I.
To illustrate the full multi-step simulator, we consider the IEEE 9-bus system described in [19, pp. 164-167] with three generators (all modeled as the classical model above) with parameters from Table I. The initial conditions \(\mathbf{x}_{0}\) and control inputs \(\mathbf{u}\) are determined by assuming an equilibrium state for the load flow case used. To perturb the system, we reduce the mechanical power \(P_{m}\) of generator 2 to 80% of its initial
value and then observe the trajectory for \(2.5\,\mathrm{s}\) as shown in Fig. 3.
### _State predictors_
We test three state predictors: a NN with 3 layers and 32 neurons per layer, and two RK-based predictors, namely the Forward-Euler (FE) and the 4th-order Runge-Kutta (RK4) schemes. In all cases, the complex voltage is approximated by linear functions, i.e., \(r=1\) in (5), resulting in \(|\mathbf{\Xi}_{i}|=4\) parameters per bus.
### _Implementation and NN training_
A central functionality we use is AD; therefore, the entire simulator is implemented in PyTorch [22], which also provides the ML framework to train the NN models. For each generator we used 2500 simulated points, which we split into training (64%), validation (16%), and testing (20%) datasets. Each generator surrogate model is trained for a range of possible linear voltage parametrizations, initial conditions, and prediction time-steps, which results in a seven-dimensional input space. The training loss includes a physics regularization term which evaluates how well the prediction aligns with the governing equations (3a); networks trained this way are known as PINNs. We train, using an L-BFGS optimizer, for 2000 epochs in about 10 minutes. The published code base [23] provides a detailed overview of the dataset creation setup and the training procedure. The timing of the simulations is conducted on an AMD Ryzen 7 PRO (1.9 GHz, 8 cores) with 16GB RAM. The numerical simulations of the system of DAEs are implemented using Assimulo [24].
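To make the training objective concrete, the following is a schematic sketch of a PINN-style loss with a physics regularization term; the weighting factor and the scalar-state assumption are illustrative choices of ours and not taken from the published code base.

```python
import torch

def pinn_loss(model, t, z_static, x_true, f_rhs, lam=1.0):
    """Schematic physics-regularized loss: data misfit plus a residual that
    compares d x_hat / dt with the machine dynamics f of Eq. (3a).

    Assumes one scalar state per sample (shapes (N, 1)); lam and the exact
    residual form are illustrative, not the published setup.
    """
    t = t.detach().clone().requires_grad_(True)
    x_hat = model(torch.cat([t, z_static], dim=-1))      # NN prediction
    data_loss = torch.mean((x_hat - x_true) ** 2)        # fit simulated data
    dx_dt = torch.autograd.grad(x_hat, t,
                                grad_outputs=torch.ones_like(x_hat),
                                create_graph=True)[0]    # AD time derivative
    physics_loss = torch.mean((dx_dt - f_rhs(t, x_hat)) ** 2)
    return data_loss + lam * physics_loss
```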
## V Results
We begin by briefly comparing the performance of the three predictors for \(\hat{\mathbf{x}}_{i}\) (FE, RK4, NN) before testing Algorithm 2.
### _Predictor comparison_
The approximator \(\hat{\mathbf{x}}_{i}\) should have two properties: it should be fast, and it should be accurate for large time-steps \(\Delta t\). Figure 2 shows both properties. The left panel displays the error in the differential variable \(\Delta\omega\) across 200 points with random initial conditions \(\mathbf{x}_{0}\) and voltage parametrizations \(\mathbf{\Xi}_{i}\). In terms of accuracy, the NN performs well; only for smaller time-steps (\(\Delta t<0.05\,\mathrm{s}\)) does the RK4 approximation become more accurate. The RK schemes exhibit the expected dependency on the time-step: the error follows \(\mathcal{O}(\Delta t)\) and \(\mathcal{O}(\Delta t^{4})\) for FE and RK4, respectively. The error of the NN, in contrast, is nearly independent of \(\Delta t\). At the same time, the NN is the fastest approximator. While the FE approximator is similarly fast, its poor accuracy makes it undesirable; the RK4 scheme, while more accurate, has the drawback of a comparably long run-time.
### _Simulator results_
We now focus on the performance of the simulator in Algorithm 2. Figure 3 displays the prediction for \(\Delta t=0.05\,\mathrm{s}\) using NN-based approximators \(\hat{\mathbf{x}}_{i}^{NN}\).
The results correspond to the first row in Table II, where we report the maximum absolute errors of \(V\) and \(\Delta\omega\) along the trajectory and the run-time. Table II shows further results for the RK4-based simulator, for time-steps of \(\Delta t=0.1\,\mathrm{s}\) and \(\Delta t=0.15\,\mathrm{s}\), and for collocation points at \(\mathbf{T}=[0.3,0.7]\Delta t\) and \(\mathbf{T}=[0.1,0.3,0.5,0.7,0.9]\Delta t\), i.e., \(s=2\) and \(s=5\). The NN-based simulator is consistently faster and more accurate than the RK-based simulator, except for the case of \(\Delta t=0.05\,\mathrm{s}\). These results confirm the predictor characteristics observed in Section V-A. The simulation run-time scales approximately inversely with \(\Delta t\). Deviations can arise due to varying numbers of iterations in Algorithm 1; however, in the reported cases, we usually observe 5-7 iterations. Increasing
Fig. 3: Prediction of a state trajectory (\(\Delta\omega_{i}\)) and an algebraic variable \(V_{i}\) for time-step size \(\Delta t=0.05\,\mathrm{s}\) with a NN-based simulator. The predictions (dashed lines) coincide with the ground truth (gray lines).
Fig. 2: Predictor characteristics: (left) prediction error of \(\Delta\omega\) for 200 predictions with random \(\mathbf{x}_{0}\) and \(\mathbf{\Xi}\), (right) run-time per point. These results show that NN constitute an accurate and fast predictor.
the number of collocation points \(s\) results in a small increase in run-time in all cases. In terms of accuracy, we observe that more collocation points can lead to better accuracy when the overall solution quality is good. However, for too large time-steps, here \(\Delta t=0.15\,\mathrm{s}\), the effect might reverse. The choice of the location of the collocation points, i.e., \(\mathbf{T}\), also matters.
## VI Discussion
The above results illustrate the conceptual benefits of a simulator leveraging NN-based predictors for the dynamic components of a power system. The upside of the approach should become clearer when integrating higher-order models that would usually require very small time-steps for accurate resolution. A second direction of improvement lies in the implementation. All dynamic elements should be pre-compiled and distributed on the available cores to speed up the calculation of the residual and of its Jacobian, as this is where most of the computational time is spent. Furthermore, borrowing from classical numerical methods, dishonest Newton-Raphson schemes (which reuse the Jacobian across iterations) and the exploitation of the structure in the Jacobians should be considered; the latter arises from the sparsity of the network equations and from the dependence of a component's current injection only on its terminal voltage. Another important aspect is to investigate the relation between the number of collocation points, the resulting current balance errors, and the overall errors. Analytical analysis of the scheme will be necessary to build confidence in its accuracy but also to identify limitations of the approach. Ultimately, larger test cases need to be studied.
## VII Conclusion
This work presented a simulation approach that centers on iteratively finding an approximation of the evolution of the algebraic variables in the power system DAEs. The approximation of the dynamic state evolutions by NNs, instead of classical explicit numerical integration schemes, allows larger time-steps to be realized while being fast to execute. This work aimed at providing a proof of concept; it is foreseeable that future work on this method will share many typical questions with established DAE solvers, hence, by applying various existing techniques, the computational performance and scalability of the approach should improve significantly.
|
2310.07612 | PHYDI: Initializing Parameterized Hypercomplex Neural Networks as
Identity Functions | Neural models based on hypercomplex algebra systems are growing and
prolificating for a plethora of applications, ranging from computer vision to
natural language processing. Hand in hand with their adoption, parameterized
hypercomplex neural networks (PHNNs) are growing in size and no techniques have
been adopted so far to control their convergence at a large scale. In this
paper, we study PHNNs convergence and propose parameterized hypercomplex
identity initialization (PHYDI), a method to improve their convergence at
different scales, leading to more robust performance when the number of layers
scales up, while also reaching the same performance with fewer iterations. We
show the effectiveness of this approach in different benchmarks and with common
PHNNs with ResNets- and Transformer-based architecture. The code is available
at https://github.com/ispamm/PHYDI. | Matteo Mancanelli, Eleonora Grassucci, Aurelio Uncini, Danilo Comminiello | 2023-10-11T15:56:55Z | http://arxiv.org/abs/2310.07612v1 | # PHYDI: Initializing Parameterized Hypercomplex Neural Networks as Identity Functions
###### Abstract
Neural models based on hypercomplex algebra systems are growing and proliferating for a plethora of applications, ranging from computer vision to natural language processing. Hand in hand with their adoption, parameterized hypercomplex neural networks (PHNNs) are growing in size and no techniques have been adopted so far to control their convergence at a large scale. In this paper, we study PHNNs convergence and propose parameterized hypercomplex identity initialization (PHYDI), a method to improve their convergence at different scales, leading to more robust performance when the number of layers scales up, while also reaching the same performance with fewer iterations. We show the effectiveness of this approach in different benchmarks and with common PHNNs with ResNets- and Transformer-based architecture. The code is available at [https://github.com/ispamm/PHYDI](https://github.com/ispamm/PHYDI).
Matteo Mancanelli, Eleonora Grassucci, Aurelio Uncini, and Danilo Comminiello. Dept. of Information Engineering, Electronics, and Telecomm., Sapienza University of Rome, Italy. _Keywords:_ Hypercomplex neural networks, identity initialization, residual connections, hypercomplex algebra, neural networks convergence.
Footnote †: Corresponding author’s email: [email protected]. We acknowledge financial support from PNRR MUR project PE0000013-FAIR.
## 1 Introduction
Although the largest part of deep learning models is defined following the rules of real-valued numbers and algebra \(\mathbb{R}\), such a choice is not always the best one for multidimensional data. Therefore, an increasing number of works are exploring the possibility of defining models with different underlying algebras that better fit the problem of study. Among these, Clifford, Cayley-Dickson, and hypercomplex algebras have been widely adopted. More recently, parameterized hypercomplex neural networks (PHNNs) have transformed the wide research field of neural models defined over such hypercomplex algebras, thanks to their flexibility and malleability, which allow users to make use of hypercomplex-based networks without developing ad-hoc architectures or seeking the proper algebra domain that best fits the specific task. Indeed, PHNNs grasp algebra rules directly from data and therefore can be employed for any \(n\)-dimensional data without modifications to architectural layers. For this reason, the already ample field of hypercomplex models based on complex [1], quaternion [2], dual quaternion [3, 4], and octonion [1] numbers has been permeated by PHNNs. These networks have been defined with different known backbones such as ResNets [5, 6], GANs [7, 8], graph neural networks [9], and Transformers [10], among others [11, 12]. As with any neural model, their expressiveness increases with deeper representations; moreover, since the hyperparameter \(n\) reduces the number of parameters to \(1/n\), PHNNs with high values of \(n\) may be defined with very few parameters even if the number of layers is large.
However, no convergence study or regularization and normalization strategies have been proposed for improving PHNNs' training stability and convergence when the number of layers increases. Indeed, the behavior of PHNNs with a large number of layers is still unknown, and it is not clear how the parameter reduction driven by the hyperparameter \(n\) affects the learning of very deep networks, nor whether intra-layer parameters and overall parameters are balanced
Figure 1: The proposed PHYDI initialization speeds up the convergence of parameterized hypercomplex neural networks in which it is employed.
during training.
In this paper, therefore, we first conduct a study on PHNNs' convergence in large-scale training, discovering that very deep architectures have convergence issues and finding that the hyperparameter \(n\) is related to the convergence. In order to address these issues, we propose parameterized hypercomplex identity initialization (PHYDI), a method to help PHNNs converge fast and to improve large-scale model learning, motivated by dynamical isometry, which has been proven to be a clear indicator of trainability [13]. We propose to initialize each parameterized hypercomplex multiplication (PHM) or convolution (PHC) layer, which represent the core of PHNNs, as an identity function. The proposed initialization is carried out by adding a residual connection and a trainable zero-initialized parameter that multiplies the actual layer parameters, as introduced for real-valued models [14].
We demonstrate that PHYDI improves the convergence of very deep ResNet- and Transformer-based PHNNs in different benchmarks, even when standard PHNNs diverge, thereby improving the learning ability of large architectures. Furthermore, the proposed method leads to faster convergence of each PHNN we test, on both image and language datasets.
In summary, our contributions are: i) we conduct, to the best of our knowledge, the first study on the convergence of PHNNs in large-scale training, showing that it is also related to the key hyperparameter \(n\); ii) we propose PHYDI, a method to avoid divergence of very deep PHNNs, which also speeds up the convergence of convolutional- and attention-based PHNNs in multiple benchmarks, allowing the learning of large-scale networks with fewer iterations.
The rest of the paper is organized as follows. The background on PHNNs is developed in Section 2, the proposed method is presented in Section 3, while the experimental evaluation is performed in Section 4. Finally, conclusions are drawn in Section 5.
## 2 Parameterized Hypercomplex Neural Networks
Although the largest part of neural models is defined over the set of real numbers \(\mathbb{R}\), this configuration does not always perfectly fit every task. In the case of multidimensional data, such as objects in 3D space, multichannel audio signals, or color images, real-valued models break the multidimensional nature of the input and their learning may fail. For this reason, several recent works consider other numerical domains with stronger algebraic and geometric properties that better model the multidimensional structure of the data. Among these domains, the complex (one imaginary unit) and the quaternion (three imaginary units) ones have been the most studied due to their 2D and 4D nature and their algebraic properties, which can be leveraged during the learning phase. Indeed, the so-called Hamilton products of quaternions allow a parameter reduction of up to \(75\%\) and, most importantly, parameter sharing within the layer. This feature enables a model equipped with such algebra to preserve local relations and correlations among the input dimensions, building a more accurate view of the multidimensional data. However, such models are tied to the dimensionality of their domain and are therefore hard to apply when the data does not fit the domain-specific dimensions, such as 2 for the complex domain or 4 for the quaternion one. For this reason, parameterized hypercomplex neural networks (PHNNs) have been introduced in the literature [5]. The family of PHNNs comprises neural models defined over various domains by means of hypercomplex layers parameterized by the user. The core operation of these networks is the parameterized hypercomplex (PH) layer, which builds the weight matrix as a sum of Kronecker products as follows:
\[\mathbf{H}=\sum_{i=1}^{n}\mathbf{A}_{i}\otimes\mathbf{F}_{i}, \tag{1}\]
where the hyperparameter \(n\) defines the domain in which the model operates (e.g., \(n=2\) for the complex domain, \(n=4\) for the quaternion domain, and so on) and, most importantly, can be set to any integer value even when the algebra rules are not known a priori. That is possible thanks to the matrices \(\mathbf{A}_{i}\), which learn the algebra rules directly from data, and the matrices \(\mathbf{F}_{i}\), which represent the parameters (or filters for convolutional layers). Thanks to their flexibility, PH layers can be involved in several common neural architectures, such as ResNets [5], Transformers [10], and graph neural networks [9], simply as a replacement for standard real-valued layers. Although PHNNs show outstanding results in several tasks, such models are relatively new in the literature and their convergence behavior has not been studied yet.
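For illustration, Eq. (1) can be realized in a few lines of PyTorch; the following sketch (tensor shapes and the initialization scale are our own choices) builds \(\mathbf{H}\) as a sum of \(n\) Kronecker products.

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Minimal PHM layer: builds the weight matrix H as a sum of n
    Kronecker products A_i x F_i, cf. Eq. (1)."""
    def __init__(self, n, in_features, out_features):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.A = nn.Parameter(torch.randn(n, n, n))              # learned algebra rules
        self.F = nn.Parameter(0.05 * torch.randn(n, out_features // n,
                                                 in_features // n))  # filters

    def forward(self, x):
        H = sum(torch.kron(self.A[i], self.F[i])
                for i in range(self.A.shape[0]))                 # (out, in)
        return x @ H.T
```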
## 3 PHYDI: Parameterized Hypercomplex Identity Initialization
The proposed parameterized hypercomplex identity initialization (PHYDI) for PHNNs is easy to integrate into any pre-existing PHNN, as it consists of a slight modification to the
Figure 2: A simple PHM formulation for the proposed method with \(n=2\). The PHM layer builds the weights matrix \(\mathbf{H}\) as a sum of \(2\) Kronecker products that are multiplied by the PHYDI learnable parameter \(\alpha\), initialized to \(0\) so as to ensure the identity function brought by the inserted residual connection. Note that the matrices \(\mathbf{A}_{1}\) and \(\mathbf{A}_{2}\) are learnable during training, exactly as the classical weights matrix \(\mathbf{F}_{1}\) and \(\mathbf{F}_{2}\).
hypercomplex architecture to perform the identity operation. Through the identity, we can ensure dynamical isometry at initialization, which has been proven to be a clear indicator of well-trainable networks [13]. In practice, to simplify gradient propagation at initialization, the signal should not propagate through the PH layer \(\mathcal{F}\{\mathbf{H}\}\), but rather through its residual connection \(\mathbf{x}\). To do that, a parameter \(\alpha\) is set to multiply the PH layer and initialized to \(0\), so that only the residual connection remains active during the first iteration. More formally, the \((j+1)\)-th PH layer with the PHYDI formulation becomes:
\[\mathbf{x}_{j+1}=\mathbf{x}+\alpha_{j}\mathcal{F}\{\mathbf{H}\}(\mathbf{x}), \tag{2}\]
where \(\alpha_{j}\) is a learnable parameter initialized to \(0\) at the beginning of the training. A visual representation is shown in Fig. 2, where the Kronecker product blocks that build the PH layer are summed and multiplied by the PHYDI learnable parameter \(\alpha\). In the next subsections, we formally define two of the most common vision and language PH architectures with the proposed PHYDI method, namely PHResNets and PHTransformers.
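A minimal PyTorch sketch of the PHYDI block of (2), assuming a generic PH layer module, could read as follows; the class name is ours.

```python
import torch
import torch.nn as nn

class PHYDIBlock(nn.Module):
    """Wraps a PH layer F{H} as x + alpha * F(x), with alpha initialized
    to 0 so that the block is the identity at the start of training (Eq. 2)."""
    def __init__(self, ph_layer):
        super().__init__()
        self.ph_layer = ph_layer
        self.alpha = nn.Parameter(torch.zeros(1))  # zero-init -> identity

    def forward(self, x):
        return x + self.alpha * self.ph_layer(x)
```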
### Initialization of Hypercomplex ResNets
PHResNets have already been defined with residual connections, so no architectural changes are needed except for the insertion of the PHYDI parameter \(\alpha\) that initializes the network to perform the identity operation. Therefore, the PHResNet layer will be defined exactly as (2), where instead of generic layers, the function \(\mathcal{F}(\mathbf{x},\{\mathbf{H}_{j}\})\) comprises parameterized hypercomplex convolutional (PHC) layers:
\[\mathcal{F}(\mathbf{x},\{\mathbf{H}_{j}\})=\text{PHC}\left(\text{ReLU}\left( \text{PHC}(\mathbf{x})\right)\right). \tag{3}\]
Although this formulation is already based on residual connections, we will demonstrate that the identity initialization due to the \(\alpha\) parameter gives PHResNets a faster convergence, especially in very deep networks.
### Initialization of Hypercomplex Transformers
PHTransformers also have a residual connection structure inside their layers. However, their composition is more complex than that of PHResNets, and a more detailed analysis is needed for the identity initialization. Indeed, a standard PHTransformer layer can be described by
\[\mathbf{x}_{j+1}=\text{LayerNorm}\big(\mathbf{x}_{j}+\text{PHM}\big(\text{LayerNorm}(\mathbf{x}_{j}+\text{PHAtt}(\mathbf{x}_{j}))\big)\big), \tag{4}\]
that is, a post-normalization of the sub-layer modules, namely parameterized hypercomplex multi-head attention (PHAtt) and parameterized hypercomplex multiplication (PHM). Following the suggestion in the literature [14], in order to initialize the layer as the identity function, we remove the layer normalization and insert the PHYDI parameters as multipliers for the sub-layers:
\[\mathbf{x}_{j+1}=\mathbf{x}_{j}+\alpha_{j}\text{PHM}(\mathbf{x}_{j}+\alpha_{j}\text{PHAtt}(\mathbf{x}_{j})). \tag{5}\]
Note that the learnable parameter \(\alpha_{j}\) is shared within the PHTransformer layer and is initialized to \(\alpha_{j}=0\) to ensure the identity operation. Figure 3 shows the three different PHTransformer configurations we test in our experiments, namely PostNorm [15], PreNorm [16], and PHYDI (proposed).
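Analogously, (5) can be sketched as a normalization-free layer with a single shared \(\alpha\); `ph_attention` and `ph_ffn` below are placeholders of ours for the PHAtt and PHM sub-layers.

```python
import torch
import torch.nn as nn

class PHYDITransformerLayer(nn.Module):
    """Sketch of Eq. (5): normalization-free layer with a single shared,
    zero-initialized alpha per layer."""
    def __init__(self, ph_attention, ph_ffn):
        super().__init__()
        self.attn = ph_attention      # stand-in for PHAtt
        self.ffn = ph_ffn             # stand-in for the PHM feed-forward
        self.alpha = nn.Parameter(torch.zeros(1))  # shared within the layer

    def forward(self, x):
        return x + self.alpha * self.ffn(x + self.alpha * self.attn(x))
```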
## 4 Experiments
In this section, we present the experimental validation of our theoretical claims. We conduct experiments on two common tasks: image classification and sequence-to-sequence prediction. We employ different ResNet backbones (ResNet18, ResNet50, and ResNet152) for the first task, and a Transformer-based model with a varying number of layers for the second task.
### Comparison methods
For convolutional networks, no previous initialization or fast-convergence methods have been proposed for PHNNs, so we compare our proposal with standard PHResNets with classical initialization [5]. Additionally, we experiment with
Figure 3: The proposed PHYDI formulation for PHTransformer layers compared with common PostNorm [15] and PreNorm [16] architectures.
Fixup initialization [17], but we note that it required an unreasonable number of epochs to reach \(80\%\) accuracy with PHResNet18 and degraded the final accuracy for deeper PHResNets. Moreover, identity initialization methods have already been proven to be better than Fixup for real-valued models [14]. For these reasons, we do not report Fixup scores in Table 1. We further validate PHYDI against a weighted, non-identity initialization of the Kronecker products. We insert a learnable weight \(\alpha\), initialized to \(1\), in each PH layer product in order to let the network learn the proper weighting mechanism for each Kronecker multiplication and, therefore, for each input dimension. However, the weighted Kronecker product (WKP) does not ensure identity initialization.
Regarding transformers, we compare PHTransformers with different layer normalization positions, such as the original real-valued Transformer PostNorm [15], the newer PreNorm [16], and the proposed PHYDI initialization without normalizations.
### Results on faster convergence
Table 1 shows the image classification results for different PHResNets with and without the proposed PHYDI method. We conduct experiments with three ResNet backbones with an increasing number of layers (18, 50, 152) and with different configurations of the hyperparameter \(n\). We train all models following the recipe of the original real-valued ResNet paper [18], which the PHResNet paper has shown to work well in hypercomplex domains as well [5]; we only change the batch size, using a value of \(64\). We experiment on the image classification task with the CIFAR10 dataset in order to test the method on a standard benchmark. We evaluate the performance of our initialization method according to two metrics: the number of epochs the models require to reach \(80\%\) accuracy (M1), and the number of epochs to beat a model with the proposed approach (M2). PHYDI ensures faster convergence in every test we conduct, lowering the number of epochs needed to reach \(80\%\) accuracy. Furthermore, the advantages of our approach hold across the most common choices of the hyperparameter \(n\), proving that PHYDI PHResNets are robust to architectural changes. Moreover, the effect of the proposed method becomes more evident as the number of layers increases. Indeed, while PHResNet152 suffers from slow convergence, endowing such networks with the PHYDI initialization drastically speeds up convergence, lowering the number of epochs by more than \(20\) in some cases. This foreshadows the following results on PHYDI's ability to enable the learning of large-scale models.
### Results on large-scale models
The signal and gradient propagation in real- as well as hypercomplex-valued neural networks becomes much more difficult as the model depth increases. Proper forward and backward flows are therefore crucial for the effective learning of a neural model. In this subsection, we focus our experiments on sequence-to-sequence PHTransformers [10] on the WikiText2 dataset. As in the previous tests, we experiment with PHNNs for different values of the hyperparameter \(n=2,3,4\). We train all models for \(50\) epochs with a batch size of \(64\) and optimize them with Adagrad and a step learning-rate scheduler that progressively reduces the learning rate from its original value of \(0.01\). We experiment with different encoder depths, with the number of layers in the set \([12,16,24,32,48,64,96]\). Similar to previous experiments, we set up a metric to assess fast convergence: the number of epochs needed to reach a perplexity value of \(200\).
Figure 4 shows the results of our experiments for different settings of \(n\). The first conclusion we can draw is that the original Transformer architecture defined in hypercomplex do
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & \(n\) & M1\(\downarrow\) & M2 \\ \hline PHResNet18 & 2 & 6.00 \(\pm\) 0.58 & 2 \\ + WKP & 2 & **5.75**\(\pm\) 0.25 & 2 \\ + PHYDI & 2 & 6.00 \(\pm\) 0.00 & - \\ PHResNet50 & 2 & 10.67 \(\pm\) 1.20 & 3 \\ + WKP & 2 & 10.67 \(\pm\) 0.67 & 2 \\ + PHYDI & 2 & **7.00**\(\pm\) 0.58 & - \\ PHResNet152 & 2 & 32.67 \(\pm\) 2.03 & 4 \\ + WKP & 2 & 29.80 \(\pm\) 3.68 & 4 \\ + PHYDI & 2 & **6.33**\(\pm\) 1.33 & - \\ \hline PHResNet18 & 3 & 6.33 \(\pm\) 0.33 & 2 \\ + WKP & 3 & 6.00 \(\pm\) 0.00 & 1 \\ + PHYDI & 3 & **5.00**\(\pm\) 0.58 & - \\ PHResNet50 & 3 & 8.67 \(\pm\) 1.20 & 2 \\ + WKP & 3 & 9.0 \(\pm\) 0.58 & 2 \\ + PHYDI & 3 & **6.33**\(\pm\) 0.67 & - \\ PHResNet152 & 3 & 26.67 \(\pm\) 1.76 & 4 \\ + WKP & 3 & 20.00 \(\pm\) 2.12 & 4 \\ + PHYDI & 3 & **4.67**\(\pm\) 0.33 & - \\ \hline PHResNet18 & 4 & **3.75**\(\pm\) 0.48 & 1 \\ + WKP & 4 & 5.67 \(\pm\) 0.67 & 2 \\ + PHYDI & 4 & 4.50 \(\pm\) 0.50 & - \\ PHResNet50 & 4 & 8.33 \(\pm\) 0.88 & 2 \\ + WKP & 4 & 9.33 \(\pm\) 0.88 & 2 \\ + PHYDI & 4 & **7.00**\(\pm\) 1.15 & - \\ PHResNet152 & 4 & 22.67 \(\pm\) 3.71 & 3 \\ + WKP & 4 & 20.60 \(\pm\) 1.60 & 3 \\ + PHYDI & 4 & **5.33**\(\pm\) 0.33 & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: PHResNets with standard, WKP, and PHYDI initialization for different values of the hyperparameter \(n\) in the CIFAR10 dataset. Metrics M1: Epochs to \(80\%\) Acc, M2: # Epochs to beat one w/ PHYDI. The uncertainties correspond to standard error.
mains with PostNorm diverges as the number of encoder layers increases. This result extends real-valued results already proven in [14]. However, this behavior seems linked to the number of parameters of the model and, consequently, to the choice of the hyperparameter \(n\) in PHNNs. Indeed, it is interesting to note that while for \(n=2\) the PostNorm configuration diverges with just \(48\) layers, acceptable results are obtained up to \(64\) layers when setting \(n=4\). Therefore, the gradient is better propagated with fewer parameters, and model convergence does not depend only on the depth of the model itself.
The PreNorm method performs better than the original PostNorm, providing more stable performance even when the depth increases. Additionally, this method performs best for small encoders with \(12\) or \(16\) layers, also exceeding the proposed approach. However, PHYDI clearly improves PHTransformer convergence and robustness when the architecture becomes very deep, especially from \(32\) layers up to the maximum of our experiments. The PHTransformer equipped with the PHYDI initialization still requires just \(2\) epochs to reach a perplexity value of \(200\) even with \(96\) encoder layers, proving that the proposed approach improves the learning of large-scale PHNNs. Overall, it is important to note that PHYDI improves large-model stability and also provides robust results for unfavorable configurations, e.g., small networks.
### Robustness to hyperparameters settings
We perform additional experiments with different hyperparameter settings to further validate PHYDI and its robustness to various configurations. In detail, we conduct tests varying the learning rate, considering both \(0.01\) and \(0.1\), and changing the batch size (from \(64\) to \(128\)), since this usually affects convergence speed. Table 2 shows the results of the experiments with PHResNet18 and PHResNet50 with the same evaluation metrics as Table 1. It is evident that PHYDI improves results under different settings too, being robust to various hyperparameter configurations.
## 5 Conclusion
In this paper, we study the convergence of common PHNNs and propose parameterized hypercomplex identity initialization (PHYDI), a method that, with very few architectural changes, improves convergence speed and learning depth. We experiment with PH convolutional and Transformer models on known benchmarks and show that PHNNs endowed with PHYDI gain convergence speed and better gradient propagation when the number of layers increases.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & \(n\) & btc-sz & lr & M1\(\downarrow\) & M2 \\ \hline PHResNet18 & 2 & 128 & 0.1 & 21.86 \(\pm\) 3.14 & 3 \\ + WKP & 2 & 128 & 0.1 & 17.17 \(\pm\) 3.35 & 3 \\ + PHYDI & 2 & 128 & 0.1 & **10.00**\(\pm\) 1.22 & - \\ \hline PHResNet50 & 2 & 128 & 0.01 & 24.33 \(\pm\) 2.19 & 3 \\ + WKP & 2 & 128 & 0.01 & 12.67 \(\pm\) 1.67 & 2 \\ + PHYDI & 2 & 128 & 0.01 & **8.67**\(\pm\) 0.88 & - \\ \hline PHResNet18 & 3 & 128 & 0.1 & 20.50 \(\pm\) 5.08 & 3 \\ + WKP & 3 & 128 & 0.1 & 12.83 \(\pm\) 1.68 & 3 \\ + PHYDI & 3 & 128 & 0.1 & **8.00**\(\pm\) 0.58 & - \\ \hline PHResNet50 & 3 & 128 & 0.01 & 11.25 \(\pm\) 2.56 & 2 \\ + WKP & 3 & 128 & 0.01 & 12.80 \(\pm\) 3.89 & 2 \\ + PHYDI & 3 & 128 & 0.01 & **10.75**\(\pm\) 2.25 & - \\ \hline PHResNet18 & 4 & 128 & 0.1 & 21.40 \(\pm\) 3.22 & 3 \\ + WKP & 4 & 128 & 0.1 & 20.67 \(\pm\) 2.78 & 2 \\ + PHYDI & 4 & 128 & 0.1 & **8.80**\(\pm\) 1.24 & - \\ \hline PHResNet50 & 4 & 128 & 0.01 & 22.33 \(\pm\) 8.67 & 2 \\ + WKP & 4 & 128 & 0.01 & 11.00 \(\pm\) 0.58 & 2 \\ + PHYDI & 4 & 128 & 0.01 & **10.00**\(\pm\) 1.00 & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: PHResNets results with different initialization methods and learning rate configurations. The batch size is different from previous experiments. Results show that PHYDI is robust across various settings, improving convergence speed.
Figure 4: Number of epochs to reach a perplexity value of \(200\) in the WikiText2 dataset for PHTransformers with increasing encoder depth, from \(12\) to \(96\) layers. The three plots refer to different values of the hyperparameter \(n=2,3,4\). Vertical standard-error bars, computed over multiple runs, are shown too. The PHTransformer equipped with PHYDI preserves its performance even in large-scale models, while the standard PostNorm models diverge.
2301.05744 | Adaptive Neural Networks Using Residual Fitting | Current methods for estimating the required neural-network size for a given
problem class have focused on methods that can be computationally intensive,
such as neural-architecture search and pruning. In contrast, methods that add
capacity to neural networks as needed may provide similar results to
architecture search and pruning, but do not require as much computation to find
an appropriate network size. Here, we present a network-growth method that
searches for explainable error in the network's residuals and grows the network
if sufficient error is detected. We demonstrate this method using examples from
classification, imitation learning, and reinforcement learning. Within these
tasks, the growing network can often achieve better performance than small
networks that do not grow, and similar performance to networks that begin much
larger. | Noah Ford, John Winder, Josh McClellan | 2023-01-13T19:52:30Z | http://arxiv.org/abs/2301.05744v1 | # Adaptive Neural Networks using Residual Fitting
## 1 Abstract
Current methods for estimating the required neural-network size for a given problem class have focused on methods that can be computationally intensive, such as neural-architecture search and pruning. In contrast, methods that add capacity to neural networks as needed may provide similar results to architecture search and pruning, but do not require as much computation to find an appropriate network size. Here, we present a network-growth method that searches for explainable error in the network's residuals and grows the network if sufficient error is detected. We demonstrate this method using examples from classification, imitation learning, and reinforcement learning. Within these tasks, the growing network can often achieve better performance than small networks that do not grow, and similar performance to networks that begin much larger.
## 2 Introduction
Prior to training a neural network on a set of data, it is difficult to estimate the size or shape of the network required to fit the complexity of the data. The network's ability to fit a data set is also dependent on the network's initialization, which further complicates the search for the "right" network size. For most problems, a smaller network than typically used would suffice if we only knew how to initialize the weights [1]. Since we have little a priori knowledge of the structure of the best-performing networks, we can either train a network that is much larger than needed, or we can search for an appropriate smaller network using techniques such as neural-architecture search [2], network pruning [3], and network growing. In this paper, we present and test a new technique for network growing that trains a small neural network on the residuals of a larger network. This method can be used in addition to, and as a replacement for, neural-architecture search, network pruning, and other growth methods.
General architecture searches can find network sizes and shapes that are appropriate for a set of problems with a given complexity [4]. These methods train many neural networks, either simultaneously or in sequence, to discover the right architecture for a given problem. For example, [5] uses neural architecture search to find a more efficient and performant transformer-like network for translation tasks. While neural-architecture search can produce more efficient networks, the search itself is computationally intensive, as it necessitates the training of many networks.
Network pruning is another method to find suitable network structures, but often requires less computational cost than architecture searches. Pruning is particularly popular for finding small, performant networks for fast inference. Several criteria have been proposed for choosing which nodes and at what time to prune ([6], [3]). These methods can help find performant, smaller networks, but they often require training much larger networks to find the most performant subnetwork ([1]). Furthermore, pruning methods may produce small, neural networks that perform well on a particular problem, but do not adapt well to changes in data.
There has been less research into network-growth methods than into architecture search or pruning, but network growth has benefits such as allowing for smaller initial networks and fitting distribution shifts in the data with additional growth during training. To grow a neural network, we must decide: 1. when to grow and 2. how to grow. To help determine when to grow a neural network, the method presented in [7] grows a neural network when the loss function is experiencing a plateau, to help escape gradient-descent slow-downs. To determine how to grow, the method in [8] presents Gradient Maximizing Growth, which maximizes the gradients of new weights to accelerate training. Focusing on non-stochastic problems, the methods presented in [9] and [10] solve for when and how to grow neural networks based on the network error, but they are not easily extensible to problems in which the loss cannot be driven to zero. Most similar to the work presented here, the AdaNet [11] approach iteratively grows a neural network before training. It uses the Rademacher complexity to approximate the generalization error of a neural network and trains the network predicted to give the least generalization error.
The method presented here has growth mechanisms similar to AdaNet's, but it increases the network size during training by focusing on the network's Mean Squared Error (MSE). Instead of approximating the generalization error as in AdaNet, we train a residual network to approximate the loss of the base neural network. If the residual network can predict structure in the loss of the base network, the algorithm infers that some amount of network complexity is missing from the base neural network; we then combine the two networks into one and continue training. The algorithm then initializes a new residual network and fits the error of the base network again.
## 3 Method
When training a fully connected, multi-layer perceptron with \(n\) hidden layers, we initialize a narrower neural network that also has \(n\) hidden layers. We call this network the residual neural network. After each training epoch, we train the residual network to fit the current residuals of the larger network (Fig 1).
If adding the predicted residuals to the larger network's predictions improves the MSE beyond a certain user-defined threshold, we increase the latent size of the neural network by fusing the residual network and the base neural network together. The two joined networks form a new, fully connected neural network with hidden-layer widths equal to the number of nodes in the base network's hidden layer plus the number of nodes in the residual network's hidden layer.
The weights of the first layer and the output layer are completely defined by the base neural network and the residual network. However, the internal hidden layers have new connections between the base network's nodes and the residual network's nodes, and these weights need to be initialized. Initializing these weights to zero leads to the smallest MSE after growth, but the backpropagation algorithm will then cause these weights to remain equal to each other. Here, we initialize the new weights randomly with one tenth the magnitude of the existing weights in the residual network.
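A sketch of this fusion for one hidden-layer weight matrix is given below; the diagonal blocks keep the trained weights, while the off-diagonal blocks are the new, randomly initialized cross-connections. The function name and the scaling heuristic follow the description above but are otherwise our own.

```python
import numpy as np

def fuse_hidden_weights(W_base, W_res, scale=0.1):
    """Fuse the hidden-layer weights of the base and residual networks.

    The diagonal blocks keep the trained weights; the off-diagonal blocks
    are the new base<->residual connections, initialized randomly at
    `scale` times the typical magnitude of the residual network's weights.
    """
    n_b_out, n_b_in = W_base.shape
    n_r_out, n_r_in = W_res.shape
    mag = scale * np.abs(W_res).mean()
    top_right = mag * np.random.randn(n_b_out, n_r_in)    # base <- residual
    bottom_left = mag * np.random.randn(n_r_out, n_b_in)  # residual <- base
    return np.block([[W_base, top_right],
                     [bottom_left, W_res]])
```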
After fusing the networks together, we continue to train the larger network. Additionally, we reset the weights of the residual neural network and continue to fit the residual network to the larger network's residuals. This algorithm is detailed in Algorithm 1.
Since this technique focuses on fitting predictable error in the residuals of the network, it is prone to overfitting. One technique to reduce overfitting, and the subsequent growth of the neural networks, is to use dropout layers during training in both the base neural network and the residual network. In some experiments, however, networks with dropout perform worse than networks without. So that we do not need to rely on dropout to stop excessive network growth, we include a growth criterion requiring that the current mean squared error has improved by more than the growth threshold since the last time the network grew. For example, if the network last grew at
Figure 1: Base Network and Residual Network Diagram
an MSE of 10, and the growth threshold is 10%, the base network must achieve an MSE of 9 before it can grow again.
```
Require: Training data \(\{X_{i}\}\), \(\{Y_{i}\}\), adaptation threshold \(\gamma\)
Require: \(f(x)\): base neural network, \(g(x)\): residual network
\(\alpha_{prev}\leftarrow 1\times 10^{6}\)  \(\triangleright\) set \(\alpha_{prev}\) to some large number
for \(i\) in number of epochs do
    Fit \(f(x)\) to training data \(\{X_{i}\}\), \(\{Y_{i}\}\)
    Collect residuals \(\{r_{i}\}\)
    Fit \(g(x)\) to \(\{X_{i}\}\), \(\{r_{i}\}\)
    \(\alpha\leftarrow\text{MSE}(f(x))\)
    \(\beta\leftarrow\text{MSE}(f(x)+g(x))\)
    if \(\beta/\alpha<1-\gamma\) and \(\alpha/\alpha_{prev}<1-\gamma\) then
        Grow network by combining weights of \(f(x)\) and \(g(x)\)
        Reset weights of \(g(x)\)
        \(\alpha_{prev}\leftarrow\alpha\)
    end if
end for
```
**Algorithm 1** SANN Algorithm
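For illustration, one epoch of this loop could be sketched in PyTorch as follows; `grow` is a placeholder for the weight-fusion step described above, and the optimizer handling is simplified.

```python
import torch

def sann_epoch(f, g, opt_f, opt_g, X, Y, gamma, alpha_prev, grow):
    """One epoch of the SANN loop (Algorithm 1), as a minimal sketch."""
    # fit the base network f to the training data
    loss_f = torch.mean((f(X) - Y) ** 2)
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    # fit the residual network g to the base network's residuals
    residuals = (Y - f(X)).detach()
    loss_g = torch.mean((g(X) - residuals) ** 2)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # grow if the residual network explains enough of the remaining error
    alpha = torch.mean((f(X) - Y) ** 2).item()
    beta = torch.mean((f(X) + g(X) - Y) ** 2).item()
    if beta / alpha < 1 - gamma and alpha / alpha_prev < 1 - gamma:
        f = grow(f, g)        # fuse the networks, then reset g's weights
        alpha_prev = alpha
    return f, alpha_prev
```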
## 4 Results
### CIFAR
Following AdaNet [11], we compare the performance of these networks on CIFAR-10 classification using histogram data from the CIFAR images. State-of-the-art methods for image classification typically include convolutional neural networks but, following [11], we use histograms of the image data to test the viability of this method. In these CIFAR-10 experiments, we only use histograms of each color distribution, with 40 bins per color, 120 bins total. We evaluate this method on a hold-out set. We test the performance of small fixed networks, small growing networks, large fixed networks, and large growing networks. We test the performance of the large growing networks to ensure that growth does not always occur at all network sizes. We run 10 training runs for each pairwise classification experiment for each network type.
For the classification of deer vs truck, the growing network performs better than the small, fixed network and similarly to the large networks (Figure 2). The large, growing network grows in size a small amount, but maintains a similar mean squared error to the large, fixed network.
In testing the networks' performance on the bird vs frog classification problem, we see that the training does not induce as much growth as that seen in the deer vs truck problem (Figure 3). The growing network, again, performs better than the small, fixed network and similarly to the larger networks.
For the cat vs dog classification problem, neither the small nor the large networks typically grow (Figure 4). This lack of growth means that the residual network typically does not find large, explainable error that the base network is not fitting. The smaller network performs worse than the larger network, but this difference is relatively small (less than 0.01 in mean loss). It is possible that the color histograms for cat vs dog classification do not contain enough information for the neural network to make complex classification decisions; in this case, neither the small nor the large networks grow.
### Imitation Learning
We use imitation learning as another testing task for the adaptive neural network method. We compare performance of networks of varying sizes within the DAgger algorithm [12]. DAgger is a popular method of imitation learning when one has access to the environment and to the expert itself during training. We run the DAgger algorithm to imitate an expert within the PyUXV environment, also used in [13]. In this environment, the agent must navigate around obstacles to reach a goal. Here, we imitate the commands needed to avoid the obstacles.
With 10 runs in each condition, we find that the growing network trains faster than the large and small fixed networks (Figure 5). Also, the growing network achieves a similar score to the large network, which is above the score of the small network.
We use Mujoco as a baseline for testing the adaptive network's performance in behavior cloning. We consider two environments, Ant and Half Cheetah, and perform 10 iterations for each condition in the two environments. We train on 10 expert trajectories and then validate against 10 unseen trajectories. The expert trajectories were generated using [14]. We do not use dropout in these experiments, as we found that dropout reduced the performance of the networks. Instead, we prevent runaway growth by preventing a network from growing again until it achieves its MSE improvement threshold.
In both Ant and Half Cheetah, the MSE of the growing network jumps at certain moments during training. These jumps occur right after the network grows, as the evaluation occurs before the network's new, randomly initialized weights have been trained.
For the Ant environment, the growing network provides gains in MSE over the small, fixed network and a similar MSE to the large network (Figure 6). The three networks attain a similar score in the environment, but the large network's score has less variance.
In the Half Cheetah environment, the network on average grows to be larger than the large, fixed network. However, before reaching the larger size, the growing network achieves a similar MSE to the larger network (ignoring the steps in which one of the networks grows and spikes the MSE). The imitators with the growing network appear to achieve an improved MSE relative to the large, fixed
Figure 5: PyUXV with DAgger
network (Figure 7). Also, the imitators with the small, growing network achieve the highest average environmental rewards.
## 5 Adaptive Networks in Reinforcement Learning
We apply the same adaptive-network technique to the value function within the reinforcement-learning algorithm PPO [15]. We grow the size of the value-function network if its fit can be improved by the addition of the residual network. In all experimental conditions, the policy network is fixed to have two hidden layers that are each 64 nodes wide, while the value function has two hidden layers with varying widths. We do not use dropout in these experiments, and without dropout the networks can experience continuing growth, even for large networks.
We train using PPO with the varying value networks in both the Mujoco Ant and Half Cheetah environments, with five runs for each condition in each environment. In the Ant environment, we see that the growing network performs similarly to the large network (Fig 8). However, the PPO runs with a smaller value network outperform the runs with both the larger and growing value networks. In the Half Cheetah environment, the PPO runs with the growing network begin by performing similarly to the small network, but as the value
Figure 6: Behavior Cloning on Ant
Figure 7: Behavior Cloning on Half Cheetah
network grows, the scores for these runs begin to approach the score of the PPO runs with the large value network (Fig 9).
It appears that the PPO runs with growing value networks can perform similarly to the PPO runs with larger networks, but may take more time to reach a similar proficiency. More work is required to further explore the effectiveness of these growing value networks and to reduce their unnecessary growth.
## 6 Conclusion
Here we studied the technique of fitting a small neural network to the residuals of another neural network and combining those networks if the residual network finds significant, predictable error in the base network's fit. Using test cases from classification, imitation learning, and reinforcement learning, we find that this technique can produce effective networks without having preexisting knowledge of the problem's complexity.
Figure 8: Reinforcement Learning on Ant

Figure 9: Reinforcement Learning on Cheetah

In many cases this method of fitting residuals can achieve evaluation losses that are similar to those of much larger multi-layer perceptrons for several types of problems. In fitting CIFAR data, we find that the networks grow to different sizes for different problems, which may reflect the available information in the histogram data. In imitation learning, we find that the adaptive networks used for behavioral cloning and within the DAgger algorithm can achieve effectiveness similar to that of fixed-size networks that began with much wider hidden layers. The reinforcement-learning runs using an adaptive value network appear to train more slowly than the larger, fixed networks, but can also achieve similar results.
Future work will focus on improving the initialization of new weights during the network combination process using methods like [8]. We would also like to implement growth with varying network depth like in AdaNet ([11]).
## Acknowledgement
This research is in part funded by the Test Resource Management Center (TRMC) and Test Evaluation/Science & Technology (T&E/S&T) Program under contract W900KK-19-C-004.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Test Resource Management Center (TRMC) and Test Evaluation/Science & Technology (T&E/S&T) Program.
Distribution Statement A - Approved for public release.
|
2310.04469 | Taming Binarized Neural Networks and Mixed-Integer Programs | There has been a great deal of recent interest in binarized neural networks,
especially because of their explainability. At the same time, automatic
differentiation algorithms such as backpropagation fail for binarized neural
networks, which limits their applicability. By reformulating the problem of
training binarized neural networks as a subadditive dual of a mixed-integer
program, we show that binarized neural networks admit a tame representation.
This, in turn, makes it possible to use the framework of Bolte et al. for
implicit differentiation, which offers the possibility for practical
implementation of backpropagation in the context of binarized neural networks.
This approach could also be used for a broader class of mixed-integer
programs, beyond the training of binarized neural networks, as encountered in
symbolic approaches to AI and beyond. | Johannes Aspman, Georgios Korpas, Jakub Marecek | 2023-10-05T21:04:16Z | http://arxiv.org/abs/2310.04469v3 | # Taming Binarized Neural Networks
###### Abstract
There has been a great deal of recent interest in binarized neural networks, especially because of their explainability. At the same time, automatic differentiation algorithms such as backpropagation fail for binarized neural networks, which limits their applicability. By reformulating the problem of training binarized neural networks as a subadditive dual of a mixed-integer program, we show that binarized neural networks admit a tame representation. This, in turn, makes it possible to use the framework of Bolte et al. for implicit differentiation, which offers the possibility for practical implementation of backpropagation in the context of binarized neural networks. This approach could also be used for a broader class of mixed-integer programs, beyond the training of binarized neural networks, as encountered in symbolic approaches to AI and beyond.
## 1 Introduction
There has been a great deal of recent interest in binarized neural networks (BNNs) [1, 2, 3], due to their impressive statistical performance [4, e.g.], the ease of distributing the computation [1, 2, 5, e.g.], and especially their explainability. This latter property, which is rather rarely encountered in other types of neural networks, stems precisely from the binary representation of the weights and activation functions of the network, which can be seen as logical rules. This explainability is increasingly mandated by regulation of artificial intelligence, including the General Data Protection Regulation and the AI Act in the European Union, and the Blueprint for an AI Bill of Rights pioneered by the Office of Science and Technology Policy of the White House.
The training of BNNs typically utilizes the Straight-Through-Estimator (STE) [6, 2, 4, 7, 8, 9, 10, 11], which is challenging [12]. The challenge comes from the fact that the weight updates in the back-propagation do not correspond to subgradients of the forward paths [12].
To address this challenge, we draw a new relationship between binarized neural networks and so-called _tame geometry_[13] in this article. We introduce a certain reformulation of the training of BNNs, which allows us to make use of results on implicit differentiation and non-smooth optimization when training the BNNs [14, 15, 16, 17] and, eventually, to obtain weight updates in the back-propagation that do correspond to subgradients of the forward paths in common software frameworks built around automated differentiation, such as TensorFlow or PyTorch. This builds on a long history of work on _tame topology_ and _o-minimal structures_[18, 13, 19, 20, 21, 22, 23, 24, 25, e.g.], long studied in topology, logic, and functional analysis.
Our reformulation proceeds as follows: In theory, the training of BNNs can be cast as a mixed-integer program (MIP). We formulate its subadditive dual, wherein we leverage the insight that conic MIPs admit a strong dual in terms of non-decreasing subadditive functions. We show that this dual problem is tame, or definable in an o-minimal structure. This, in turn, makes it possible to use powerful methods from non-smooth optimization when training the BNN, such as the generalized derivative of [15], which comes equipped with a chain rule. Thus, one can use backpropagation as usual in the training of neural networks.
In the process, we establish a broader class of _nice_ MIPs that admit such a tame reformulation. A MIP is nice if its feasible set is compact, and the graph of the objective function has only a finite number of non-differentiable points. This class could be of independent interest, as it may contain a number of other problems, such as learning causal graphs, optimal decision trees, or certain problems in symbolic regression. We hope that this could bring closer the symbolic approaches, which can often be cast as MIPs, and approaches based on neural networks and backpropagation.
## 2 Background
Let us start with the relevant background material. We begin by introducing the relevant notions of BNNs, MIPs, and their subadditive dual. We discuss how the BNN can be recast as a MIP, and thus, by strong duality, how training the BNN relates to a maximization problem over a set of subadditive functions. Our main goal is to link the BNN with tame geometry, and therefore we discuss the relevant background on o-minimal structures. Finally, we discuss results on implicit differentiation for tame functions, which offers a practical way of training the BNN once we have established its tameness.
Binarized Neural NetworksThere is some ambiguity in the literature as to what constitutes a binarized neural network (BNN). We will follow [26] and refer to a BNN as a neural network where the activation functions take values in the binary set \(\{0,1\}\). A BNN is characterized by a vector \(L=(L_{0},\ldots,L_{n})\) with \(|L|=n\) layers where each layer contains \(L_{\ell}\in\mathbb{N}_{>0}\) neurons \(x_{i}^{(\ell)}\), see Fig. 1. We allow the input layer \(x_{i}^{(0)}\) to take any real values, \(x_{i}^{(0)}\in\mathbb{R}\), while due to binarized activations, the following layers will have \(x_{i}^{(j>0)}\in\{0,1\}\). The neuron \(x_{i}^{(\ell)}\) in the layer \(\ell\) is connected with the neuron \(x_{j}^{(\ell+1)}\) in the layer \(\ell+1\) via a weight coefficient matrix \(w^{(\ell)}\in\mathbb{R}^{L_{\ell}\times L_{\ell+1}}\). Consider an
input vector \(\mathbf{x}=(x_{1}^{(0)},\ldots,x_{L_{0}}^{(0)})\). The _preactivation function_ of the BNN is given as
\[a_{j}^{(\ell+1)}(\mathbf{x})=\sum_{i\in L_{\ell}}w_{ij}^{(\ell)}\sigma_{i}^{(\ell)}(\mathbf{x}), \tag{1}\]
where \(\sigma^{(\ell)}(\mathbf{x})\) is the _activation_ function at layer \(\ell\) with
\[\sigma_{j}^{(\ell)}(\mathbf{x})=\begin{cases}\mathbf{x}&\text{if }\ell=0,\\ 1&\text{if }\ell>0\text{ and }a_{j}^{(\ell)}(\mathbf{x})\geq\lambda_{\ell},\\ 0&\text{otherwise},\end{cases} \tag{2}\]
where \(\lambda_{\ell}\in\mathbb{R}\) is a learnable parameter. Note again that the activation functions of all the neurons in the network of our BNN are constrained in the set \(\{0,1\}\) except for the input layer neurons. This set can be mapped to \(\{-1,1\}\) by a redefinition \(\tilde{\sigma}_{j}^{(\ell)}=2\sigma_{j}^{(\ell)}-1\).
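For concreteness, here is a minimal NumPy sketch of the forward pass defined by Eqs. (1)-(2), including the optional \(\{0,1\}\to\{-1,1\}\) remapping; this is an illustration of the definitions above, not reference code.

```python
import numpy as np

def bnn_forward(x, weights, thresholds):
    """Forward pass of a BNN with {0,1}-valued activations (Eqs. (1)-(2)).
    weights[l] has shape (L_l, L_{l+1}); thresholds[l] is the learnable
    threshold lambda for the corresponding layer."""
    sigma = np.asarray(x, dtype=float)       # input layer is real-valued
    for w, lam in zip(weights, thresholds):
        a = sigma @ w                        # preactivation, Eq. (1)
        sigma = (a >= lam).astype(float)     # binarized activation, Eq. (2)
    return 2.0 * sigma - 1.0                 # optional map {0,1} -> {-1,1}
```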
The BNN can be viewed as a weight assignment \(w=\{w^{(1)},\ldots,w^{(L)}\}\) for a function
\[f_{w}:\mathbb{R}^{L_{0}} \rightarrow\{-1,1\}^{L_{n}} \tag{3}\] \[\mathbf{x}^{(0)} \mapsto\hat{\mathbf{y}}, \tag{4}\]
where \(\hat{\mathbf{y}}=\mathbf{x}^{(L_{N})}\) is the vector of output-layer neurons. BNNs are trained by finding an optimal weight assignment \(w\) that fits and generalizes a training set \(S=\{(\mathbf{x}_{1},\mathbf{y}_{1}),\ldots,(\mathbf{x}_{m},\mathbf{y}_{m})\}\). The traditional approaches of backpropagation and gradient-descent methods in usual deep learning architectures cannot be used directly for training BNNs. For optimizers to work as in standard neural network architectures, real-valued weights are required, so, in practice, when binarized weights and/or activation functions are utilized, one still uses real-valued weights for the optimization step. Another problem is related to the use of deterministic functions (2) or stochastic functions [1] for binarization. During backpropagation, the derivative of these functions is zero almost everywhere, which makes the gradient vanish. A common solution to these problems is to use the Saturated STE (Straight Through Estimator) [27] (see also [28]).

Figure 1: A BNN with \(|L|=4\) layers, \(L=(2,3,3,2)\). Each \(x_{j}^{(\ell)}\), for all \(j\) and \(\ell\), is constrained to take values in \(\{-1,1\}\) while the activations represented by the edges joining the nodes take values in \(\{-1,0,1\}\).

Other possible solutions include the Expectation BackPropagation (EBP) algorithm [29], which is a popular approach to training multilayer neural networks with discretized weights, and Quantized BackPropagation (QBP) [30]. Ref. [12] presents a comprehensive practical survey on the training approaches for BNNs. In this article, we suggest that BNNs can be efficiently trained using nonsmooth implicit differentiation [16].
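To make the STE discussion concrete, here is a minimal PyTorch-style sketch of a saturated straight-through estimator; it is one common variant, not necessarily the exact formulation of [27].

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Binarization with a saturated straight-through gradient."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0).to(x.dtype)   # hard threshold; derivative is 0 a.e.

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Saturated STE: pretend the forward map was the clipped identity,
        # so gradients pass through only where |x| <= 1.
        return grad_output * (x.abs() <= 1).to(x.dtype)

binarize = BinarizeSTE.apply
```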
Mixed-Integer ProgrammingA mixed-integer (linear) program (MIP) is an optimization problem of the form
\[\begin{array}{ll}\max&cx+hy\\ \mathrm{s.t.}&Ax+Gy\leq b\\ &x\in\mathbb{Z}_{\geq 0}\\ &y\in\mathbb{R}_{\geq 0}.\end{array} \tag{5}\]
As illustrated in Figure 2, the feasible set is a subset of the intersection of a polyhedron with the integral grid.
Recasting a BNN as a MIPThe interactions and relations between BNNs and MILPs have been studied in recent literature. For example, in Ref. [31] BNNs with weights restricted to \(\{-1,1\}\) are trained by a hybrid method based on constraint programming and mixed-integer programming. Generally, BNNs with activation functions taking values in a binary set and with arbitrary weights can be reformulated as a MIP [26]. However, the precise form of the corresponding MIP depends on the nature of the loss function. Generally, a loss function for a BNN with \(n\) layers is a map
\[\mathscr{L}:\{0,1\}\times\mathbb{R}^{L_{n}}\to\mathbb{R}, \tag{6}\]
Figure 2: An illustrative example of a mixed-integer set as a subset of \(\mathbb{Z}\times\mathbb{R}\). This is given as the feasible set of the MIP (5).
which allows the BNN to be represented as
\[\begin{array}{ll}\min&\sum_{i=1}^{m}\mathscr{L}\left(y_{i},\hat{y}_{i}\right)\\ \text{s.t.}&\hat{y}^{i}=a^{L}\left(w^{(L)}a^{(L-1)}\left(\ldots a^{(1)}\left(w^{(1)}x_{i}\right)\ldots\right)\right)\\ &w^{(\ell)}\in\mathbb{R}^{L_{\ell}\times L_{\ell+1}},\quad\forall\ell\\ &\lambda_{\ell}\in\mathbb{R},\quad\forall\ell.\end{array} \tag{7}\]
The loss function \(\mathscr{L}\) can be chosen in different ways; for example the 0-1 loss function \(\mathscr{L}(\hat{y},y)=I_{\hat{y},y}\), where \(I\) is the indicator function, or the square loss \(\mathscr{L}(\hat{y},y)=\|\hat{y}-y\|^{2}\).
**Lemma 1** (Rephrasing Theorem 2 in [26]).: _Training BNNs admits a representation as a mixed-integer program. For the classification variant and empirical error as a loss function, the mixed-integer program is linear. For the regression variant and linearly-representable loss functions, the mixed-integer program is linear._
For the proof of Lemma 1, the authors of [26] show that for any real-valued solution \(w=\{w^{(1)},\ldots,w^{(L)}\}\) with real-valued \(\lambda\) there exists a \([0,1]\)-valued redefinition of these variables such that the input of each neuron at each layer is the same. Therefore, we can consider the weights \(w\) and parameters \(\lambda\) to be \([0,1]\)-valued. Furthermore, certain constraints need to be applied so as to bound from above and below the output of each activated neuron. For simplicity, consider the case of a single input vector \(x\equiv x^{(0)}\). Then, they define \(x^{(\ell)}\equiv u^{(\ell)}\) such that \(\hat{y}=u^{(L)}\). This yields the following formulation of the BNN as a non-linear MIP:
\[\begin{array}{ll}\min&\mathscr{L}(y,u^{(L)})\\ \text{s.t.}&w^{(1)}x<M_{1}u^{(1)}+\lambda_{1}\\ &w^{(1)}x\geq M_{1}(u^{(1)}-1)+\lambda_{1}\\ &w^{(\ell)}u^{\ell-1}<M_{\ell}u^{\ell}+\lambda_{\ell},\quad\forall\ell>1\\ &w^{(\ell)}u^{(\ell-1)}\geq M_{\ell}(u^{(\ell)}-1)+\lambda_{\ell},\quad\forall \ell>1\\ &w^{(\ell)}\in[-1,1]^{d_{\ell}\times d_{\ell-1}}\\ &\lambda_{\ell}\in[-1,1]\forall\ell\in[L]\\ &u^{(\ell)}\in\{0,1\}^{d_{\ell}}\quad\forall\ell\in[L]\end{array} \tag{8}\]
where \(M_{1}:=(nr+1)\), \(\|x\|<r\) a Euclidean norm bound, \(n\) the dimension of \(x\), and \(M_{\ell}:=(d_{\ell-1}+1)\). By linearizing the products \(w^{(\ell)}x^{(\ell-1)}\) through the introduction of extra variables \(s^{(\ell)}_{ij}\in[-1,1]\) we get a MILP formulation of the BNN.
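As an illustration of the big-M constraints in (8) for a single binarized neuron, here is a hedged sketch using the PuLP modelling library; the strict inequality is approximated with a small `eps`, and the concrete value of `M` is illustrative rather than the bound derived above.

```python
import pulp

def add_binarized_neuron(prob, w_vars, x, u, lam, M, eps=1e-6):
    """Big-M encoding of u = 1[w.x >= lam], as in the first two
    constraints of (8); strict inequality approximated with eps."""
    pre = pulp.lpSum(w * xi for w, xi in zip(w_vars, x))
    prob += pre <= M * u + lam - eps        # u = 0  =>  w.x <  lam
    prob += pre >= M * (u - 1) + lam        # u = 1  =>  w.x >= lam

n = 3
prob = pulp.LpProblem("bnn_neuron", pulp.LpMinimize)
w = [pulp.LpVariable(f"w{i}", -1, 1) for i in range(n)]
u = pulp.LpVariable("u", cat="Binary")
lam = pulp.LpVariable("lam", -1, 1)
x = [0.5, -1.2, 0.3]                        # one real-valued input
add_binarized_neuron(prob, w, x, u, lam, M=n * 1.5 + 1)  # illustrative M
```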
**Theorem 1** (MILP, paraphrasing Theorem 2 of [26]).: _The BNN (7) can be recast as a
mixed-integer linear program, given by_
\[\min \sum_{i=1}^{m}\mathscr{L}\left(y,u^{(L)}\right)\] (9) s.t. \[w^{(1)}x^{i}<M_{1}u^{(1)}+\lambda_{1}\] \[w^{(1)}x^{i}\geq M_{1}\left(u^{(1)}-1\right)+\lambda_{1}\] \[\sum_{l=1}^{d_{k-1}}s_{l}^{(k)}<M_{k}u^{(k)}+\lambda_{k},\quad \forall k\in[L]\backslash\{1\}\] \[\sum_{l=1}^{d_{k-1}}s_{l}^{(k)}\geq M_{k}\left(u^{(k)}-1\right)+ \lambda_{k},\quad\forall k\in[L]\backslash\{1\}\] \[s_{lj}^{(k)}\leq u_{j}^{(k)},\quad s_{lj}^{(k)}\geq-u_{j}^{(k)},\] \[\forall k\in[L]\backslash\{1\},l\in[d_{k-1}]\,,j\in[d_{k}]\] \[s_{lj}^{(k)}\leq w_{lj}^{(k)}+\left(1-u_{j}^{(k)}\right),\] \[\forall k\in[L]\backslash\{1\},l\in[d_{k-1}]\,,j\in[d_{k}]\] \[W^{k}\in[-1,1]^{d_{k}\times d_{k-1}}\quad\forall k\in[L]\] \[\lambda_{k}\in[-1,1]\quad\forall k\in[L]\] \[u^{i,k}\in\{0,1\}^{d_{k}}\quad\forall k\in[L],i\in[m]\] \[s_{l}^{i,k}\in[-1,1]^{d_{k}}\quad\forall i\in[m],k\in[L] \backslash\{1\},l\in[d_{k-1}]\]
This gives the first step in our aim to link the theory of tame geometry, or o-minimality, to BNNs. The next step is to look at the dual problem of this MILP.
Subadditive dualIn the context of MIPs, the notion of duality is much more involved than in convex optimization. Only recently [32, 33] has it emerged that subadditive duals [34, 35, 36, 37, 38, 39] can be used to establish strong duality for MIPs. To introduce the subadditive dual, we use the modern language of [33, 32]:
**Definition 1** (Regular cone).: _A cone \(K\subseteq\mathbb{R}^{m}\) is called regular if it is closed, convex, pointed and full-dimensional._
If \(x-y\in K\), we write \(x\succeq_{K}y\) and similarly, if \(x\in\text{int}(K)\) we write \(x\succ_{K}0\).
**Definition 2** (Subadditive and non-decreasing functions).: _A function \(f\,:\,\mathbb{R}^{m}\to\mathbb{R}\) is called:_
* subadditive _if_ \(f(x+y)\leq f(x)+f(y)\) _for all_ \(x,y\in\mathbb{R}^{m}\)
* non-decreasing _with respect to a regular cone_ \(K\subseteq\mathbb{R}^{m}\) _if_ \(x\succeq_{K}y\implies f(x)\geq f(y)\)_._
The set of subadditive functions that are non-decreasing with respect to a regular cone \(K\subseteq\mathbb{R}^{m}\) is denoted \(\mathcal{F}_{K}\) and for \(f\in\mathcal{F}_{K}\) we further define \(\bar{f}(x)\coloneqq\limsup_{\delta\to 0^{+}}\frac{f(\delta x)}{\delta}\). Note that this is the upper \(x\)-directional derivative of \(f\) at zero.
Let us start by stating the relation between subadditive functions and MIPs. To this end, we consider a generic conic MIP,
\[z^{*}\coloneqq \inf c^{T}x+d^{T}y, \tag{10}\] \[\text{s.t. }Ax+Gy\succeq_{K}b,\] \[x\in\mathbb{Z}^{n_{1}},\] \[y\in\mathbb{R}^{n_{2}}.\]
Note that problem (10) is a generalization of the primal form of a MILP, as in Thm. 1, which is recovered by setting \(K=\mathbb{R}^{m}_{+}\). We define the subadditive dual problem of (10) as
\[\rho^{*}\coloneqq \sup f(b), \tag{11}\] \[\text{s.t. }f(A^{j})=-f(-A^{j})=c_{j},\quad j=1,\ldots,n_{1},\] \[\bar{f}(G^{k})=-\bar{f}(-G^{k})=d_{k},\quad k=1,\ldots,n_{2},\] \[f(0)=0,\] \[f\in\mathcal{F}_{K},\]
where \(A^{j}\) and \(G^{k}\) denote the \(j\)th and \(k\)th columns of the matrices \(A\) and \(G\), respectively, and \(c_{j},d_{k}\) are the components of the corresponding vectors from the primal MIP.
In general, the subadditive dual (11) is a weak dual to the primal conic MIP (10), where any dual feasible solution provides a lower bound for the optimal value of the primal [40, 41, 33]. Under the assumptions of feasibility, strong duality holds:
**Theorem 2** (Theorem 3 of [32]).: _If the primal conic MIP (10) and the subadditive dual (11) are both feasible, then (11) is a strong dual of (10). Furthermore, if the primal problem is feasible, then the subadditive dual is feasible if and only if the conic dual of the continuous relaxation of (10) is feasible._
That is: Theorem 2 provides a sufficient condition for the subadditive dual to be equivalent to (10). A sufficient condition for the dual feasibility is that the conic MIP has a bounded feasible region.
Properties of subadditive functionsTo show our main result, Theorem 4, we will need to introduce some structural properties of subadditive functions. These are discussed in detail in [42, 43, 44]. For example, if \(f,g\) are two non-decreasing subadditive functions on \(\mathbb{R}^{m}\), then the following hold:
* \(f+g\) is subadditive;
* the composition \(g\circ f\) is subadditive;
* if further \(f\) is non-negative and \(g\) positive on the positive orthant \(\mathbb{R}^{m}_{+}\), then \(f(x)g(x)\) is subadditive on \(\mathbb{R}^{m}_{+}\).
Let us note that, when we set \(K=\mathbb{R}^{m}_{+}\) in (11) we have that \(f(x)\) is non-negative on \(\mathbb{R}^{m}_{+}\) due to the combination of being non-decreasing, subadditive and having the condition \(f(0)=0\).
Following [44], we define properties NT (as in "no trumps") and WNT (for "weak no trumps"):
**Definition 3** (Nt, Def. 1 in [44]).: _For a family \((A_{k})_{k\in\mathbb{N}}\) of subsets of \(\mathbb{R}^{n}\) we say that \(\textbf{NT}(A_{k})\) holds, if for every bounded/convergent sequence \(\{a_{j}\}\) in \(\mathbb{R}^{n}\) some \(A_{k}\) contains a translate of a subsequence of \(\{a_{j}\}\)._
**Definition 4** (WNT, Def. 2 in [44]).: _Let \(f\,:\,\mathbb{R}^{n}\to\mathbb{R}\). We call \(f\) a **WNT**-function, or \(f\in\textbf{WNT}\), if \(\textbf{NT}(\{F^{j}\}_{j\in\mathbb{N}^{*}})\) holds, where \(F^{j}\coloneqq\{x\in\mathbb{R}^{n}\,:\,|f(x)|<j\}\)._
We have the following theorem by Csiszár and Erdős [45], nicely explained in [46]:
**Theorem 3** (No trumps theorem, [45]).: _If \(T\) is an interval and \(T=\bigcup_{j\in\mathbb{N}^{*}}T_{j}\) with each \(T_{j}\) measurable/Baire, then \(\textbf{NT}(\{T_{k}\,:\,k\in\mathbb{N}^{*}\})\) holds._
Here, Baire refers to the functions having "the Baire property", or the set being open modulo some meager set. Note that this is not necessarily related to being definably Baire as in Def. 7.
The following properties are shown in [44]:
* If \(f\) is subadditive and locally bounded above at a point, then it is locally bounded at every point.
* If \(f\in\textbf{WNT}\) is subadditive, then it is locally bounded.
* If \(f\in\textbf{WNT}\) is subadditive and \(\inf_{t<0}f(tx)/t\) is finite for all \(x\), then \(f\) is Lipschitz.
Tame topologyThe subject of tame topology goes back to Grothendieck and his famous "Esquisse d'un programme" [18]. Grothendieck claimed that modern topology was riddled with false problems, which he ascribed to the fact that much of modern progress had been made by analysts. What he proposed was the invention of a geometer's version of topology, lacking these artificial problems from the onset. In later years, this tame topology has been linked to the model-theoretic concept of o-minimal structures, which promise to be a good candidate for Grothendieck's dream.
o-minimal structures are a generalisation of the (semi-)algebraic sets, or the sets of polynomial equations (and inequalities). As such they provide us with a large class of sets and functions that are in general non-smooth and non-convex, while capturing most (if not all) of the popular settings used in modern neural networks and machine learning [14].
An o-minimal structure over \(\mathbb{R}\) is a collection of subsets of \(\mathbb{R}^{m}\) that satisfies certain finiteness properties, such as closure under boolean operations, closure under projections and fibrations. Formally,
**Definition 5** (o-minimal structure).: _An o-minimal structure on \(\mathbb{R}\) is a sequence \(\mathcal{S}=(\mathcal{S}_{m})_{m\in\mathbb{N}}\) such that for each \(m\geq 1\):_
1. \(\mathcal{S}_{m}\) _is a boolean algebra of subsets of_ \(\mathbb{R}^{m}\)_;_
2. _if_ \(A\in\mathcal{S}_{m}\)_, then_ \(\mathbb{R}\times A\) _and_ \(A\times\mathbb{R}\) _belongs to_ \(\mathcal{S}_{m+1}\)_;_
3. \(\mathcal{S}_{m}\) _contains all diagonals, for example_ \(\{(x_{1},\ldots,x_{m})\in\mathbb{R}^{m}\,:\,x_{1}=x_{m}\}\in\mathcal{S}_{m}\)_;_
4. _if_ \(A\in\mathcal{S}_{m+1}\)_, then_ \(\pi(A)\in\mathcal{S}_{m}\)_;_
5. _the sets in_ \(\mathcal{S}_{1}\) _are exactly the finite unions of intervals and points._
Typically, we refer to a set included in an o-minimal structure as being _definable_ in that structure, and similarly, a function, \(f\,:\,\mathbb{R}^{m}\to\mathbb{R}^{n}\), is called definable in an o-minimal structure whenever its corresponding graph, \(\Gamma(f)=\{(x,y)\,|\,f(x)=y\}\subseteq\mathbb{R}^{m\times n}\), is definable. A set, or function, is called _tame_ to indicate that it is definable in some o-minimal structure, without specific reference to which structure.
The modest-sounding definition of o-minimal structures turns out to include many non-trivial examples. First of all, by construction we of course have that the collection of semialgebraic sets is an o-minimal structure, denoted \(\mathbb{R}_{\text{semialg}}\). If this were the only example of an o-minimal structure, it would not have been a very interesting construction. The research in o-minimal structures really took off in the middle of the nineties after Wilkie [47] proved that we can add the graph of the real exponential function, \(x\mapsto e^{x}\), to \(\mathbb{R}_{\text{semialg}}\) to again find an o-minimal structure, denoted \(\mathbb{R}_{\text{exp.}}\). As a result, the sigmoid function, which is a prevalent activation function in numerous neural networks, can be considered tame. Another important structure is found by including the set of restricted real analytic functions, where the domain of an analytic function is restricted to lie in a finite subset of the original domain in a particular way. This gives rise to an o-minimal structure denoted \(\mathbb{R}_{\text{an.}}\)[48]. A classical example of this would be the function \(\sin(x)\), where we restrict \(x\) to lie in a finite interval \(x\in[0,\alpha]\subset\mathbb{R}\) for some \(\alpha<\infty\). Note that without this restriction on the argument, \(\sin(x)\) is not tame. Furthermore, we can construct a very important o-minimal structure by combining \(\mathbb{R}_{\text{exp.}}\) with \(\mathbb{R}_{\text{an.}}\). This gives the structure denoted \(\mathbb{R}_{\text{an,exp.}}\)[48]. It is important to note that the fact that \(\mathbb{R}_{\text{an,exp.}}\) is an o-minimal structure is a non-trivial result. It does not generally hold that the combination of two o-minimal structures gives another o-minimal structure.
We thus see that o-minimal structures capture a very large class of generally non-smooth, non-convex functions. More importantly, they include in principle all classes of functions widely used in modern machine learning applications. The great benefit of this class is that these functions are still _nice_ enough that we retain some control over their behaviour and can, for example, prove convergence to optimal points [49, 14, 15, 50, 51].
Perhaps the most fundamental results regarding o-minimal structures are the monotonicity and cell decomposition theorems. The former states that any tame function of one variable can be divided into a _finite_ union of open intervals, and points, such that it is continuous and either constant or strictly monotone on each interval. The cell decomposition theorem generalizes this to higher dimensions by introducing the concept of a cell, which is the analogue of the interval or point in one dimension. The theorem
then states that any tame function or set can be decomposed into a finite union of definable cells. A related notion is that of stratification of a set. Generally, a stratification is a way of partitioning a set into a collection of submanifolds called strata. There exist many different types of stratifications, characterized by how the different strata are joined together. Two important such conditions are given by the Whitney and Verdier stratifications. Both of these are applicable to tame sets [52, 53]. These results are at the core of many of the strong results on tame functions in non-smooth optimization.
Some more details on tame geometry and o-minimality are given in the supplementary material.
Locally o-minimal structuresThere exist a few weakenings of the notion of an o-minimal structure. One such example is the class of _locally o-minimal structures_[21, 22, 23, 24, 25].
**Definition 6** (Locally o-minimal structure, [22]).: _A definably complete structure \(\mathbb{K}\) extending an ordered field is locally o-minimal if, for every definable function \(f:\mathbb{K}\to\mathbb{K}\), the sign of \(f\) is eventually constant._
Here, definably complete means that every definable subset of \(\mathbb{K}\) has a supremum in \(\mathbb{K}\sqcup\{\pm\infty\}\) and \(X\subseteq\mathbb{K}\) is nowhere dense if \(\text{Int}(\bar{X})\) is empty. Every o-minimal expansion of an ordered field is a definably complete structure (but the converse is not true). Note also that every o-minimal structure is locally o-minimal [22]. Locally o-minimal structures satisfy a property called _definably Baire_:
**Definition 7** (Definably Baire, from [24]).: _A definably complete structure \(\mathbb{K}\) expanding an ordered field is definably Baire if \(\mathbb{K}\) is not the union of a definable increasing family of nowhere dense subsets._
Finally, when we work with structures expanding \((\mathbb{R},+,\cdot,<)\) we have that local o-minimality implies o-minimality [23], while the same is generically not true when we do not have multiplication.
Non-smooth differentiationBolte et al., [15], introduced a generalized derivative, called a conservative set-valued field, for non-smooth functions. The main idea behind this construction is that the conservative fields come equipped with a chain rule. Namely, given a locally Lipschitz function \(f:\,\mathbb{R}^{m}\to\mathbb{R}\), we say that \(D:\,\mathbb{R}^{m}\rightrightarrows\mathbb{R}^{m}\) is a conservative field for \(f\) if and only if the function \(t\mapsto f(x(t))\) satisfies
\[\frac{\mathrm{d}}{\mathrm{d}t}f(x(t))=\langle v,\dot{x}(t)\rangle,\quad\forall v \in D(x(t)), \tag{12}\]
for any absolutely continuous curve \(x:\,[0,1]\to\mathbb{R}^{m}\) and for almost all \(t\in[0,1]\). Having a chain rule is key for applications to backpropagation algorithms and automatic differentiation in machine learning.
Automatic differentiation for non-smooth elementary functions is subtle and even the well-known Clarke generalized gradient is known to introduce complications in this setting. Having a derivative flexible enough to include automatic differentiation was therefore indeed the main motivation behind the work of Bolte et al. In many
ways, we can see the conservative fields as a generalization of the Clarke derivatives. More details are also provided in the supplementary material.
The conservative fields thus provide a flexible calculus for non-smooth differentiation that is applicable to many machine learning situations. In [16], a non-smooth implicit differentiation using the conservative Jacobians is developed. This can be seen as a form of automatic subdifferentiation (backpropagation). The automatic subdifferentiation is an automated application of the chain rule, made available through the use of the conservative fields. It amounts to calculating the conservative Jacobians of the underlying functions. This "conservative subgradient descent" is given by picking an initial value for the parameters, captured by a vector \(v_{0}\) followed by performing the following update in steps
\[\begin{split} v_{k+1}&=v_{k}-\alpha_{k}g_{k},\\ g_{k}&\in J(v_{k}),\end{split} \tag{13}\]
with \((\alpha_{k})_{k\in\mathbb{N}}\) a sequences of step-sizes and \(J(v_{k})\) the conservative Jacobian [16].
This gives a formal mathematical model for propagating derivatives which can be applied to guarantee local convergence of mini-batch stochastic gradient descent with backpropagation for a large number of machine learning problems. In particular, and of great importance for us, these results hold for locally Lipschitz tame functions.
Next, we will show that the subadditive dual of the MIP formulation of the BNN (7) is locally Lipschitz and tame. This will allow us to use the machinery of [15, 16] discussed above when training the BNN.
## 3 Main Result
We will now present the main result of the paper. To do so, we will restrict to a certain subset of conic MIPs, defined below, termed as "nice" conic MIPs, with a natural notion of "niceness" coming from the original problem (BNNs).
**Definition 8**.: _Let us consider a conic MIP (10). Under the following conditions:_
1. _the feasible set is compact,_
2. _the graph of the objective function has a finite number of non-differentiable points,_
_we call the conic MIP "nice"._
From the perspective of applications to BNNs these are natural assumptions. Firstly, the compactness is included in the condition for strong duality of the subadditive dual of the MIP. Secondly, the objective function of the MIP corresponds to the loss function of the BNN, which naturally is given by a finite union of smooth components.
**Proposition 1**.: _The conic MIP of Theorem 1 is nice._
Proof.: The feasible set is contained in a product of copies of the finite set \(\{0,1\}\) (for the binary variables) and compact intervals such as \([-1,1]\) (for the continuous variables), and is hence compact. The objective function is a finite sum of loss functions for the original BNN; as such, it has a finite number of non-differentiable points.
**Theorem 4**.: _Let us consider a nice conic MIP (10) as in Def. 8. For this MIP, there exists an equivalent reformulation which is definable in an o-minimal structure._
Proof.: Let us consider the subadditive dual (11) as the representation of the conic MIP (10). This dual is equivalent by Theorem 2. Furthermore, this dual is locally o-minimal by the no-trumps theorem (Theorem 3) together with the fact that \(f(x)\) is non-decreasing and subadditive. By [23, Remark 22], a compact subset of a locally o-minimal structure is o-minimal. Since the feasible region of the mixed-integer set is assumed to be compact (condition 1 of Def. 8), we thus obtain o-minimality.
**Corollary 1**.: _Training BNNs allow for implicit differentiation and chain rule._
Proof.: This follows from Proposition 1, Theorem 4, and from the work of [15, 16] discussed above, once one realizes that the subadditive dual is locally Lipschitz. Lipschitz continuity follows from [44]: if \(f\in\mathbf{WNT}\) is subadditive and \(\inf_{t<0}f(tx)/t\) is finite for all \(x\), then \(f\) is Lipschitz.
This corollary thus provides us with a practical way of training the BNNs, by utilizing the results of [15, 16] to optimize over the subadditive dual of the corresponding MILP.
Let us finally note that, in general, non-decreasing subadditive functions are not tame. A counterexample is given by the Cantor staircase function [54]. This means that, in general, the subadditive dual of a conic MIP need not fall under the tame setting, and the nice property is necessary for our main result.
## 4 An Example
To make the above discussion more clear, we present a simple example outlining how the training of a BNN could make use of the implicit differentiation of Bolte et al. [16]. To this end, we consider three final layers of a BNN inspired by Example 1 of [38], illustrated in Fig. 3, where there are a number of binary weights \(v_{i}\), \(w_{i}\) to be learned. The penultimate two layers yield a bi-variate continuous-valued output layer \((y_{1},y_{2})\). Instead of the usual empirical risk, we consider an objective function involving a weighted difference from the values of the dependent variable in the training data (assumed to be zero), as well as one of the weights in the penultimate layer, for the sake of a more interesting illustration:
\[\begin{split} z^{*}&:=\min_{x,y}\,2(y_{1}-0)+(y_{2}- 0)+\tfrac{1}{2}x_{1},\\ &\text{s.t.}\,x_{1}-\tfrac{3}{2}x_{2}+y_{1}-y_{2}=b,\\ &\qquad\sum_{i}v_{i}=x_{1}\\ &\qquad\sum_{i}w_{i}=x_{2}\\ &\qquad v_{i},w_{i}\in\{0,1\},\,x_{1},x_{2}\in\mathbb{Z}_{+},\,y _{1},y_{2}\in\mathbb{R}_{+}.\end{split} \tag{14}\]
We note that our theory does cover the case of the usual empirical risk with square loss function, but the illustrations would be more involved due to the non-linearity in the square loss.
Following the definition (11), the subadditive dual of (14) is:
\[\rho^{*}\coloneqq\max_{f\in\mathcal{F}_{\mathbb{R}_{+}}}f(b), \tag{15}\] \[\text{s.t.}\ f(1)\leq\tfrac{1}{2},\] \[\ f(-\tfrac{3}{2})\leq 0,\] \[\ f(1)\leq 2,\] \[\ f(-1)\leq 1,\] \[\ f(0)=0.\]
The subadditive dual problem is obviously an infinite-dimensional optimization problem over the whole space of subadditive functions \(\mathcal{F}_{\mathbb{R}_{+}}\). However, as shown in [55], the subadditive functions of MILPs are Chvátal functions, i.e., piecewise linear. We can thus utilize this knowledge to parametrize the space of subadditive functions relevant to our problem by the slopes, breakpoints and number of segments of the piecewise-linear function. When we consider nice MILPs as in (8), we thus obtain a finite-dimensional problem. It is furthermore evident that we can approximate this problem by truncating the number of segments of the piecewise-linear functions.
For the above example, we start with approximating \(f\) by a piecewise-linear function having two segments. By visual inspection of the behaviour of the value function \(f(b)\) (in solid lines) near the origin in Fig. 4, we see that we can approximate \(f(b)\) by
\[\tilde{f}(b)\coloneqq\begin{cases}&2b,\quad b>0,\\ &-b,\quad b\leq 0,\end{cases} \tag{16}\]
based on the directional derivatives. This crude approximation is shown in Fig. 4 as dashed lines. A conservative field for this function is given by
\[D_{\tilde{f}}(b)=\begin{cases}2,&b>0,\\ [-1,2],&b=0,\\ -1,&b<0.\end{cases} \tag{17}\]
Figure 3: A graphical representation of the three final layers of a BNN we use as an example. See (14) for the corresponding MILP.
It is now clear that we can use the conservative fields of [15, 16] to train over this approximation of the piecewise-linear subadditive dual of the primal problem (14).
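For illustration, here is a literal Python encoding of the approximation (16), its conservative field (17), and one update in the spirit of (13); since the dual (15) is a maximization, the step ascends along a selection from the field.

```python
def f_tilde(b):
    """Two-segment approximation (16) of the value function."""
    return 2.0 * b if b > 0 else -b

def D_f_tilde(b):
    """Conservative field (17) for f_tilde, returned as an interval."""
    if b > 0:
        return (2.0, 2.0)
    if b < 0:
        return (-1.0, -1.0)
    return (-1.0, 2.0)      # set-valued at the kink b = 0

def ascent_step(b, alpha=0.1):
    """One update in the spirit of (13); any selection g from the field
    is admissible, and we ascend because (15) is a maximization."""
    g = D_f_tilde(b)[0]
    return b + alpha * g
```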
More generally, we can introduce slope variables \(s_{1}\) and \(s_{2}\) as well as a breaking point \(p\), to parametrize the two-segment approximation:
\[\tilde{f}(b)\coloneqq\begin{cases}&s_{1}b,\quad b>p,\\ &s_{2}b,\quad b\leq p.\end{cases} \tag{18}\]
and thus find the best two-segment approximation of the piecewise linear subadditive dual of the primal problem (14), which in this case coincides with (16) above. Next, we can increase the precision of the approximation by introducing more and more segments of this approximating function, and optimize over the slopes and break points of the segments, and possibly also the number of segments. Following [56], we have studied formulations based on the optimal regression trees for piecewise regression. See the Supplementary material for details.
Figure 4: The value function, \(f(b)\), of the example (15) together with the simple approximation, \(\tilde{f}(b)\), given by (16).

## 5 Conclusions and Limitations

We have introduced a link between binarized neural networks, and more broadly, nice conic MIPs, and tame geometry. This makes it possible to reuse pre-existing theory and practical implementations of automatic differentiation. Breaking new ground, we leave many questions open. The foremost question relates to the efficiency of algorithms for constructing the subadditive dual. Although Guzelsoy and Ralphs [38, Section 4, Constructing Dual Functions] survey seven very different algorithms, their computational complexity and relative merits are not well understood. For any of those, an efficient implementation (in the sense of an output-sensitive algorithm) would provide a solid foundation for further empirical experiments. Given the immense number of problems in symbolic AI, which can be cast as MIPs, and the excellent scalability of existing frameworks based on automatic differentiation, the importance of these questions cannot be overstated.
AcknowledgementThe research of Jakub Marecek and Johannes Aspman has been supported by European Union's Horizon Europe research and innovation programme under grant agreement No. GA 101070568 (Human-compatible AI with guarantees).
DisclaimerThis paper was prepared for information purposes and is not a product of HSBC Bank Plc. or its affiliates. Neither HSBC Bank Plc. nor any of its affiliates make any explicit or implied representation or warranty and none of them accept any liability in connection with this paper, including, but not limited to, the completeness, accuracy, reliability of information contained herein and the potential legal, compliance, tax or accounting effects thereof. |
2302.08724 | Piecewise Deterministic Markov Processes for Bayesian Neural Networks | Inference on modern Bayesian Neural Networks (BNNs) often relies on a
variational inference treatment, imposing violated assumptions of independence
and the form of the posterior. Traditional MCMC approaches avoid these
assumptions at the cost of increased computation due to its incompatibility to
subsampling of the likelihood. New Piecewise Deterministic Markov Process
(PDMP) samplers permit subsampling, though introduce a model specific
inhomogenous Poisson Process (IPPs) which is difficult to sample from. This
work introduces a new generic and adaptive thinning scheme for sampling from
these IPPs, and demonstrates how this approach can accelerate the application
of PDMPs for inference in BNNs. Experimentation illustrates how inference with
these methods is computationally feasible, can improve predictive accuracy,
MCMC mixing performance, and provide informative uncertainty measurements when
compared against other approximate inference schemes. | Ethan Goan, Dimitri Perrin, Kerrie Mengersen, Clinton Fookes | 2023-02-17T06:38:16Z | http://arxiv.org/abs/2302.08724v2 | # Piecewise Deterministic Markov Processes for Bayesian Neural Networks
###### Abstract
Inference on modern Bayesian Neural Networks (BNNs) often relies on a variational inference treatment, imposing violated assumptions of independence and the form of the posterior. Traditional MCMC approaches avoid these assumptions at the cost of increased computation due to its incompatibility to subsampling of the likelihood. New Piecewise Deterministic Markov Process (PDMP) samplers permit subsampling, though introduce a model specific inhomogenous Poisson Process (IPPs) which is difficult to sample from. This work introduces a new generic and adaptive thinning scheme for sampling from these IPPs, and demonstrates how this approach can accelerate the application of PDMPs for inference in BNNs. Experimentation illustrates how inference with these methods is computationally feasible, can improve predictive accuracy, MCMC mixing performance and provide informative uncertainty measurements when compared against other approximate inference schemes. Code available here.
## 1 Introduction
Since Hamiltonian Monte Carlo (HMC) was first developed for Bayesian inference Neal (2012), sampling methods have seen relatively little application to Bayesian Neural Networks (BNNs). The flexibility, inference diagnostics and asymptotic guarantees of HMC come at the cost of computational complexity, as every data point needs to be passed through to compute the entire likelihood for obtaining the gradient and performing the Metropolis-Hastings correction. As models and data sets have grown, this expense has not been offset by the considerable performance increase in computational hardware. A recent study found that the fitting of an HMC model for ResNet20 required a computational cost equivalent to 60 million SGD epochs to obtain only 240 samples from three chains Izmailov et al. (2021).
To circumvent the computational expense, much research has explored the application of approximate inference through the lens of Variational Inference (VI) Jordan et al. (1999), Wainwright and Jordan (2008), Blei et al. (2017) or through exploiting properties of SGD Mandt et al. (2017). VI replaces the true target distribution with an approximate distribution that can be easily manipulated, typically using a mean-field approach where independence between parameters is assumed. These methods are attractive due to their reduced computational complexity and their amenability to stochastic optimisation. However, the suitability of these methods relies heavily on the expressiveness of the approximate posterior to accurately model the true distribution. Given the known correlations and frequent multimodal structure amongst parameters within BNNs Barber and Bishop (1998); MacKay (1995), a mean-field approximation can be unsuitable for accurate inference. Figure 1 illustrates these properties for a simple BNN. Stochastic gradient MCMC methods such as Stochastic Gradient Langevin Dynamics (SGLD) aim to address this issue, but require prohibitively small and decreasing learning rates to target the posterior, which limits their applicability Nagapetyan et al. (2017).

Figure 1: Example of correlations between the parameters in the first layer of a BNN for a simple regression task. Plot (a) samples of predictive posterior from proposed method, (b) correlation between all parameters on the first layer, (c) kernel density estimate for a single parameter.
This work explores a new set of "exact" inference methods based on Piecewise Deterministic Markov Processes (PDMPs) Davis (1984) to perform Bayesian inference. PDMP methods are able to maintain the true posterior as their invariant distribution during inference whilst permitting subsampling of the likelihood at each update. This property is attractive for BNNs, which are typically of large dimension in terms of both parameters and data sets. Furthermore, previous research has highlighted PDMP methods for favourable performance in terms of mixing and sampling efficiency Bouchard-Cote et al. (2018); Bierkens et al. (2019); Wu and Robert (2017); Bierkens et al. (2020). The dynamics of these samplers are simple to simulate, though the times at which these dynamics are updated are governed by an Inhomogeneous Poisson Process (IPP) which can be difficult to sample. This work explores an adaptive procedure to approximately sample these event times, allowing for approximate inference within the context of BNNs. The contributions of this paper are the following,
* We propose a novel adaptive thinning method for approximately sampling IPP event times
* We develop a GPU-accelerated package for applying these methods to general models
* We evaluate the performance of these methods applied to computer vision tasks using BNNs
* We evaluate the suitability of various PDMP samplers for BNNs and investigate how they can improve predictive accuracy, calibration and posterior exploration when compared against SGLD.
MCMC methods have often been seen as computationally prohibitive for models with many parameters or where modern large data sets are used. It is hoped that this work will demonstrate that approximate inference using MCMC approaches for BNNs can be both practically feasible and offer insightful results, and show how we can build upon exact methods for approximate inference to bridge the gap towards more accurate inference in BNNs.
## 2 Preliminaries
Following the description from Fearnhead et al. (2018), PDMPs are defined by three key components: piecewise deterministic dynamics, an event rate and a transition kernel. For inference, the goal is to design these three components such that we can use the properties of a PDMP to sample from the posterior distribution of our parameters \(\omega\). We represent the deterministic dynamics as \(\Psi(\omega,\mathbf{v},t)\), where \(\mathbf{v}\) is an auxiliary velocity variable to guide posterior exploration with known distribution \(\Phi(\mathbf{v})\) and \(t\) represents time. At random events, these dynamics are updated in accordance with a specified transition kernel. Upon an update event, the piecewise deterministic dynamics of the system update according to the kernel, and the state \(\omega\) at the time of the update event serves as the starting position for the next segment, so that all segments are connected.
An IPP with rate function \(\lambda(\omega(t),\mathbf{v}(t))\) governs the update times for the dynamics. All rate functions in this work rely upon the negative joint log probability of the model,
\[U(\omega)=-\log\Big{(}p(\omega)p(\mathcal{D}|\omega)\Big{)}, \tag{1}\]
where \(p(\omega)\) is a prior or reference measure and \(p(\mathcal{D}|\omega)\) is our likelihood. If these three components are suitably defined, it can be shown that we can use these processes to sample from a posterior distribution. For derivations of how to design these components to target a posterior distribution, the reader can refer to Fearnhead et al. (2018); Vanetti et al. (2017); Davis (1993). We now introduce the samplers used within this work.
### Bouncy Particle Sampler
The dynamics of the Bouncy Particle Sampler (BPS) Bouchard-Cote et al. (2018) are given by \(\Psi(\omega,\mathbf{v},t)=\omega^{i}+\mathbf{v}^{i}t\), where the superscripts indicate the segment of interest. The velocity remains constant within these segments and the parameter space is explored linearly. The velocity is updated at event times given by \(\tau\sim\text{IPP}(\lambda(\omega(t),\mathbf{v}))\), where,
\[\lambda(\omega(t),\mathbf{v})=\max\{0,\nabla U(\omega)\cdot\mathbf{v}^{i}\}. \tag{2}\]
Once an event time is sampled, the state of our variable "bounces" according to a lossless inelastic Newtonian collision,
\[\mathbf{v}^{i+1}=\mathbf{v}^{i}-2\frac{\nabla U(\omega^{i+1})\cdot\mathbf{v} ^{i}}{\left\lVert\nabla U(\omega^{i+1})\right\rVert^{2}}\nabla U(\omega^{i+1}) \tag{3}\]
where \(\omega^{i+1}\) represents the end of the previous segment at time \(\tau\), and serves as the starting position for the following segment. The BPS provides linear dynamics that are simple to simulate, though it relies only on local gradient information, which can lead to inefficient exploration for BNNs. Preconditioning allows us to address this.
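For reference, here is a minimal NumPy sketch of the BPS building blocks: the event rate (2), the Newtonian reflection (3), and the linear drift between events. Illustrative only.

```python
import numpy as np

def bps_rate(g, v):
    """BPS event rate, Eq. (2): positive part of grad U(w) . v."""
    return max(0.0, g @ v)

def bps_bounce(v, g):
    """Newtonian reflection of the velocity off g = grad U(w), Eq. (3)."""
    return v - 2.0 * (g @ v) / (g @ g) * g

def bps_segment(w, v, t):
    """Linear deterministic dynamics between events."""
    return w + v * t
```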
### Preconditioned BPS
To accelerate posterior exploration in directions of interest, we can precondition the gradients to include more
information about the structure of our posterior space. Introduction of a preconditioning matrix \(A\) results in new dynamics of \(\Psi(\omega,\mathbf{v},t)=\omega^{i}+A\mathbf{v}^{i}t\), and a new event rate, \(\lambda(\omega(t),\mathbf{v})=\max\{0,\mathbf{v}\cdot A\nabla U(\omega+A\mathbf{v}t)\}\). Upon events, the velocity is updated according to,
\[\mathbf{v}^{i+1}=\mathbf{v}^{i}-2\frac{A\nabla U(\omega^{i+1})\cdot\mathbf{v} ^{i}}{\|A\nabla U(\omega^{i+1})\|^{2}}A\nabla U(\omega^{i+1}). \tag{4}\]
With careful choice of \(A\), exploration along certain axes can be appropriately scaled. Pakman et al. (2017) propose a preconditioner similar to Li et al. (2015), though our preliminary experimentation found inconsistent results when applied to BNNs. Instead we opt to build on the approach of Bertazzi and Bierkens (2020), where we use variance information from our samples to precondition our dynamics. We choose the preconditioner such that \(A=\text{diag}\big{(}\Sigma^{\frac{1}{2}}\big{)}\), where \(\Sigma\) is the covariance of our samples estimated during a warmup period. As such, we refer to this sampler as the \(\sigma\text{BPS}\).
### Boomerang Sampler
The Boomerang Sampler Bierkens et al. (2020) introduces non-linear dynamics for both parameter and velocity terms, and the inclusion of a Gaussian reference measure for the parameters and velocity \(\mathcal{N}(\omega_{\star},\Sigma_{\star})\otimes\mathcal{N}(0,\Sigma_{\star})\). The first term in this reference measure can be seen as a replacement for the prior in the joint probability over parameters, and the second as the known distribution for the velocity component. The parameters \(\omega_{\star}\) and \(\Sigma_{\star}\) can be specified as a traditional prior, or can be learnt from the data in an empirical approach. Within this work, we will set \(\omega_{\star}\) to the MAP estimate. In the original paper, \(\Sigma_{\star}\) is set to the inverse of the Hessian; however, this can be computationally prohibitive for BNNs. Instead, we sum over first order gradients at \(\omega_{\star}\), and then compute the derivative of this sum, which is then inverted and scaled such that,
\[\Sigma_{\star}=\gamma\Big{[}\sum_{i=0}^{N-1}\nabla_{\omega}\sum_{j=0}^{P-1} \nabla_{\omega}p(\mathcal{D}_{i}|\omega_{\star})_{j}\Big{]}^{-1} \tag{5}\]
where \(N\) is the number of mini-batches present, \(P\) is the number of parameters, the subscript \(j\) indicates summation over parameter gradients in our model and \(\gamma\) is a hyperparameter to adjust the scale as needed. Unlike the BPS samplers, the velocity does not remain constant between events. The dynamics of the Boomerang sampler for \(\omega\) and \(\mathbf{v}\) within events are given by \(\Psi(\omega,\mathbf{v},t)_{\omega}=\omega_{\star}+(\omega^{i}-\omega_{\star})\cos(t)+\mathbf{v}^{i}\sin(t)\), \(\Psi(\omega,\mathbf{v},t)_{\mathbf{v}}=-(\omega^{i}-\omega_{\star})\sin(t)+\mathbf{v}^{i}\cos(t)\), where the subscripts denote the parameter and velocity trajectories within the deterministic segment. The event rate is the same as for the BPS, and the starting velocity for the next segment is updated upon events as,
\[\mathbf{v}^{i+1}=\mathbf{v}^{i}-2\frac{\nabla U(\omega^{i+1})\cdot\mathbf{v} ^{i}}{\left\|\Sigma_{\star}^{\frac{1}{2}}\nabla U(\omega^{i+1})\right\|^{2}} \Sigma_{\star}\nabla U(\omega^{i+1}). \tag{6}\]
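The Boomerang dynamics and bounce admit an equally compact sketch; note that \(\|\Sigma_{\star}^{\frac{1}{2}}\nabla U\|^{2}=\nabla U\cdot(\Sigma_{\star}\nabla U)\) for symmetric \(\Sigma_{\star}\), which the sketch exploits. Illustrative only.

```python
import numpy as np

def boomerang_flow(w, v, w_star, t):
    """Exact Boomerang dynamics between events: elliptical orbits
    around the reference point w_star."""
    dw = w - w_star
    return (w_star + dw * np.cos(t) + v * np.sin(t),   # parameter path
            -dw * np.sin(t) + v * np.cos(t))           # velocity path

def boomerang_bounce(v, grad_U, Sigma_star):
    """Velocity reflection at an event, Eq. (6)."""
    Sg = Sigma_star @ grad_U
    return v - 2.0 * (grad_U @ v) / (grad_U @ Sg) * Sg
```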
### Velocity refreshment
All of the samplers introduced fail to target the posterior when using the above dynamics alone. Introduction of a refreshment step rectifies this, governed by a homogeneous Poisson process \(\tau_{ref}\sim\text{PP}(\lambda_{ref})\). When \(\tau_{ref}<\tau\), the velocity is instead randomly sampled from the known reference distribution \(\Phi(\mathbf{v})\), and \(\tau_{ref}\) is used for the update event time. For BPS samplers in this work we use a refreshment distribution of the form \(\mathcal{N}(0,\sigma^{2})\), where \(\sigma\) is a hyper-parameter to be set, and the Boomerang sampler requires \(\Phi(\mathbf{v})=\mathcal{N}(0,\Sigma_{\star})\). A summary of PDMP algorithms for inference is given in Supp. Material B.
### Problems with the event rate
Given the simple deterministic dynamics illustrated for these samplers, the main challenge in implementing these methods lies in sampling the event times. Analytic sampling from \(\text{IPP}\big{(}\lambda(t)\big{)}\) requires being able to invert the integral of the event rate w.r.t. time,
\[\Lambda(t)=\int_{0}^{\tau}\lambda(t)dt=\int_{0}^{\tau}\max\{0,\mathbf{v}\cdot A \nabla U(\omega(\mathbf{v},t)\}dt, \tag{7}\]
where \(A=\mathbf{I}\) for the BPS and Boomerang samplers. Inverting the above integral is feasible only for simple models. A general approach for sampling from IPPs is available through thinning Lewis and Shedler (1979). This requires introducing an additional rate function \(\mu(t)\) from which we can sample, and which is a strict upper bound on the event rate of interest, such that \(\mu(t)\geq\lambda(t)\) for all \(t\geq 0\).
The efficiency of any thinning scheme relies on the tightness of the upper bound; the greater the difference between the upper bound and the true rate, the more likely a proposed time will be rejected when sampling. Pakman et al. (2017) propose a Bayesian linear regression method to generate an upper bound suitable for thinning, though require the calculation of the variance of the gradients to formulate a suitable upper bound. They calculate this variance empirically, which requires computing the gradient for each data point individually within a mini-batch. This computation prohibits its use for BNNs where automatic differentiation software is used. Furthermore, the solution to the regression requires matrix inversion, which can be numerically unstable without a strong prior, limiting its application for accelerating sampling within larger models. In the next section, we address this issue by instead introducing an interpolation-based scheme for creating efficient and adaptive approximate upper bounds that avoid excessive gradient computations and the numeric instability of matrix inversion.
## 3 Adaptive bounds for samplers
### Sampling from ipps with linear event rates
Our goal is to create a piecewise-linear envelope \(h(t)\) that will serve as an approximate upper bound of our true event rate, where each segment in \(h(t)\) is represented by \(a_{i}t+b_{i}\). This envelope will serve as the event rate for a proposal IPP that will be suitable for use with the thinning method of Lewis and Shedler (1979). Acceptance of an event time \(t\) is given by,
\[U\leq\frac{\lambda(t)}{h(t)}, \tag{8}\]
where \(U\sim\text{Uniform}[0,1]\). We begin by building on the work of Klein and Roberts (1984) to demonstrate how to sample times from an IPP with a piecewise-linear event rate which we can use with thinning.
Within our proposal IPP with rate \(h(t)\), we wish to generate the next event time \(t_{i}\) given the previous event \(t_{i-1}\). The probability of an arrival or event occurring within the range \([t_{i-1},t_{i}]\) is given by Devroye (2006),
\[F(t_{i})=1-\text{exp}\{-(\Lambda(t_{i})-\Lambda(t_{i-1}))\} \tag{9}\]
We can solve this expression for \(t_{i}\) by setting \(F(t_{i})=U\) and inverting, giving
\[t_{i}=\Lambda^{-1}\big{(}\Lambda(t_{i-1})-\log(1-U)\big{)} \tag{10}\]
where \(U\sim\text{Uniform}[0,1]\). For linear segments, the solution to this system can be written as Klein and Roberts (1984),
\[t_{i}=\big{(}-b_{i}+\sqrt{b_{i}^{2}+a_{i}^{2}t_{i-1}^{2}+2a_{i}b_{i}t_{i-1}-2a_{i}\log(1-U)}\big{)}/a_{i}. \tag{11}\]
This provides a framework for sampling from IPPs with a linear event rate. We now describe how we create a piecewise-linear envelope for a proposal process that can be used for thinning.
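Below is a hedged NumPy sketch of Equation 11, handling the constant-rate and no-event edge cases that the closed form leaves implicit; it assumes the segment rate stays non-negative up to the returned time.

```python
import numpy as np

def next_event_time(a, b, t_prev, rng=None):
    """Sample the next event of an IPP whose rate on the current segment
    is h(t) = a*t + b (Equation 11); returns None if no event occurs."""
    rng = rng or np.random.default_rng()
    E = -np.log(1.0 - rng.uniform())      # Exp(1) increment of Lambda
    if abs(a) < 1e-12:                    # (near-)constant rate segment
        return t_prev + E / b if b > 0 else None
    disc = b**2 + a**2 * t_prev**2 + 2*a*b*t_prev + 2*a*E
    if disc < 0:                          # rate decays to zero first
        return None
    return (-b + np.sqrt(disc)) / a
```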
### Piecewise interpolation for event thinning
We begin by introducing a modified event rate for which we will form our envelope,
\[\hat{\lambda}(\omega(t),\mathbf{v})=\max\{0,\alpha\nabla U(\omega(t))\cdot\mathbf{v}\}, \tag{12}\]
where \(\alpha\geq 1\) is a positive scaling factor to control the tightness of the approximate bound on the rate. The use of \(\hat{\lambda}\) for creating our envelope is valid, since for values of \(\alpha\geq 1\), \(\hat{\lambda}(t)\geq\lambda(t)\). The scaling factor included in this event rate is designed to provide flexibility to end users with respect to computational time and bias that will be introduced during inference. The closer \(\alpha\) is to one, the lower the probability for rejection of proposed event times, but the greater the probability that the generated event rate will not be a strict upper bound.
Our goal is to create a piecewise-linear upper bound suitable for proposing event times using Equation 11. To achieve this, we maintain two growing sets, one for proposed event times \(T=\{t_{0},...,t_{n}\}\) and one for the values of the adjusted event rate at these times \(L=\{\hat{\lambda}(t_{0}),\ldots,\hat{\lambda}(t_{n})\}\), from which we can create a set of linear functions,
\[h(t)=a_{i}t+b_{i},\qquad t\geq t_{i}. \tag{13}\]
The values for \(a_{i}\) and \(b_{i}\) are found by interpolating between the points \((t_{i-1},\hat{\lambda}(t_{i-1}))\) and \((t_{i},\hat{\lambda}(t_{i}))\).
Figure 2: Example of the progression of the proposed envelope scheme used for thinning. The blue line represents the true event rate, the orange section depicts the active region from which we sample a new proposal time, and the red section depicts previous segments in the envelope. Starting from the left, an initial segment is found by interpolating between time points \(t_{0}\) and \(t_{init}\). In the next segment, the active region of the envelope is found by interpolating between the two prior points, which extends to create a new segment to propose times. This process continues until a proposed time is accepted by thinning.

At the beginning of every deterministic PDMP segment, the sets \(T\) and \(L\) will be empty. To initialise the sets and create our first linear segment, we evaluate the event rate at two points, \(t_{0}=0\) and \(t=t_{init}\), where \(t_{init}>t_{0}\). To find the values of \(a_{0}\) and \(b_{0}\), we interpolate between these two points. Once the values for the first linear segment are found, \(t_{0}\) and \(\hat{\lambda}(t_{0})\) are appended to their corresponding sets, and \(t_{init}\) and \(\hat{\lambda}(t_{init})\) are discarded. With this initial linear segment, we can propose a time \(t_{i}\) through Equation 11. This proposed time is either accepted or rejected via Equation 8.
If the proposed time is accepted, then the dynamics of the PDMP sampler are updated at the given event time and the sets \(T\) and \(L\) are cleared, ready to be reinitialised for the new dynamics. If the time is rejected, the proposed time \(t_{i}\) and envelope evaluation \(\hat{\lambda}(t_{i})\) are appended to their respective sets, and a new linear segment is calculated to interpolate between this rejected proposal and the previous elements in the sets \(T\) and \(L\). The rejected proposal time then serves as the new starting point \((t_{i-1})\) for the new linear segment to propose the next time using Equation 11. This continues until a proposed time is accepted. The process is depicted visually in Figure 2 and summarised in Supp. Material A.
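A minimal sketch of this loop, reusing `next_time_linear` from the sketch above, is given below; a production implementation would additionally need to guard against degenerate (non-positive) envelope segments.

```python
import numpy as np

def adaptive_thinning(lam, lam_hat, t_init, rng=np.random.default_rng(0)):
    """Sample one event time by thinning against a growing piecewise-linear envelope."""
    T, L = [0.0], [lam_hat(0.0)]
    a = (lam_hat(t_init) - L[0]) / t_init   # initial segment through (0, .) and (t_init, .)
    b = L[0]                                # t_init itself is then discarded
    while True:
        t = next_time_linear(a, b, T[-1], rng)           # proposal from IPP(h), Eq. (11)
        if rng.uniform() <= lam(t) / max(a * t + b, 1e-12):
            return t                                     # accepted event time, Eq. (8)
        T.append(t)                                      # rejected: grow the envelope and
        L.append(lam_hat(t))                             # interpolate the two newest points
        a = (L[-1] - L[-2]) / (T[-1] - T[-2])
        b = L[-1] - a * T[-1]
```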
Within this work, we limit ourselves to models where the envelope provided by \(h(t)\) is only an approximate upper bound, meaning bias will likely be introduced during inference. Violations can be diagnosed through the acceptance ratio \(\lambda(t)/h(t)\): if this value is greater than one, the condition of \(h(t)\) being a local upper bound is violated. The amount of bias introduced can be mitigated by increasing the scaling factor \(\alpha\) in Equation 12 at the expense of increased computational load. This property is investigated in Supp. Material C. In the following sections we evaluate the proposed event thinning scheme for BNNs to identify the suitability of different samplers for inference in these challenging models, and how they can outperform other stochastic approximation methods in terms of calibration, posterior exploration, sampling efficiency and predictive performance.
## 4 Related Work
As discussed, the BPS requires incorporation of a reference Poisson process to sample from the posterior. The Generalised BPS Wu and Robert (2017) is an updated variant of this algorithm that incorporates a stochastic update of the velocity, which alleviates the need for a refreshment process. Simulations have shown comparable performance to the BPS for simple models, and the variant can reduce the need for fine-tuning the reference parameter \(\tau_{ref}\).
Another prominent sampler is the Zig-Zag Process (ZZP) (Bierkens et al., 2019), where at events the dynamics of a single parameter are updated. In the one-dimensional case, this sampler represents the same process as the BPS. This sampler has shown favourable results in terms of mixing performance and can achieve ergodicity for certain models where the BPS cannot. A key characteristic of this method is that each parameter is assigned an individual event rate, making implementation for high-dimensional BNN models challenging.
Another class of methods designed for subsampling is discrete-time stochastic MCMC Wenzel et al. (2020); Chen et al. (2014); Ma et al. (2015); Welling and Teh (2011); Li et al. (2015). These methods have shown favourable performance, with a recent variant achieving comparable predictive accuracy on the ImageNet data set Heek and Kalchbrenner (2019). Compared to algorithms related to PDMPs, it has been shown that the high variance related to naive subsampling limits these methods to providing only an approximation to the posterior Betancourt (2015). The bias introduced by subsampling can be controlled by reducing the step size for these methods at the expense of mixing performance and posterior exploration Nagapetyan et al. (2017); Brosse et al. (2018); Teh et al. (2016). We investigate the effect of these properties for SGLD and compare performance with PDMP samplers in the following section.
## 5 Experiments
We now validate the performance of PDMPs using the proposed event sampling method on a number of synthetic and real-world data sets for regression and classification. To analyse performance for predictive tasks, the predictive posterior needs to be evaluated. In this work, we discretise samples from the trajectory to allow for Monte Carlo integration,
\[p(y^{*}|x^{*},\mathcal{D}) =\int\pi(\omega)p(y^{*}|\omega,x^{*})d\omega\] \[\approx\frac{1}{N}\sum_{i=1}^{N}p(y^{*}|\omega_{i},x^{*})\ \ \ \ \omega_{i}\sim\pi(\omega). \tag{14}\]
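In code, this estimate reduces to an average of the per-sample likelihoods over the retained parameter samples; `predict_fn` is an assumed callable returning \(p(y^{*}|\omega,x^{*})\).

```python
import numpy as np

def predictive_posterior(predict_fn, omega_samples, x_star):
    """Monte Carlo estimate of Eq. (14) over posterior samples of the weights."""
    probs = np.stack([predict_fn(w, x_star) for w in omega_samples])
    return probs.mean(axis=0)   # averaged class probabilities
```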
For our experiments, we discretise the trajectory of our posterior dynamics to the parameter values encountered at event times. Experimentation is first conducted on multiple small data sets to allow us to easily visualise predictive performance and uncertainty in our models, followed by more difficult classification tasks with Bayesian Convolutional Neural Networks (CNNs) on real data sets. For all experimentation, we set our scaling factor from Equation 12 to \(\alpha=1.0\) to promote computational efficiency. To enable these experiments, we deliver a Python package titled Tensorflow PDMP (TPDMP). This package utilises the Tensorflow Probability library Dillon et al. (2017), allowing for hardware acceleration and graph construction of all our models to accelerate computation. We deliver kernels to implement the BPS, \(\sigma\)BPS and Boomerang sampler with our proposed event thinning scheme.
### Regression and Binary Classification with BNNs
To visualise predictive performance and uncertainty estimation, regression and binary classification tasks were formed on synthetic data sets. The networks used for these tasks are described in Supp. Material G. Before sampling, a MAP estimate was first found using stochastic optimisation and was used to initialise each sampler. 2,000 samples were generated using each sampling method, with each sampler initialised from the same MAP estimate. The \(\sigma\)BPS requires an additional warmup period to identify suitable values for the preconditioner. We achieve this by performing 1,000 initial samples using the BPS, and standard deviation parameters used for the preconditioner are estimated from these samples using the Welford algorithm Welford (1962). These preconditioner values are then fixed throughout the sampling process. For the Boomerang Sampler, the preconditioner scaling factor from Equation 5 is set to \(\gamma=500.0\). The PDMP methods are compared against SGLD, which starts with a learning rate that decays to zero as required Welling and Teh (2011); Nagapetyan et al. (2017), and with no decay of the learning rate as is commonly done in practice (SGLD-ND). Examples of the predictive posterior distribution for regression and binary classification are shown in Figures 3 and 4 respectively, with full analysis in Supp. Material D. All PDMP models are fit with the proposed adaptive event thinning procedure.
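For reference, the Welford update used to estimate the preconditioner standard deviations can be written as the following streaming estimator (a sketch, not the code of the released package):

```python
import numpy as np

class Welford:
    """Streaming mean/standard-deviation estimator (Welford, 1962)."""
    def __init__(self, dim):
        self.n, self.mean, self.m2 = 0, np.zeros(dim), np.zeros(dim)

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)   # uses the updated mean

    def std(self):
        return np.sqrt(self.m2 / max(self.n - 1, 1))
```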
Results from these experiments affirm that inference from the PDMP models is suitable for predictive reasoning, with low variance seen within the range of observed data and greater variance as distance from observed samples increases, or increased uncertainty along decision boundaries. This is in contrast to SGLD, which is unable to offer suitable predictive uncertainty, even in the case of larger non-decreasing learning rates. This highlights the known limitations of SGLD: with a decaying learning rate it can fail to explore the posterior, and with a larger non-decreasing learning rate it will converge to the dynamics offered by traditional SGD Brosse et al. (2018); Nagapetyan et al. (2017).
These tests indicate promising performance in terms of predictive accuracy and uncertainty estimates. To further demonstrate classification performance, we move to larger and more complicated models for performing classification on real world data sets.
### Multi-Class Classification
We now evaluate the performance of the proposed sampling procedures on the popular MNIST LeCun et al. (1998), Fashion MNIST Xiao et al. (2017), SVHN Netzer et al. (2011), CIFAR-10 and CIFAR-100 Krizhevsky and Hinton (2009) data sets using CNNs. For MNIST and Fashion-MNIST, the LeNet5 architecture was used, whilst for SVHN, CIFAR-10, and CIFAR-100 the modified ResNet20 architecture from Wenzel et al. (2020) was used. Each parameter was again assigned a standard normal prior.
Similar to the regression experiments, a MAP estimate is found and used to initialise each sampler. 2,000 samples for each model are then generated, though a thinning factor of 10 is used to reduce the number of returned samples used for prediction to 200. For these models we measure predictive performance and calibration through the Negative Log-Likelihood (NLL) and Expected Calibration Error (ECE) Guo et al. (2017), and evaluate sampling efficiency through the Effective Sample Size (ESS) of our samples Robert et al. (1999). Due to the high dimension of our models, we perform PCA on the returned samples and project them onto the first principal component to report ESS along the direction of greatest variance, i.e., most exploration of the posterior. In Supp. Material G.2 we report ESS for other principal components. In these experiments, we include SGLD with no decay of the learning rate (SGLD-ND) for comparison alongside original SGLD. A full description of the models used and the experiment parameters is given in Supp. Material G; results are summarised in Table 1.

Figure 3: Examples of the different PDMP samplers using the proposed event thinning procedure on the synthetic regression task, compared against SGLD with a decaying learning rate and with a constant learning rate (SGLD-ND).

Figure 4: Examples of the predictive mean and variance for the synthetic classification task. The left column illustrates results using the BPS and the right using SGLD. We see increased uncertainty for the BPS around the decision boundary, whilst SGLD shows greater certainty.
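The ESS-on-first-principal-component diagnostic described above can be sketched as below, using a simple initial-positive-sequence autocorrelation estimator; any standard ESS estimator could be substituted.

```python
import numpy as np

def ess_first_pc(samples):
    """ESS of MCMC samples (n, d) projected onto their first principal component."""
    x = samples - samples.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    z = x @ vt[0]                     # scores on the first principal component
    z = (z - z.mean()) / z.std()
    n = len(z)
    acf = np.correlate(z, z, mode="full")[n - 1:] / n
    acf = acf / acf[0]
    rho_sum = 0.0
    for rho in acf[1:]:
        if rho < 0:                   # truncate at the first negative lag
            break
        rho_sum += rho
    return n / (1.0 + 2.0 * rho_sum)
```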
These results highlight favourable performance for certain samplers. The BPS sampler is unable to provide calibrated predictions; however, we see the Boomerang sampler is able to consistently outperform other samplers in terms of calibration, NLL and, importantly, effective sample size, whilst also promoting competitive or improved predictive accuracy. This highlights the potential of the Boomerang sampler for probabilistic inference within neural networks.
With measures of predictive performance and ESS for our models, we wish to further investigate the mixing properties of the samplers to identify how well the posterior space is being explored. ESS only approximates the number of independent samples within our MCMC chain; we are also interested in how well the support of the posterior is being explored. Given the large number of parameters within a BNN, it is infeasible to evaluate the coordinate trace and autocorrelation plots for individual parameters, as is typically done for MCMC models. To address this, we perform PCA to reduce the dimension of our data and investigate the trace plots of the first principal component, as illustrated in Figure 5. From these figures we can identify strong correlation between samples from the BPS, \(\sigma\)BPS and SGLD-ND solutions.
\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline
**Dataset** & **Inference** & **ACC** & **NLL** & **ECE** & **ESS** \\
\hline \hline
\multirow{6}{*}{MNIST} & SGD & 0.9920 & 0.0240 & **0.1259** & - \\
 & SGLD & 0.9908 & 0.0903 & 6.0931 & 19.8891 \\
 & SGLD-ND & **0.9925** & 0.02409 & 0.2409 & 2.8730 \\
 & BPS & 0.9905 & 1.0103 & 61.9818 & 2.6426 \\
 & \(\sigma\)BPS & 0.9895 & 0.9845 & 1.0445 & 2.7544 \\
 & Boomerang & 0.9920 & **0.02336** & 0.18157 & **200.0** \\
\hline
\multirow{6}{*}{Fashion-MNIST} & SGD & 0.9908 & 0.3329 & 4.5054 & - \\
 & SGLD & **0.9106** & 0.3042 & 5.5169 & 19.8457 \\
 & SGLD-ND & 0.9100 & 0.3093 & 3.9974 & 2.8729 \\
 & BPS & 0.9078 & 0.4102 & 14.6252 & 2.7148 \\
 & \(\sigma\)BPS & 0.9088 & 0.3283 & 4.4399 & 2.6489 \\
 & Boomerang & 0.9102 & **0.3024** & **3.6173** & **200.0** \\
\hline
\multirow{6}{*}{SVHN} & SGD & 0.9456 & 0.1989 & 0.7776 & - \\
 & SGLD & **0.9510** & 0.1899 & 1.9623 & 19.8271 \\
 & SGLD-ND & 0.9464 & 0.1962 & 0.5482 & 2.8730 \\
 & BPS & 0.9466 & 0.5557 & 32.3369 & 2.6943 \\
 & \(\sigma\)BPS & 0.9466 & 0.1915 & **0.4147** & 2.7437 \\
 & Boomerang & 0.9437 & **0.1836** & 0.4516 & **175.9573** \\
\hline
\multirow{6}{*}{CIFAR-10} & SGD & 0.8057 & 0.9916 & 115.1181 & - \\
 & SGLD & 0.8058 & 0.8483 & 13.3109 & 19.888 \\
 & SGLD-ND & 0.8057 & 0.9730 & 14.9595 & 2.8730 \\
 & BPS & 0.7919 & 1.1050 & 39.0526 & 2.6772 \\
 & \(\sigma\)BPS & 0.8002 & 0.7903 & 11.2126 & 2.6624 \\
 & Boomerang & **0.8074** & **0.6980** & **8.0565** & **200.0** \\
\hline
\multirow{6}{*}{CIFAR-100} & SGD & 0.6384 & 1.4516 & 12.3612 & - \\
 & SGLD & 0.6369 & 1.4513 & 12.3800 & 20.2671 \\
 & SGLD-ND & 0.6372 & 1.4512 & 12.4773 & 2.8123 \\
 & BPS & 0.5911 & 2.2989 & 41.1297 & 2.7135 \\
 & \(\sigma\)BPS & 0.6394 & 1.3832 & 7.6427 & 2.7288 \\
 & Boomerang & **0.6425** & **1.3262** & **4.9786** & **163.5605** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Summary of predictive performance using PDMP samplers with the proposed event time sampling method. Negative log-likelihood (NLL) is reported, along with calibration measured using the expected calibration error (ECE) Guo et al. (2017). Effective sample size (ESS) is measured over the first principal component of the samples.
Figure 5: Example of ACF and trace plots for the first principal component of the samples from the network fit on the SVHN dataset.
Figure 6: Examples from the predictive posterior for difficult-to-classify samples from SVHN. The top row shows the original image and the bottom row shows the predictive distribution for the Boomerang sampler and SGLD. The mean for each class is represented by a dot, and the 95% credible intervals are shown with error bars.
SGLD offers reduced correlation in samples; however, as seen in the trace plot, its samples fail to explore the posterior and instead converge to a steady state, whilst the Boomerang sampler provides considerably reduced correlation and more favourable mixing. Convergence of the SGLD samples can be attributed to the reduction in learning rate required to target the posterior. We verify this result in Supp. Material E, where we provide further analysis of the results from all networks and the remaining principal components. The effect of this convergence on predictive uncertainty is illustrated in Figure 6, where the PDMP sampler is able to provide more meaningful uncertainty estimates for difficult-to-classify samples, while the SGLD predictions converge to something resembling a point estimate. Additional examples of the predictive distributions are shown in Supp. Material J.
Probabilistic methods have shown favourable performance in terms of Out-of-Distribution (OOD) detection Grathwohl et al. (2019); Maddox et al. (2019). Given the point-estimate-like nature of the results returned by SGLD, we compare with results from the Boomerang sampler to see whether both can offer similar performance on OOD data. We see in Figure 7 that the Boomerang sampler offers greater entropy for OOD data, indicating a desirable increase in the model's aleatoric uncertainty. Additional analysis is provided in Supp. Material I. Given the consistent predictive performance, quality of uncertainty estimates and posterior exploration, we recommend that researchers wishing to apply MCMC methods to BNNs consider using the Boomerang sampler.
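The entropy compared in Figure 7 is that of the averaged predictive distribution; a minimal sketch:

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of the mean prediction; `probs` has shape (samples, inputs, classes)."""
    p_bar = probs.mean(axis=0)
    return -np.sum(p_bar * np.log(p_bar + 1e-12), axis=-1)
```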
## 6 Discussion and Limitations
Whilst the PDMP methods have shown favourable performance for predictive accuracy, calibration and uncertainty in BNNs, there are certain challenges with fitting them. The PDMP samplers used within this work are designed to target the joint distribution,
\[p(\omega,\mathbf{v})=\pi(\omega)\Phi(\mathbf{v}) \tag{15}\]
where \(\pi(\omega)\) is the target posterior and \(\Phi(\mathbf{v})\) is the distribution of the auxiliary velocity components, which needs to be set by the user in the form of the refreshment distribution. For the BPS and \(\sigma\)BPS samplers, it has been shown that the reference distribution may be a Gaussian or restricted to the unit hypersphere Bouchard-Cote et al. (2018). For the Boomerang sampler, the velocity distribution is designed with respect to a reference measure to ensure invariance to the target distribution, such that \(\Phi(\mathbf{v})=\mathcal{N}(0,\Sigma_{\star})\), where \(\Sigma_{\star}\) is the same factor used to precondition the dynamics. The choice of distribution used for the velocity component has an explicit effect on the mixing capabilities of the models when applied to BNNs. We demonstrate this effect in Supp. Material F.1. We find that a velocity distribution with too much variance can cause effects similar to the divergences seen in HMC and NUTS. Furthermore, we see that with the variance set too low, the samplers can fail to explore the posterior sufficiently to provide the desired meaningful uncertainty estimates. A similar effect can be seen for the choice of refreshment rate, which we investigate in Supp. Material F.2. We highlight these limitations as areas for future research to enable robust application of PDMP methods for BNNs.
The Boomerang sampler as implemented within this work and the original paper is probabilistic, though not purely Bayesian. This is because the reference measure for the velocity, which guides the dynamics of the system, is identified from the data itself. A strictly Bayesian approach can be recovered by setting the reference measure and associated preconditioner matrix from a prior distribution, though we would lose some of the favourable performance offered by this sampler. The approach implemented here can be viewed as a form of empirical Bayes, where we glean information about the prior (the reference measure for the Boomerang sampler) from the data itself. Given the difficulty of specifying a meaningful and informative prior, and the success seen when using empirical priors for BNNs Krishnan et al. (2020), we believe the use of such an approach for the Boomerang sampler is justified.
## 7 Conclusion
Within this work we demonstrate how PDMPs can be used for BNNs. We provide a flexible piecewise-linear bound to enable sampling of event times within these frameworks, which permits inference in BNNs. A GPU-accelerated software package is offered to increase the availability of PDMPs for a wide array of models. Experimentation on various BNNs for regression and classification indicates comparable or improved predictive performance, and considerable improvement in model calibration, uncertainty estimation and mixing when compared against existing stochastic inference methods.
Figure 7: Entropy histograms comparing SGLD and the Boomerang sampler fit on the CIFAR-10 dataset. OOD data is represented by SVHN. We see the predictive entropy from the Boomerang sampler increases as desired for OOD data, whilst SGLD remains overly confident on erroneous samples.
2306.14375 | Interpretable Sparsification of Brain Graphs: Better Practices and
Effective Designs for Graph Neural Networks | Brain graphs, which model the structural and functional relationships between
brain regions, are crucial in neuroscientific and clinical applications
involving graph classification. However, dense brain graphs pose computational
challenges including high runtime and memory usage and limited
interpretability. In this paper, we investigate effective designs in Graph
Neural Networks (GNNs) to sparsify brain graphs by eliminating noisy edges.
While prior works remove noisy edges based on explainability or task-irrelevant
properties, their effectiveness in enhancing performance with sparsified graphs
is not guaranteed. Moreover, existing approaches often overlook collective edge
removal across multiple graphs.
To address these issues, we introduce an iterative framework to analyze
different sparsification models. Our findings are as follows: (i) methods
prioritizing interpretability may not be suitable for graph sparsification as
they can degrade GNNs' performance in graph classification tasks; (ii)
simultaneously learning edge selection with GNN training is more beneficial
than post-training; (iii) a shared edge selection across graphs outperforms
separate selection for each graph; and (iv) task-relevant gradient information
aids in edge selection. Based on these insights, we propose a new model,
Interpretable Graph Sparsification (IGS), which enhances graph classification
performance by up to 5.1% with 55.0% fewer edges. The retained edges identified
by IGS provide neuroscientific interpretations and are supported by
well-established literature. | Gaotang Li, Marlena Duda, Xiang Zhang, Danai Koutra, Yujun Yan | 2023-06-26T01:37:10Z | http://arxiv.org/abs/2306.14375v1 | # Interpretable Sparsification of Brain Graphs:
###### Abstract.
Brain graphs, which model the structural and functional relationships between brain regions, are crucial in neuroscientific and clinical applications involving graph classification. However, dense brain graphs pose computational challenges including high runtime and memory usage and limited interpretability. In this paper, we investigate effective designs in Graph Neural Networks (GNNs) to sparsify brain graphs by eliminating noisy edges. While prior works remove noisy edges based on explainability or task-irrelevant properties, their effectiveness in enhancing performance with sparsified graphs is not guaranteed. Moreover, existing approaches often overlook collective edge removal across multiple graphs.
To address these issues, we introduce an iterative framework to analyze different sparsification models. Our findings are as follows: (i) methods prioritizing interpretability may not be suitable for graph sparsification as they can degrade GNNs' performance in graph classification tasks; (ii) simultaneously learning edge selection with GNN training is more beneficial than post-training; (iii) a shared edge selection across graphs outperforms separate selection for each graph; and (iv) task-relevant gradient information aids in edge selection. Based on these insights, we propose a new model, Interpretable Graph Sparsification (IGS), which enhances graph classification performance by up to 5.1% with 55.0% fewer edges. The retained edges identified by IGS provide neuroscientific interpretations and are supported by well-established literature.
Graph Neural Networks; Interpretability; Graph Sparsification
Prior work related to graph sparsification generally falls into two categories. The first line of work learns the relative importance of the edges, which can be used to remove unimportant edges in the graph sparsification process. These works usually focus on interpretability explicitly, oftentimes referred to as "explainable graph neural networks (explainable GNNs)" (Krizhevsky et al., 2014). The core idea embraced by this community is to identify small subgraphs that are most accountable for model predictions. The relevance of the edges to the final predictions is encoded into an edge importance mask, a matrix that reveals the relative importance of the edges and can be used to sparsify the graphs. These works show good interpretability under various measures (Krizhevsky et al., 2014). However, it remains unclear whether better interpretability indicates better performance. The other line of work tackles unsupervised graph sparsification (Krizhevsky et al., 2014), without employing any label information. Some methods reduce the number of edges by approximating pairwise distances (Krizhevsky et al., 2014), cuts (Krizhevsky et al., 2015), or eigenvalues (Krizhevsky et al., 2016). These task-irrelevant methods may discard useful task-specific edges for predictions. Fewer works are task-relevant, primarily focusing on node classification (Krizhevsky et al., 2014; Krizhevsky et al., 2015). Consequently, these works produce different edge importance masks for each graph. However, in graph classification, individual masks can lead to significantly longer training time and susceptibility to noise. Conversely, a joint mask emerges as the preferred choice, offering robustness against noise and greater interpretability.
**This work.** To assess the quality of the sparsified graphs obtained from interpretable models in the graph classification task, we propose to evaluate the effectiveness of the sparsification algorithms under an iterative framework. At each iteration, the sparsification algorithms decide which edges to remove and feed the sparsified graphs to the next iteration. We measure the effectiveness of a sparsification algorithm by computing the accuracy of the downstream graph classification task at each iteration. An effective sparsification algorithm should acquire the ability to identify and remove noisy edges, resulting in a performance boost in the graph classification task after several iterations (Section 4.2).
We utilize this iterative framework to evaluate two common practices used in graph sparsification and graph explainability: (1) obtaining the edge importance mask from a trained model and (2) learning an edge importance mask for each graph individually (Krizhevsky et al., 2014). For instance, GNNExplainer (Krizhevsky et al., 2014) learns a separate edge importance mask for each graph **after** the model is trained. Through our empirical analysis, we find that these practices are **not helpful** in graph sparsification, as the sparsified graphs may lead to lower classification accuracy. In contrast, we identify three key strategies that can improve the performance. Specifically, we find that **(S1)** learning a **joint** edge importance mask **(S2) simultaneously with the training** of the model helps improve the performance over the iterations, as it passes task-relevant information through back-propagation. Another strategy to incorporate the task-relevant information is to **(S3)** initialize the mask with the gradient information from the immediate previous iteration. This strategy is inspired by the evidence in the computer vision domain that gradient information may encode data and task-relevant information and may contribute to the explainability of the model (Beng et al., 2016; Chen et al., 2017; Chen et al., 2017).
Based on the identified strategies, we propose a new **Interpretable** model for brain **G**raph **S**parsification, IGS. We evaluate our IGS model on real-world brain graphs under the iterative framework and find that it can benefit from iterative sparsification. IGS achieves up to 5.1% improvement on graph classification tasks with graphs of 55.0% fewer edges than the original compared to strong baselines.
Our main contributions are summarized as follows:
* **General framework.** We propose a general iterative framework to analyze the effectiveness of different graph sparsification models. We find that edge importance masks generated from interpretable models may not be suitable for graph sparsification because they may not improve the performance of graph classification tasks.
* **New insights.** We find that two practices commonly used in graph sparsification and graph explainability are not helpful under the iterative framework. Instead, we find that learning a joint edge importance mask along with the training of the model improves the classification performance during iterative graph sparsification. Furthermore, incorporating gradient information in mask learning also boosts the performance in iterative sparsification.
* **Effective model.** Based on the insights, we propose a new model, IGS, which can improve the performance (up to 5.1%) with significantly sparser graphs (up to 55.0% fewer edges).
* **Interpretability.** Our IGS model learns to remove task-irrelevant edges in the iterative process. The edges retained by IGS have neuroscientific interpretations and are supported by well-established literature.
## 2. Notation and preliminaries
In this section, we introduce key notations, provide a brief background on GNNs, and formally define the problem that we investigate.
**Notations.** We consider a set of graphs \(\mathcal{G}\). Each graph \(G_{i}(\mathcal{V},\mathcal{E}_{i})\in\mathcal{G}\) in this set has \(n\) nodes, and the corresponding node set and edge set are denoted as \(\mathcal{V}\) and \(\mathcal{E}_{i}\), respectively. The graphs share the same set of nodes. The set of neighboring nodes of node \(v\) is denoted as \(\mathcal{N}_{v}\). We focus on the setting where the input graphs are weighted, and we represent the weighted adjacency matrix of each input graph \(G_{i}\) as \(\mathbf{A}_{i}\in\mathbb{R}^{n\times n}\). The node features in \(G_{i}\) are represented by a matrix \(\mathbf{X}_{i}\in\mathbb{R}^{n\times d}\), where its \(j\)-th row \(\mathbf{X}_{i}[j,:]\) represents the features of the \(j\)-th node, and \(d\) refers to the dimensionality of the node features. For conciseness, we use \(\mathbf{X}_{i}^{(I)}\) to represent the node representations/output at the \(l\)-th layer of a GNN. Given our emphasis on graph classification problems, we denote the number of classes as \(k\), the set of labels as \(\mathcal{Y}\), and associate each graph \(G_{i}\) with a corresponding label \(y_{i}\in\mathcal{Y}\).
We also leverage gradient information (Zhu et al., 2017) in this work: \(\nabla f_{j}(G_{i})\) denotes the gradients of the output in class \(j\) with respect to the input graph \(G_{i}\). These gradients are obtained through backpropagation and are referred to as the gradient map.

\begin{table}
\begin{tabular}{c c c}
\hline \hline
 & PicVocab & ReadEng \\
\hline
Original & 52.74\(\pm\)3.37 & 55.43\(\pm\)3.31 \\
Direct thresholding & 52.03\(\pm\)5.31 & 54.83\(\pm\)3.19 \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Brain graph classification performance (accuracy) on the original graphs (Original) and sparsified graphs (Direct thresholding). Direct thresholding may keep unimportant edges. Details about the data and experimental setup can be found in Section 4.1.
**Supervised Graph Classification.** Given a set of graphs \(\{G_{1},G_{2},\cdots,G_{t}\}\) and their labels \(\{y_{1},y_{2},\cdots,y_{t}\}\) for training, we aim to learn a function \(f\colon\mathcal{G}\xrightarrow{}\mathcal{Y}\), such that the loss \(\mathbb{E}(\mathcal{L}(y_{i},\hat{y_{i}}))\) is minimized, where \(\mathbb{E}\) denotes expectation, \(\mathcal{L}\) denotes a loss function, and \(\hat{y_{i}}=f(G_{i})\) denotes the predicted label of \(G_{i}\).
**GNNs for Graph Classification.** An \(L\)-layer GNN model ((6; 69; 70; 67; 71; 60)) often follows the message-passing framework, which consists of three components ((21)): (1) neighborhood propagation and aggregation: \(\mathbf{m}_{v}^{(l)}=\textsc{Aggregate}(\{\mathbf{X}_{i}^{(l)}[u,:],u\in\mathcal{N}_{v}\})\); (2) combination: \(\mathbf{X}_{i}^{(l+1)}[v,:]=\textsc{Combine}(\mathbf{X}_{i}^{(l)}[v,:],\mathbf{m}_{v}^{(l)})\), where Aggregate and Combine are learnable functions; (3) global pooling: \(\mathbf{x}^{G_{i}}=\textsc{Pooling}(\mathbf{X}_{i}^{(L)})\), where the Pooling function operates on all node representations, with options like Global_mean, Global_max or other complex pooling functions ((73; 73)). The loss is given by \(L=\frac{1}{N^{G}}\sum_{G_{i}\in\mathcal{G}_{\text{train}}}\textsc{CrossEntropy}(\textsc{Softmax}(\mathbf{x}^{G_{i}}),y_{i})\), where \(\mathcal{G}_{\text{train}}\) represents the set of training graphs and \(N^{G}=|\mathcal{G}_{\text{train}}|\). Though our framework does not rely on specific GNNs, we illustrate the effectiveness of our framework using the GCN model proposed in ((35)).
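As a minimal illustration of this pattern, the following generic sketch (not the specific layer of any cited model) shows the aggregate/combine/pool steps:

```python
import torch

def message_passing_layer(A, X, W):
    """One AGGREGATE/COMBINE step: sum messages through the (weighted)
    adjacency A, combine with each node's own features, then transform."""
    M = A @ X                        # neighborhood aggregation
    return torch.relu((X + M) @ W)   # combine and update

def global_mean_pool(X_final):
    """Global mean pooling over node representations for graph classification."""
    return X_final.mean(dim=0)
```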
The performance of GNN models heavily depends on the quality of the input graphs. Messages propagated through noisy edges can significantly affect the quality of the learned representations ((70)). Inspired by this observation, we focus on the following problem:
**Problem: Interpretable, Task-relevant Graph Sparsification.**
Given a set of input graphs \(\mathcal{G}=\{G_{1},G_{2},\cdots,G_{t}\}\) and the corresponding labels \(\mathcal{J}=\{y_{1},y_{2},\cdots,y_{t}\}\), we seek to learn a set of graph-specific edge importance masks \(\{\mathcal{M}_{1},\mathcal{M}_{2},\cdots,\mathcal{M}_{t}\}\in\{0,1\}^{n\times n}\), **OR** a joint edge importance mask \(\mathcal{M}\in\{0,1\}^{n\times n}\) shared by all graphs, which can be used to remove the noisy edges and retain the most task-relevant ones. This should lead to enhanced classification performance on sparsified graphs. Edge masks that effectively identify task-relevant edges are considered to be interpretable.
## 3. Proposed Method: IGS
In this section, we introduce our proposed iterative framework for evaluating various sparsification methods. Furthermore, we introduce IGS, a novel and interpretable graph sparsification approach that incorporates three key strategies: (S1) joint mask learning, (S2) simultaneous learning with the GNN model, and (S3) utilization of gradient information. We provide detailed explanations of these strategies in the following subsections.
### Iterative Framework
Figure 1 illustrates the general iterative framework. At a high level, given a sparsification method, our framework iteratively removes unimportant edges based on the edge importance masks generated by the method at each iteration. In detail, the method can generate either a separate edge importance mask \(\mathcal{M}_{i}\) for each input graph \(G_{i}\) or a joint edge importance mask \(\mathcal{M}\) shared by all input graphs \(\mathcal{G}=\{G_{1},G_{2},\cdots\}\). These edge importance masks indicate the relevance of edges to the task's labels. In our setting, we also allow training the masks simultaneously with the model. Ideal edge masks are binary, where zeros represent unimportant edges to be removed. In reality, many models (_e.g._, GNNs ((72; 76))) learn soft edge importance masks with values between [0,1]. In each iteration, our framework removes either the edges with zero values in the masks (if binary) or a fixed percentage \(p\) of edges with the lowest importance scores in the masks. We present the framework of iterative sparsification in Algorithm 1, where \(\mathcal{G}^{i}\) denotes the set of sparsified graphs at iteration \(i\), and \(G_{j}^{i}\) denotes the \(j\)-th graph in the set \(\mathcal{G}^{i}\).

Figure 1. General iterative framework of sparsification. This framework progressively eliminates noisy edges from input brain graphs by learning an edge importance mask for each/all graph(s). The edge importance mask(s) can be generated from a well-trained GNN model or trained simultaneously with a GNN model. Important edges are depicted in orange, while noisy edges are shown in grey. Dashed lines with purple crosses represent the removed edges in the sparsified graphs.
Though existing works ((28; 51)) have proposed different ways to define the "importance" of an edge and thus they generate different sparse graphs, we _believe that a direct and effective way to evaluate these methods is to track the performance of these sparsified graphs_ under this iterative framework. The trend of the performance reveals the relevance of the remaining edges to the predicted labels.
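A compact sketch of the iterative loop (Algorithm 1) is shown below, with `learn_mask` and `evaluate` standing in for the sparsification method and downstream classifier under study, and `prune_lowest` an illustrative helper:

```python
import numpy as np

def prune_lowest(adj, importance, p):
    """Zero out the fraction p of currently present edges with lowest importance."""
    present = adj != 0
    thresh = np.quantile(importance[present], p)
    return np.where(present & (importance <= thresh), 0.0, adj)

def iterative_sparsification(graphs, labels, learn_mask, evaluate, p=0.05, n_iters=55):
    """Algorithm 1 in outline: learn mask(s), drop the lowest-p edges, track accuracy."""
    history = []
    for _ in range(n_iters):
        mask = learn_mask(graphs, labels)                    # joint importance mask
        graphs = [prune_lowest(g, mask, p) for g in graphs]  # sparsify every graph
        history.append(evaluate(graphs, labels))             # accuracy this iteration
    return graphs, history
```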
### Strategies
#### 3.2.1. Trained Mask (S1+S2).
We aim to learn a joint edge importance mask \(\mathcal{M}\in\{0,1\}^{n\times n}\) along with the training of a GNN model, as shown in Figure 2. Each entry in \(\mathcal{M}\) indicates whether the corresponding edge in the original input graph should be kept (value 1) or not (value 0). Directly learning the discrete edge mask is hard as it cannot generate gradients to propagate back. Thus, at each iteration, we learn a soft version of \(\mathcal{M}\), where each entry is within \([0,1]\) and reflects the relative importance of each edge. Considering the symmetric nature of the adjacency matrix for undirected brain graphs, we require the learned edge importance mask to be symmetric. We design the soft edge importance mask as \(\sigma(\Phi^{T}+\Phi)\), where \(\Phi\) is a matrix to be learned and \(\sigma\) is the \(\mathtt{Sigmoid}\) function. A good initialization of \(\Phi\) can boost the performance and accelerate the training speed. Thus, we initialize this matrix with the gradient map (Section 3.2.2) from the previous iteration (Step 5 in Figure 2). Furthermore, following [72], we regularize the training of \(\Phi\) by requiring \(\sigma(\Phi^{T}+\Phi)\) to be sparse; thus we apply an \(l_{1}\) regularization on \(\sigma(\Phi^{T}+\Phi)\). In summary, we have the following training objective:
\[\min\mathcal{L}(f(\mathbf{A}\odot\sigma(\Phi^{T}+\Phi),\mathbf{X}),\mathcal{Y})+\lambda\sum_{ij}\sigma(\Phi^{T}+\Phi)_{ij} \tag{1}\]
where \(\odot\) denotes the Hadamard product; \(\mathcal{L}\) is the Cross-Entropy loss; \(\lambda\) is the regularization coefficient. We optimize the joint mask across all training samples in a batch-training fashion to achieve our objective of learning a shared mask. Subsequently, we convert this soft mask into an indicator matrix by assigning zero values to the lowest \(p\) percentage of elements:
\[\mathcal{M}[i,j]=\begin{cases}0&\text{if }\sigma(\Phi^{T}+\Phi)_{ij}\text{ in lowest }p\%\\ 1&\text{otherwise}\end{cases} \tag{2}\]
The indicator matrix \(\mathcal{M}\) can then be used to sparsify the input graph through an element-wise multiplication, _e.g._, \(G_{i}^{\prime}=\mathcal{M}\odot G_{i}\).

Figure 2. Training process of IGS. At iteration \(i\), IGS takes a set of input graphs and initializes its joint edge importance mask using the joint gradient map from the previous iteration. It trains the GNN model and the edge importance mask together, followed by sparsifying all input graphs using the obtained mask. Normal training is then conducted on the sparsified graphs. The gradient information is later extracted by computing a joint gradient map. Finally, IGS feeds the sparsified graphs to the next iteration and uses the joint gradient map to initialize the subsequent joint edge importance mask. IGS is model-agnostic and can be seamlessly integrated with existing GNN models.
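As a concrete sketch of Equations (1) and (2) in PyTorch style (the GNN `f` and its training loop are assumed, and the names are ours):

```python
import torch

class JointEdgeMask(torch.nn.Module):
    """Symmetric soft mask sigma(Phi^T + Phi) trained with the GNN (Eq. 1),
    binarised by dropping the lowest p fraction of entries (Eq. 2)."""
    def __init__(self, n):
        super().__init__()
        self.phi = torch.nn.Parameter(torch.empty(n, n))
        torch.nn.init.xavier_normal_(self.phi)

    def soft_mask(self):
        return torch.sigmoid(self.phi.T + self.phi)          # symmetric, in (0, 1)

    def loss(self, f, A, X, y, lam):
        m = self.soft_mask()
        logits = f(A * m, X)                                 # mask modulates A
        return torch.nn.functional.cross_entropy(logits, y) + lam * m.sum()

    def binarise(self, p):
        m = self.soft_mask().detach()
        return (m > torch.quantile(m.flatten(), p)).float()  # indicator mask M
```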
#### 3.2.2. Joint Gradient Information (S3)
Inspired by the evidence in the computer vision domain that gradient information may encode data- and task-relevant information and may contribute to the explainability of the model (Golovolovolov et al., 2013; Golovolov et al., 2013; Golovolov and LeCun, 2015), we utilize the gradient information, i.e., gradient maps, to initialize and guide the learning of the edge importance mask.
Step 4 in Figure 2 illustrates the general idea of generating a joint gradient map by combining the gradient information from each training graph. Each training graph \(G_{i}\) has \(k\) gradient maps \(\nabla f_{j}(G_{i})\), \(j=1,2,\cdots,k\), each corresponding to the output in class \(j\) (Section 2). Instead of using the "saliency maps" (Selig and Golovolov, 2015), which consider only the gradient map of the predicted class, we leverage all the gradient maps as they provide meaningful knowledge. For \(G_{1},\dots,G_{t}\in\mathcal{G}_{\text{train}}\), we compute the unified mask of class \(j\) as the sum of the absolute values of each gradient map, represented as
\[\bigcup f_{j}=\sum_{i=1}^{t}|\nabla f_{j}(G_{i})| \tag{3}\]
By summing the unified masks of all classes, we generate the joint edge gradient map denoted as \(\mathbf{T}=\sum_{j=1}^{k}\bigcup f_{j}\).
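In autograd terms, the joint gradient map can be accumulated as follows; this is a sketch in which `f(A, X)` is assumed to return the \(k\) class scores for a single graph.

```python
import torch

def joint_gradient_map(f, adjs, feats, k):
    """Accumulate T = sum_j sum_i |grad of f_j(G_i) w.r.t. A_i| (Eq. 3)."""
    total = 0.0
    for A, X in zip(adjs, feats):
        A = A.clone().requires_grad_(True)
        out = f(A, X)                                        # shape (k,)
        for j in range(k):
            (g,) = torch.autograd.grad(out[j], A, retain_graph=j < k - 1)
            total = total + g.abs()                          # sum of unified masks
    return total
```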
#### 3.2.3. Algorithm
We incorporate these three strategies into IGS and outline our method in Algorithm 2:
```
Input: Graph Dataset \(\mathcal{G}^{1}\), Training Set Index \(\mathbb{1}_{\text{Train}}\), Validation Set Index \(\mathbb{1}_{\text{Val}}\), Removing Percentage \(p\), Number of Iterations \(N\), GNN model, Regularization Coefficient \(\lambda\)
for \(i=1,\dots,N\) do
    // Step 1: GNN Training with Edge Importance Mask
    if \(i==1\) then
        Initialize \(\Phi\) using Xavier normal initialization.
    else
        Initialize \(\Phi\) using the previous joint gradient map \(\mathbf{T}^{(i)}\)
    \(\sigma(\Phi^{T}+\Phi)\leftarrow\) Train(GNN, \(\mathcal{G}^{i}\), \(\mathcal{Y}\), \(\mathbb{1}_{\text{Train}}\), \(\lambda\))  (Equation (1))
    Obtain joint Edge Importance Mask \(\mathcal{M}\) following Equation (2)
    // Step 2: Sparsification
    \(G_{j}^{i+1}\leftarrow\mathcal{M}\odot G_{j}^{i}\), \(\forall j\)
    // Step 3: Normal Training with Sparsified Graphs
    Validation loss \(L^{i}\), GNN_Trained \(\leftarrow\) Train&Val(GNN, \(\mathcal{G}^{i+1}\), \(\mathcal{Y}\), \(\mathbb{1}_{\text{Train}}\), \(\mathbb{1}_{\text{Val}}\))
    // Step 4: Leveraging Gradient Information
    \(\mathbf{T}^{(i+1)}\leftarrow\) JointGradient(GNN_Trained, \(\mathcal{G}^{i+1}\), \(\mathbb{1}_{\text{Train}}\))
Output: \(\mathcal{G}^{i}\) with smallest \(L^{i}\)
```
**Algorithm 2** Interpretable Graph Sparsification: IGS
## 4. Empirical Analysis
In this section, we aim to answer the following research questions using our iterative framework: (Q1) Is learning a joint edge importance mask better than learning a separate mask for each graph? (Q2) Does simultaneous training of the edge importance mask with the model yield better performance than training the mask separately from the trained model? (Q3) Does the gradient information help with graph sparsification? (Q4) Is our method IGS interpretable?
### Setup
#### 4.1.1. Dataset
We use the WU-Minn Human Connectome Project (HCP) 1200 Subjects Data Release as our benchmark dataset to evaluate our method and baselines (Krizhevsky et al., 2015). The pre-processed brain graphs can be obtained from ConnectomeDB (Krizhevsky et al., 2015). These brain graphs are derived from the resting-state functional magnetic resonance imaging (rs-fMRI) of 812 subjects, where no explicit task is being performed. Predictions using rs-fMRI are generally harder than task-based fMRI (Wang et al., 2017). The obtained brain graphs are fully connected, and the edge weights are computed from the correlation of the rs-fMRI time series between each pair of brain regions (Selig and Golov, 2015). The parcellation of the brain is completed using Group-ICA with 100 components (Golovolov et al., 2013; Golovolov et al., 2013; Golovolov et al., 2013; Golovolov et al., 2013; Golov et al., 2013; Golov et al., 2013), which results in 100 brain regions comprising the nodes of our brain graphs. Additionally, a set of cognitive assessments were performed on each subject, which we utilized as cognitive labels in our prediction tasks. Specifically,
we utilize the scores from the following cognitive domains as our labels, which incorporate age adjustment (Kutra et al., 2018):
* PicVocab (Picture Vocabulary) assesses language/vocabulary comprehension. The respondent is presented with an audio recording of a word and four photographic images on the computer screen and is asked to select the picture that most closely matches the word's meaning.
* ReadEng (Oral Reading Recognition) assesses language/reading decoding. The participant is asked to read and pronounce letters and words as accurately as possible. The test administrator scores them as right or wrong.
* PicSeq (Picture Sequence Memory) assesses episodic memory. It involves recalling an increasingly lengthy series of illustrated objects and activities presented in a particular order on the computer screen.
* ListSort (List Sorting) assesses working memory and requires the participant to sequence different visually- and orally-presented stimuli.
* CardSort (Dimensional Change Card Sort) assesses cognitive flexibility. Participants are asked to match a series of bivalent test pictures (e.g., yellow balls and blue trucks) to the target pictures, according to color or shape. Scoring is based on a combination of accuracy and reaction time.
* Flanker (Flanker Task) measures a participant's attention and inhibitory control. The test requires the participant to focus on a given stimulus while inhibiting attention to stimuli flanking it. Scoring is based on a combination of accuracy and reaction time.
More details can be found in ConnectomeDB (Kutra et al., 2018). These scores are continuous. In order to use them for graph classification, we assign the subjects achieving scores in the top third to the first class and the ones in the bottom third to the second class.
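Concretely, this binarisation keeps only the outer terciles of each age-adjusted score; a minimal sketch:

```python
import numpy as np

def binarise_scores(scores):
    """Top third -> class 1, bottom third -> class 0; the middle third is dropped."""
    lo, hi = np.quantile(scores, [1 / 3, 2 / 3])
    keep = (scores <= lo) | (scores >= hi)
    return keep, (scores[keep] >= hi).astype(int)
```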
#### 4.1.2. Baselines
We outline the baselines used in our experiments.
Grad-Indi (Brasel et al., 2017). This method obtains the edge importance mask for each individual graph from a trained GNN model. In contrast to the gradient information (Strategy S3) proposed in Section 3.2.2, a gradient map of each sample is generated for the predicted class \(C_{i}\): \(\mathbf{T}_{i}=\nabla f_{C_{i}}(G_{i})\odot\nabla f_{C_{i}}(G_{i})\) (Brasel et al., 2017). Later, the edge importance mask \(\mathcal{M}_{i}\) for \(G_{i}\) is generated based on Equation (2).

Grad-Joint. We adapt Grad-Indi (Brasel et al., 2017) to incorporate our proposed strategies (S1+S3) and learn an edge importance mask _shared by all graphs_ from a _trained_ GNN model. Specifically, we leverage the method described in Section 3.2.2 that generates the joint gradient map to obtain the joint importance mask.

Grad-Trained. We further modify Grad-Indi (Brasel et al., 2017) to train the joint edge mask concurrently with the GNN training (S2). We also use the joint gradient map (Section 3.2.2) to initialize the edge importance mask (Strategies S1+S2+S3). The main differences of Grad-Trained from IGS are that: (1) it does not require symmetry of the edge mask; (2) it does not require edge mask sparsity (no \(l_{1}\) regularization).

GNNExplainer-Indi (Fan et al., 2018). This method trains an edge importance mask for each individual graph after the GNN model is trained. We follow the code provided by (Kutra et al., 2018).

GNNExplainer-Joint. Adapted from (Fan et al., 2018), this model trains a _joint edge importance mask for all graphs_ (Strategy S1).

GNNExplainer-Trained. Adapted from (Fan et al., 2018), this method simultaneously trains a _joint edge importance mask_ and the GNN model (Strategies S1+S2). Compared with IGS, this method does not use gradient information.

BrainNNExplainer (Kutra et al., 2018). This method (also known as IBGNN) trains a _joint edge importance mask for all graphs after the GNN is trained_. It is slightly different from GNNExplainer-Joint in terms of objective functions. We follow the original setup in (Kutra et al., 2018).

BrainGNN (Shen et al., 2018). This method does not explicitly perform graph sparsification, but _uses node pooling to identify important subgraphs_. It learns to preserve important nodes and all the connections between them. We follow the original setup in (Shen et al., 2018).
#### 4.1.3. Training Setup
To fairly evaluate different methods under the iterative framework, we adopt the same GNN architecture (Kutra et al., 2018), hyper-parameter settings, and training framework. We set the number of convolutional layers to four, the dimension of the hidden layers to 256, the dropout rate to 0.5, the batch size to 16, the optimizer to Adam, the learning rate to 0.001, and the regularization coefficient \(\lambda\) to 0.0001. Note that though we use the GNN from (Kutra et al., 2018), IGS is model-agnostic, and we provide the results of other backbone GNNs in Table 4. For each prediction task, we shuffle the data and take four different data splits. The train/val/test split is 0.7/0.15/0.15. To reduce the influence of imbalances, we manually ensure each split has a balanced label distribution. In each iteration, we adopt early stopping (Shen et al., 2018) and set the patience to 100 epochs. We stop training if we cannot observe a decrease in validation loss in the latest 100 epochs. We fix the removing ratio \(p\%\) to 5% per iteration. In the iterative sparsification, we run a total of 55 iterations and use the validation loss of the sparsified graphs as the criterion to select the best iteration (Step 3 in Figure 2). We present the average and standard deviation of test accuracies over four splits, using the model obtained from the best iteration. **The code is available at [https://github.com/motivations/IGS.git](https://github.com/motivations/IGS.git)**.
### (Q1-Q3) Graph Classification under the Iterative Framework
In Table 2, we present the results of IGS with the eight baselines mentioned in section 4.1.2. The first row represents the prediction task we study; the second row represents the performance averaged across four different splits using the original graph; and the rest of the rows denote the performance of other baselines. Notably, for better comparison across different baselines, the last column shows the average rank of each method. Below we present our observations from Table 2:
First, learning a joint mask contributes to a better performance than learning a mask for each graph separately. We can start by comparing the performance between GNNExplainer-Joint and GNNExplainer-Indi, as well as Grad-Joint and Grad-Indi. The performance disparity between the methods in each pair is notable and consistent across all prediction tasks. Notably, Grad-Joint (rank: 4.33) outperforms Grad-Indi (rank: 7.67) by a considerable margin, while GNNExplainer-Joint (rank: 4.67) ranks significantly
higher than GNNExplainer-Indi (rank: 8.33). Using a joint mask instead of individual masks can provide up to 6.7% boost in accuracy, validating our intuition in section 3.2.2 that a joint mask is more robust to sample-wise noise.
Second, training the mask and the GNN model simultaneously yields better results than obtaining the mask from the trained model. We can see this by comparing the performance between the Trained and the Joint variants of Grad and GNNExplainer. Changing from post-training to joint-training can provide up to 3.4% performance improvements, as demonstrated in the ReadEng task by the two variants of GNNExplainer. Even though in some tasks the post-training approach may outperform the trained approach (_e.g._ Grad-Joint and Grad-Trained in the PieVocab task), the trained approach has a higher average rank than the post-training approach (_e.g._ 3.83 vs. 4.33 for Grad and 3.17 vs. 4.67 for GNNExplainer). In addition, the better performance of IGS over BrainNNExplainer also demonstrates the effectiveness of obtaining the edge mask during training rather than after training.
Third, incorporating gradient information helps improve classification performance. We can see this by first comparing the performance of Grad-Joint and Grad-Trained against the original graphs. The use of gradient information can provide up to 5.1% higher accuracy, though the improvement depends on the task. Furthermore, since the main difference between GNNExplainer-Trained and IGS lies in the use of gradient information, the consistent superior performance of IGS strengthens this conclusion.
Fourth, we compare the performance of the baselines against the performance of the original graphs (second row). Grad-Indi (Gardard et al., 2017) and GNNExplainer-Indi (Srivastava et al., 2017) are implementations that faithfully follow their original formulation or are provided directly by the authors. These two approaches fail to achieve any performance improvements through iterative sparsification, with the exception of Grad-Indi on the PicVocab and ReadEng tasks. This raises the question of whether these existing instance-level approaches can identify the most meaningful edges in noisy graphs; they may be vulnerable to severe sample-wise noise. On the contrary, with our suggested modifications, the joint and trained versions can remove the noise and provide up to a 5.1% performance boost compared to the base GCN method applied to the original graphs. However, the improvement is dataset-dependent. For instance, GNNExplainer-Trained provides decent performance boosts in PicVocab, ReadEng, and Flanker, but degrades in PicSeq, ListSort, and CardSort.
Finally, our proposed approach, IGS, achieves the **best** performance across all prediction tasks, as demonstrated by its highest rank among all methods. Compared with the performance on the original graphs, IGS provides a consistent performance boost across all prediction tasks, with the exception of ListSort, a challenging task on which no baseline surpasses the original performance. Furthermore, using the sparsified graphs identified by IGS generally results in less variance in accuracy and leads to better stability compared to the original graphs, with the exception of the PicSeq task. In addition, the superior performance of IGS over BrainGNN demonstrates the effectiveness of using edge importance masks as opposed to node pooling.
_Graph Sparsity._ In Table 3, we present the final average sparsity of the graphs obtained by IGS over the four data splits. We observe that, with significantly fewer edges retained, IGS can still achieve up to a 5.1% performance boost.
### (Q4) Interpretability of IGS
We now evaluate the interpretability of the edge masks derived for each of our prediction tasks.
_Setup._ We assign anatomical labels to each of the 100 components comprising the nodes of our brain networks by computing the largest overlap with regions identified in the Cole-Anticevic parcellation [30].
| Method | PicVocab | ReadEng | PicSeq | ListSort | CardSort | Flanker | Average Rank |
| --- | --- | --- | --- | --- | --- | --- | --- |
| GCN (Original Graphs) | 52.7±3.77 | 55.4±3.53 | 51.9±2.18 | 52.1±2.53 | 56.6±6.93 | 48.91±5.83 | - |
| Grad-Indi | 53.4±1.65 | 53.7±9.08 | 49.3±3.71 | 48.7±6.94 | 46.9±4.65 | 50.7±2.76 | 7.67 |
| Grad-Joint | 57.8±3.54 | 58.2±2.38 | 50.1±1.87 | 48.9±5.19 | 52.4±5.02 | 51.5±5.94 | 4.33 |
| Grad-Trained | 55.5±3.29 | 60.0±1.38 | 49.5±4.12 | 50.2±2.23 | 56.37±6.76 | 51.6±6.43 | 3.83 |
| GNNExplainer-Indi | 49.7±3.58 | 55.3±4.08 | 48.9±1.29 | 44.8±2.53 | 52.1±2.88 | 47.3±1.58 | 8.33 |
| GNNExplainer-Joint | 56.4±4.794 | 55.8±3.73 | 52.0±2.84 | 50.1±2.01 | 53.5±5.82 | 50.3±5.81 | 4.67 |
| GNNExplainer-Trained | 56.8±3.10 | 59.2±2.56 | 51.4±3.53 | 51.2±2.01 | 56.0±4.71 | 50.9±2.01 | 3.17 |
| BrainNNExplainer | 57.0±3.77 | 55.7±3.50 | 50.3±4.67 | 49.8±4.47 | 52.4±5.30 | 50.9±3.03 | 4.83 |
| BrainGNN | 53.0±3.23 | 47.5±5.20 | 50.7±3.13 | 50.9±3.13 | 50.1±1.12 | 49.0±2.27 | 6.67 |
| IGS | **57.8±3.16** | **60.1±2.78** | **53.0±4.66** | **51.8±2.12** | **57.0±5.49** | **52.1±1.97** | 1.00 |

Table 2. Test accuracies of the different approaches on the six prediction tasks (PicVocab, ReadEng, PicSeq, ListSort, CardSort, and Flanker) across four data splits generated by different random seeds; we report the mean and standard deviation of each. The first row denotes the performance of GCN (Srivastava et al., 2017) on the original graphs; the last column denotes the average rank of each method. The best result is marked in bold.
| | PicVocab | ReadEng | PicSeq | ListSort | CardSort | Flanker |
| --- | --- | --- | --- | --- | --- | --- |
| Sparsity (%) | 22.5 | 35.5 | 35.5 | 30.0 | 25.0 | 25.0 |

Table 3. Final sparsity of the sparsified brain graphs identified by IGS, averaged over the different splits. The initial sparsity is 50% by thresholding; IGS can remove more than half of the edges while achieving up to a 5.1% performance boost.
We then obtain the edge masks from the best-performing iteration of each prediction task and assess the highest-weighted edges in each mask.
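A small sketch of this assignment under assumed array shapes: each component receives the label of the parcellation network with which it shares the most voxels.

```python
import numpy as np

def assign_labels(components, parcellation, n_networks):
    """components: (n_comp, n_voxels) binary maps; parcellation: (n_voxels,) ints."""
    labels = []
    for comp in components:
        overlaps = [np.sum((parcellation == net) & (comp > 0))
                    for net in range(n_networks)]
        labels.append(int(np.argmax(overlaps)))  # network with the largest overlap
    return labels
```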
_Results._ Since our IGS model performed best in the language-related prediction tasks, ReadEng and PicVocab, we focus our interpretability analysis on this domain. There is ample evidence in the neuroscience literature that supports the existence of an intrinsic language network that is perceptible during resting state [11, 36, 59]; thus, it is unsurprising that our rs-fMRI-based brain networks are predictive of language task performance. It has also been well established for over a century that the language centers (including Broca's area, Wernicke's area, the angular gyrus, etc.) are characteristically left-lateralized in the brain [12, 65]. In both ReadEng and PicVocab, the majority of the highest-weighted edges retained in the masks involved brain regions localized to the left hemisphere, falling in line with the expectations for a language task.
_PicVocab._ Figures 3 and 4 depict the progression of the edge masks at both the node and subnetwork level over the training iterations towards the optimal edge mask for the PicVocab and ReadEng tasks. Evaluating the edge masks at the subnetwork level offers valuable insights into which functional connections are most important for the prediction of each task. The PicVocab edge mask homed in on functional connections involving the Cingulo-Opercular (CO) network, specifically between CO and the Dorsal Attention (DA), Visual1 (V1), Visual2 (V2), and Frontoparietal (FP) networks. The CO network has been shown to be implicated in word recognition [60], and its synchrony with the other brain networks identified here may represent the stream of neural processing related to the PicVocab task, in which subjects respond to an auditory stimulus of a word and are prompted to choose the image that best represents the word. Connectivity between the Auditory (AD) and V2 networks is also evident in the PicVocab edge mask, suggesting that the upstream integration of auditory and visual stimuli involved in the PicVocab task is also predictive of task performance.
_ReadEng._ The IGS model also found edge mask connections between the V1 network and the CO, Language (LA), and DA networks, as well as CO-LA and CO-AD connections, to be most predictive of ReadEng performance. This task involves the subject reading aloud words presented on a screen. From our results, it follows that the ability of V1 to integrate with networks responsible for language processing (LA and CO) and attention (DA), as well as the capacity for functional synchrony between the language-related networks (CO-LA), would be predictive of overall ReadEng performance. The importance of the additional CO-AD connectivity identified by our model also suggests that the ability of the CO network to integrate with auditory centers may be involved in the neural processes responsible for the proper pronunciation of words given visual cues.
Figure 3. Weighted brain network edge masks at both the node (top row) and subnetwork level (bottom row, computed as the average of the corresponding edges) for the PicVocab task. Early, middle, and final phases of training are depicted from left to right, and high-importance subnetworks are highlighted in red. We find that IGS gradually removes noisy edges, and its final edge importance mask provides high-quality interpretations. Highlighted (orange) label names represent the regions that are meaningful for this task. Brain network labels and abbreviations: Auditory (AD), Cingulo-Opercular (CO), Dorsal Attention (DA), Default (DF), Frontoparietal (FP), Language (LA), Somatomotor (SO), Visual 1 (V1), Visual 2 (V2).
_Key take-aways._ Overall, in addition to the IGS model's superior classification performance, our results suggest that the iterative pruning of the IGS edge masks during training does indeed retain important and neurologically meaningful edges while removing noisy connections. While it has been shown in the literature that resting-state connectivity can be used to predict task performance (Han et al., 2017; Wang et al., 2018; Wang et al., 2019), the ability of the IGS model to sparsify the resting-state brain graph down to clearly task-relevant edges for the prediction of task performance further underscores the interpretability of the resultant edge masks.
## 5. Related Work
### Graph Explainability
Our work is related to explainable GNNs, given that we identify important edges/subgraphs that account for the model predictions. Some explainable GNNs are "perturbation-based", where the goal is to investigate the relation between output and input variations. GNNExplainer (GNNExplainer, 2017) learns a soft mask for the nodes and edges, which explains the predictions of a well-trained GNN model. SubgraphX (Xu et al., 2018) explains predictions by efficiently exploring different subgraphs with Monte Carlo tree search. Another approach to explainable GNNs is surrogate-based; methods in this category generally construct a simple, interpretable surrogate model to approximate the output of the original model in certain neighborhoods (Xu et al., 2018). For instance, GraphLime (Grapel et al., 2017) considers the N-hop neighboring nodes of the target node and then trains a nonlinear surrogate model to fit the local neighborhood predictions; RelEx (Xu et al., 2018) first uses a GNN to fit BFS-generated datasets and then generates soft masks to explain the predictions; PGM-Explainer (Xu et al., 2018) generates local datasets based on the influence of randomly perturbing the node features, shrinks the size of the datasets via the Grow-Shrink algorithm, and employs a Bayesian network to fit the datasets. In general, most of these methods focus on the node classification task and generate explanations for a single graph, which is not applicable to our setting. Others only apply to simple graphs and cannot handle signed, weighted brain graphs (Grapel et al., 2017; Xu et al., 2018). Additionally, most methods generate explanations after a GNN is trained. Though some methods achieve decent results on explainability-related metrics (_e.g._, fidelity scores (Wang et al., 2018)), it remains unclear whether their explanations can remove noise and retain the "important" part of the original graph so as to improve classification accuracy.
### Graph Sparsification
Compared to explainable GNN methods, graph sparsification methods explicitly aim to sparsify graphs. Most of the existing methods are unsupervised (Xu et al., 2018). Conventional methods reduce the size of the graph by approximating pairwise distances (Wang et al., 2018), preserving various kinds of graph cuts (Wang et al., 2018) or node degree distributions (Golovolov et al., 2016; Wang et al., 2018), and using graph-spectrum-based approaches (Beng et al., 2016; Wang et al., 2018; Wang et al., 2019). These methods aim at preserving the structural information of the original input graph without using label information, and they assume that the input graph is unweighted. Relatively few supervised methods have been proposed. For example, NeuralSparse (Xu et al., 2018) builds a parametrized network to learn a k-neighbor subgraph by limiting each node to at most \(k\) edges. On top of NeuralSparse, PTDNet (Wang et al., 2018) removes the k-neighbor assumption and instead employs a low-rank constraint on the learned subgraph to discourage edges connecting multiple communities. Graph Condensation (Golovolov et al., 2016) parameterizes the condensed graph structure as a function of condensed node features and optimizes a gradient-matching training objective. Despite the new insights offered by these methods, most of them focus exclusively on node classification, and their training objectives are built on top of that task. A work that shares similarity with our proposed method, IGS, is BrainNNExplainer (Grapel et al., 2017) (also known as IBGNN). It is inspired by GNNExplainer (GNNExplainer, 2017) and obtains a joint edge mask in a post-training fashion. In contrast, our proposed method, IGS, trains a joint edge mask along with the backbone model and incorporates gradient information in an iterative manner. Another line of work leverages node pooling to identify important subgraphs, learning to preserve important nodes and all the connections between them; one representative work is BrainGNN (Wang et al., 2018). However, the connections between preserved nodes are not necessarily all informative, and some may contain noise.
### Saliency Maps
Saliency maps were first proposed to explain deep convolutional neural networks in image classification tasks (Wang et al., 2018): the gradients backpropagated from the predicted class serve as the explanation. Recently, (Xu et al., 2018) introduced the concept of saliency maps to graph neural networks, employing squared gradients to explain the underlying model, and (Xu et al., 2018) suggested using graph saliency to identify regions of interest (ROIs). In general, the gradients backpropagated from the output logits can serve as importance indicators for model predictions. In this work, inspired by this line of saliency-related research, we leverage gradient information to guide our model.
## 6. Conclusions
In this paper, we studied neural-network-based graph sparsification for brain graphs. By introducing an iterative sparsification framework, we identified several effective strategies for GNNs to filter out noisy edges and improve the graph classification performance. We combined these strategies into a new interpretable graph classification model, IGS, which improves the graph classification performance by up to 5.1% with 55% fewer edges than the original graphs. The retained edges identified by IGS provide neuroscientific interpretations and are supported by well-established literature.
###### Acknowledgements.
We thank the anonymous reviewers for their constructive feedback. This material is based upon work supported by the National Science Foundation under IIS 2212143, CAREER Grant No. IIS 1845491, a Precision Health Investigator Award at the University of Michigan, and AWS Cloud Credits for Research. Data were provided [in part] by the Human Connectome Project, WU-Minn Consortium (PIs: D. Van Essen and K. Ugurbil; 1U54MH091657) funded by the 16 NIH Institutes and Centers that support the NIH Blueprint for Neuroscience Research; and by the McDonnell Center for Systems Neuroscience at Washington University. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or other funding parties.
|
2306.04600 | Uncovering solutions from data corrupted by systematic errors: A
physics-constrained convolutional neural network approach | Information on natural phenomena and engineering systems is typically
contained in data. Data can be corrupted by systematic errors in models and
experiments. In this paper, we propose a tool to uncover the spatiotemporal
solution of the underlying physical system by removing the systematic errors
from data. The tool is the physics-constrained convolutional neural network
(PC-CNN), which combines information from both the system's governing equations
and data. We focus on fundamental phenomena that are modelled by partial
differential equations, such as linear convection, Burgers' equation, and
two-dimensional turbulence. First, we formulate the problem, describe the
physics-constrained convolutional neural network, and parameterise the
systematic error. Second, we uncover the solutions from data corrupted by large
multimodal systematic errors. Third, we perform a parametric study for
different systematic errors. We show that the method is robust. Fourth, we
analyse the physical properties of the uncovered solutions. We show that the
solutions inferred from the PC-CNN are physical, in contrast to the data
corrupted by systematic errors that does not fulfil the governing equations.
This work opens opportunities for removing epistemic errors from models, and
systematic errors from measurements. | Daniel Kelshaw, Luca Magri | 2023-06-07T17:04:36Z | http://arxiv.org/abs/2306.04600v2 | # Uncovering solutions from data corrupted by systematic errors: A physics-constrained convolutional neural network approach
###### Abstract
Information on natural phenomena and engineering systems is typically contained in data. Data can be corrupted by systematic errors in models and experiments. In this paper, we propose a tool to uncover the spatiotemporal solution of the underlying physical system by removing the systematic errors from data. The tool is the physics-constrained convolutional neural network (PC-CNN), which combines information from both the system's governing equations and data. We focus on fundamental phenomena that are modelled by partial differential equations, such as linear convection, Burgers' equation, and two-dimensional turbulence. First, we formulate the problem, describe the physics-constrained convolutional neural network, and parameterise the systematic error. Second, we uncover the solutions from data corrupted by large multimodal systematic errors. Third, we perform a parametric study for different systematic errors. We show that the method is robust. Fourth, we analyse the physical properties of the uncovered solutions. We show that the solutions inferred from the PC-CNN are physical, in contrast to the data corrupted by systematic errors that does not fulfil the governing equations. This work opens opportunities for removing epistemic errors from models, and systematic errors from measurements.
## 1 Introduction
Model estimates and experimental measurements can be affected by systematic errors, which can arise for a number of reasons: faulty experimental sensors [1], or low-fidelity numerical methods, in which the closure of nonlinear multiscale equations can introduce bias, to name a few. The detection and removal of systematic errors has numerous applications, ranging from correcting biased experimental observations to enhancing results obtained from low-fidelity simulations [2]. In the field of fluid dynamics, the experimental measurement of a flow field is an inherently challenging process. Measurement techniques are often limited given the sensitivity of the flow to immersed sensors and probes, the introduction of which can fundamentally alter the intended behaviour of the system [3]. In many cases, non-intrusive methods such as particle image velocimetry are preferred, the results of which provide no guarantee of satisfying the underlying physical laws. Removing the systematic errors present in these observations would yield the underlying solution of the system. Approaches for recovering a divergence-free field exist, but focus predominantly on filtering out small quantities of stochastic noise [4, 5].
Irrespective of the research domain, a contemporary issue in the field of modelling and simulation is the computational expense of running high-fidelity simulations [6]. In many cases, practitioners rely on lower-fidelity methods, accepting assumptions or approximations, which introduce degrees of model error.
We posit that the obtained low-fidelity state can be considered a corrupted observation, because it is the underlying solution of the system subjected to a form of systematic error. Providing a mapping from the corrupted state to the underlying solution would allow the inference of high-fidelity results given only low-fidelity data. In practice, we observe that verifying that observations are characteristic of a given system is more straightforward than determining the solution itself: the former requires ensuring that the governing equations are satisfied, whilst the latter requires a method to produce a candidate solution. The overarching goal of this paper is to design a method to remove large-amplitude systematic errors from data to uncover the true solution of the partial differential equation.
Systematic error removal in the literature typically considers the case of aleatoric noise removal, a term attributed to methods that remove small, unbiased stochastic variations from the underlying solution. This has been achieved through various methods, including filtering methods [7]; proper orthogonal decomposition [8, 9]; and the use of autoencoders [10]. Super-resolution can be considered a form of systematic error removal because it maps low-resolution observations to their equivalent high-resolution representations. In this case, the super-resolution task seeks to introduce additional information that is not present in the original observation. This resolves the solution at a higher spatial frequency, which is a subtly different task from removing errors from observations. Notable works in this area include methods based on a library of examples [11] and sparse representation in a library [12]. More recent work has employed machine learning methods, including methods based on convolutional neural networks [13] and generative adversarial methods [14].
By exploiting the universal function approximation property of neural networks, it is possible to obtain numerical surrogates for function mappings or operators [e.g., 15, 16]. For a network parameterised by weights, there exists an optimal set of weights that results in the desired mapping; the challenge being to realise these through an optimisation process. This optimisation is formulated as the minimisation of a loss function, the evaluation of which offers a distance metric to the desired solution. When considering forms of structured data, convolutional neural networks excel due to their ability to exploit spatial correlations in the data, which arise from the use of localised kernel operations. In the case of data-driven problems, a typical approach is to minimise a measure of the distance between network realisations and the associated true value. In the absence of knowledge about the true state, it is possible to impose prior knowledge onto the system. For a physical problem, the classical means to accomplish this is through a form of physics-based regularisation, which introduces penalty terms in the loss function to promote physical solutions, i.e., penalising realisations that do not conform to the properties of the system [17]. Inspired by Lagaris et al. [17], Raissi et al. [18] introduced physics-informed neural networks (PINNs), which replace classical numerical solvers with surrogate models. The loss function leverages automatic differentiation to obtain numerical realisations of the gradients of the output (the predicted quantity) with respect to the input, which usually contains the spatiotemporal coordinates.
In this work, we propose a method for removing systematic errors with a physics-constrained machine learning approach. Imposing prior knowledge of the physics allows us to realise a mapping from the corrupted state to the true state in the form of a convolutional neural network, as shown in Figure 1. We emphasise that this is not a PINN approach [18] because we do not leverage automatic differentiation to obtain gradients of outputs with respect to inputs to constrain the physics. The network employs a time-batching scheme to efficiently compute the time-dependent residuals without resorting to recurrent network architectures. Realisations of the network that do not conform to the given partial differential equation are penalised, which ensures that the network learns to produce physical predictions. Ultimately, this allows the effective removal of additive systematic error from data.
In section §2, we formulate the problem of systematic error removal. An overview of convolutional neural networks is provided in section §3, which highlights how we exploit spatial invariance and describes the architecture employed for the systematic error removal task. We detail the methodology in section §3.1, which provides justifications for the design of the loss function, before showcasing results in section §5. We provide a form of parameterised systematic error using a multimodal function [19], which allows us to explore the effect of high wavenumbers and magnitudes of the systematic error. Results are obtained for three systems of increasing physical complexity: linear convection-diffusion, nonlinear convection-diffusion, and a two-dimensional turbulent flow. We further analyse the results for the two-dimensional turbulent flow case, investigating the physical coherence of the network predictions. Section §7 concludes the paper.
## 2 Problem formulation
We consider physical systems that can be modelled by partial differential equations (PDEs). Upon suitable spatial discretisation, a PDE with boundary conditions is viewed as a dynamical system in the form of
\[\mathcal{R}(\mathbf{u};\lambda)\equiv\partial_{t}\mathbf{u}-\mathcal{N}(\mathbf{u}; \lambda), \tag{1}\]
where \(\mathbf{x}\in\Omega\subset\mathbb{R}^{n}\) denotes the spatial location; \(t\in[0,T]\subset\mathbb{R}_{\geq 0}\) is the time; \(\mathbf{u}:\Omega\times[0,T]\to\mathbb{R}^{m}\) is the state; \(\lambda\) are physical parameters of the system; \(\mathcal{N}\) is a sufficiently smooth differential operator; \(\mathcal{R}\) is the residual; and \(\partial_{t}\) is the partial derivative with respect to time. A solution of the PDE \(\mathbf{u}^{*}\) is the function that makes the residual vanish, i.e., \(\mathcal{R}(\mathbf{u}^{*};\lambda)=0\).
We consider that the observations on the system's state, \(\mathbf{\zeta}(\mathbf{x},t)\), are corrupted by an additive systematic error
\[\mathbf{\zeta}(\mathbf{x},t)=\mathbf{u}^{*}(\mathbf{x},t)+\mathbf{\phi}(\mathbf{x}), \tag{2}\]
where \(\mathbf{\phi}\) is the stationary systematic error, which is spatially varying. For example, additive systematic errors can be caused by miscalibrated sensors [1], or modelling assumptions [20]. The quantity in Eq. (2) constitutes the corrupted dataset, which fulfils the inequality
\[\mathcal{R}(\mathbf{\zeta}(\mathbf{x},t);\lambda)\neq 0. \tag{3}\]
Given the corrupted state, \(\mathbf{\zeta}\), we wish to uncover the underlying solution to the governing equations, \(\mathbf{u}^{*}\), which is referred to as the true state. Mathematically, we need to compute a mapping, \(\mathbf{\eta}_{\mathbf{\theta}}\) such that
\[\mathbf{\eta}_{\mathbf{\theta}}:\mathbf{\zeta}(\mathbf{\Omega}_{g},t)\mapsto\mathbf{u}^{*}(\mathbf{ \Omega}_{g},t), \tag{4}\]
where the domain \(\Omega\) is discretised on a grid \(\mathbf{\Omega}_{g}\subset\mathbb{R}^{N^{n}}\), which is uniform and structured in this study. We assume the mapping \(\mathbf{\eta}_{\mathbf{\theta}}\) to be a parametric function, which depends on the parameters \(\mathbf{\theta}\) that need to be found. In section §3, we describe the parametric function that we employ in this study. Figure 1 provides an overview of the systematic error removal problem, which is applied to a turbulent flow. The corrupted field \(\mathbf{\zeta}(\mathbf{\Omega}_{\mathbf{g}},t)\) is passed as input to the appropriately trained model \(\mathbf{\eta}_{\mathbf{\theta}}\), which allows us to uncover the underlying solution \(\mathbf{u}^{*}\) to the partial differential equation, and, as a byproduct, the additive systematic error \(\mathbf{\phi}\).
The linear vs. nonlinear nature of the partial differential equations has a computational implication. For a linear partial differential equation, the residual is an explicit function of the systematic error, i.e., \(\mathcal{R}(\mathbf{\zeta}(\mathbf{x},t);\lambda)=\mathcal{R}(\mathbf{\phi}(\mathbf{x});\lambda)\). This does not hold true for nonlinear systems, which makes the problem of removing the systematic error more challenging. This is discussed in section §5.
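As a concrete illustration of Eqs. (2)-(3), the sketch below builds a corrupted dataset from an assumed analytic field; the solution and the error shape are stand-ins for demonstration, not the flows studied later.

```python
import numpy as np

N, T = 64, 100
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Assumed "true" state u*(x, t) and a stationary, spatially varying error phi(x).
u_true = np.stack([np.sin(X) * np.cos(Y) * np.cos(0.1 * t) for t in range(T)])
phi = 0.5 * np.cos(3.0 * X) * np.cos(3.0 * Y)

zeta = u_true + phi[None, :, :]   # Eq. (2): zeta(x, t) = u*(x, t) + phi(x)
```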
## 3 The Physics-constrained convolutional neural network
We propose the physics-constrained convolutional neural network (PC-CNN) for uncovering solutions of PDEs from corrupted data. Convolutional neural networks are suitable tools for structured data, and they ensure shift invariance [21, 22]. When working with data from partial differential equations, the spatial structure is physically correlated with respect to the characteristic spatial scale of the problem, for example, correlation lengths in turbulent flows, and diffusive spatio-temporal scales in convection, among others [23]. By leveraging an architectural paradigm that exploits these spatial structures, we propose parameterised models that, when appropriately trained and tuned, can generalise to unseen data [24, 25]. We provide a brief summary of convolutional neural networks and refer the reader to [26] for a pedagogical explanation. A convolutional layer is responsible for the discrete mapping
\[\kappa:(\mathbf{w},\mathbf{b},\mathbf{x})\mapsto\mathbf{x}\ast\mathbf{w}+\mathbf{b}, \tag{5}\]
where \(\mathbf{w}\in\mathbb{R}^{k^{d}\times c_{o}}\) represents a trainable kernel; \(\mathbf{b}\in\mathbb{R}^{c_{o}}\) is an additive bias; \(\ast\) denotes the convolution operation; and \(\mathbf{x}\in\mathbb{R}^{m^{d}\times c_{i}}\) is the input data. The number of parameters in the network is proportional to the dimensionality of the kernels, whose spatial extent is determined by \(k\), while the number of filters, or channels, is given by \(c_{o}\). The dimensionality of the input is independent of the kernel and bias, with \(c_{i}\) determining the number of input channels. The trainable kernel operates locally around each pixel, which leverages information from the surrounding grid cells. As such, convolutional layers are an excellent choice for learning and exploiting local spatial correlations [21, 27, 28]. Each channel in the kernel is responsible for extracting different features present in the input data. This determines the degree to which an arbitrary function can be learned, as per the universal approximation theorem [15]. A convolutional neural network is an architecture that leverages multiple convolutions, which can be viewed as a composition of multiple layers, where \(Q\) is the number of layers
\[\mathbf{\eta}_{\mathbf{\theta}}=\mathbf{\eta}_{\mathbf{\theta}_{Q}}^{Q}\circ\cdots\circ h(\bm {\eta}_{\mathbf{\theta}_{1}}^{1})\circ h(\mathbf{\eta}_{\mathbf{\theta}_{0}}^{0}). \tag{6}\]
In the most basic form, each layer \(\mathbf{\eta}_{\mathbf{\theta}_{i}}^{i}\) is a convolution \(\kappa\) followed by an element-wise nonlinear activation function \(h\). In the absence of nonlinearities, a network is capable only of learning a linear transformation, which restricts the function approximation space [29]. The introduction of these nonlinear activations is a key ingredient of the expressivity of the network. The final layer \(\mathbf{\eta}_{\mathbf{\theta}_{Q}}^{Q}\) is exempt from activation, as an activation would limit the expressive power of the output. There are a number of modifications that can be made to each layer; we refer the reader to [30] for a more general treatment of the topic. As the filter is convolved over each discrete pixel, the operation on large spatial domains may become computationally expensive due to the \(\mathcal{O}(k^{2})\) scaling. To mitigate the increased computational expense, we choose a kernel size \(k=3\) for each convolutional layer, which is a common choice for CNN architectures [30].
Figure 1: Removal of systematic error from data. The model \(\mathbf{\eta}_{\mathbf{\theta}}\) is responsible for recovering the underlying solution, \(\mathbf{u}^{\ast}(\mathbf{\Omega}_{g},t)\), from the corrupted data, \(\mathbf{\zeta}(\mathbf{\Omega}_{g},t)\). The systematic error, \(\mathbf{\phi}(\mathbf{x})\), is the difference between the corrupted data and the underlying solution.
The basic architecture is extended through the use of pooling and upsampling layers to vary the spatial dimensionality of the input to sequential convolutional layers. In this work, we employ mean pooling and bilinear upsampling with a kernel size of \(k=2\) in each case. Varying the dimensionality of the input to the layers aids with feature extraction and provides another means to reduce the computational expense of the network - each filter \(\kappa\) is convolved over a smaller spatial domain. Figure 2 depicts the architecture used for the systematic error removal task, which highlights the operations employed with the dimensionality at each layer. The corrupted observations \(\mathbf{\zeta}(\mathbf{\Omega}_{g},t)\) are passed as input to the network, with the outputs estimating the predicted true-state, \(\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t))\).
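A compact PyTorch sketch of the encoder-decoder pattern in Figure 2 is given below. Only the kernel sizes are taken from the text (k = 3 for convolutions, k = 2 for mean pooling and bilinear upsampling); the channel widths and the number of blocks are our assumptions.

```python
import torch
from torch import nn

class PCCNN(nn.Module):
    """Sketch of the PC-CNN backbone: conv / mean-pool down, bilinear up."""
    def __init__(self, channels=2, hidden=32):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.AvgPool2d(kernel_size=2),                    # mean pooling, k = 2
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear"),   # bilinear, k = 2
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),  # no activation
        )

    def forward(self, zeta):                  # zeta: (batch, channels, N, N)
        return self.up(self.down(zeta))       # prediction of the true state

u_pred = PCCNN()(torch.randn(1, 2, 64, 64))  # same spatial shape in and out
```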
### Constraining the physics
The convolutional neural network, \(\mathbf{\eta}_{\mathbf{\theta}}\), defines a nonlinear parametric mapping, whose parameters, \(\mathbf{\theta}\), are found by training. The information that constrains the training originates from two sources: first, the data, which may be corrupted; and second, the prior knowledge of the physical system, which is encoded in partial differential equations. The training is a minimisation problem for a cost functional \(\mathcal{L}_{\mathbf{\theta}}\), which reads
\[\mathbf{\theta}^{*} =\operatorname*{argmin}_{\mathbf{\theta}}\mathcal{L}_{\mathbf{\theta}}, \tag{7}\] \[\mathcal{L}_{\mathbf{\theta}} =\mathcal{L}_{\mathcal{R}}+\alpha\left(\mathcal{L}_{\partial\mathbf{ \Omega}}+\mathcal{L}_{\mathbf{\phi}}\right),\]
where \(\alpha\) is a fixed, empirical weighting factor, which determines the relative importance of the loss terms. Given corrupted-data \(\mathbf{\zeta}(\mathbf{\Omega}_{g},t)\) at times \(t\in\mathcal{T}\subseteq[0,T]\), we define each of the loss terms in Eq. 7 as
\[\mathcal{L}_{\mathcal{R}} =\frac{1}{N_{\mathcal{T}}}\sum_{t\in\mathcal{T}}\left\|\mathcal{R }\big{(}\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t));\lambda\big{)} \right\|_{\mathbf{\Omega}_{g}}^{2}, \tag{8}\] \[\mathcal{L}_{\partial\mathbf{\Omega}} =\frac{1}{N_{\mathcal{T}}}\sum_{t\in\mathcal{T}}\left\|\mathbf{\eta}_{ \mathbf{\theta}}(\mathbf{\zeta}(\partial\mathbf{\Omega}_{g},t))-\mathbf{u}(\partial\mathbf{\Omega} _{g},t)\right\|_{\partial\mathbf{\Omega}_{g}}^{2},\] (9) \[\mathcal{L}_{\mathbf{\phi}} =\frac{1}{N_{\mathcal{T}}}\sum_{t\in\mathcal{T}}\left\|\partial_{ t}\big{[}\mathbf{\zeta}(\mathbf{\Omega}_{g},t)-\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{ \Omega}_{g},t))\big{]}\right\|_{\mathbf{\Omega}_{g}}^{2}, \tag{10}\]
Figure 2: Diagram of network architecture and training pipeline. In the network architecture \(\mathbf{\eta}_{\mathbf{\theta}}\): yellow layers denote convolution operations; red layers denote mean-pooling; blue layers denote bi-linear interpolation. Composite losses \(\mathcal{L}_{\mathcal{R}},\mathcal{L}_{\partial\mathbf{\Omega}},\mathcal{L}_{\mathbf{ \phi}}\) are computed on the output field and inferred systematic error. These are then combined to compute the training loss \(\mathcal{L}_{\mathbf{\theta}}\), which is used to update weights \(\mathbf{\theta}\) of the network. We provide an in-depth explanation of these losses in section §3.1.
where \(N_{\mathcal{T}}\) is the number of time steps; \(\partial\mathbf{\Omega}_{g}\) denotes boundary points of the grid, and \(\|\cdot\|_{\mathbf{\Omega}_{g}}\) is the \(\ell^{2}\)-norm over the given domain. The terms \(\mathcal{L}_{\mathcal{R}},\mathcal{L}_{\partial\mathbf{\Omega}},\mathcal{L}_{\phi}\) denote the residual loss, boundary loss, and systematic error loss, respectively.
In order to find an optimal set of parameters \(\mathbf{\theta}^{*}\), the loss is designed to penalise, or regularise, predictions that do not conform to the desired output. We impose prior knowledge of the dynamical system using the residual-based loss \(\mathcal{L}_{\mathcal{R}}\), promoting parameters that yield predictions conforming to the governing equations. Successful minimisation of the residual-based loss ensures that network realisations satisfy the residual of the system, as shown in Eq. (1). The magnitude of the parameter updates, \(\partial\mathcal{L}_{\mathbf{\theta}}/\partial\mathbf{\theta}\), is proportional to the magnitude of the loss, which ensures that large violations of the residual are heavily penalised.
The residual-based loss alone is insufficient for obtaining the desired mapping. In the absence of collocation points, the mapping \(\mathbf{\eta}_{\mathbf{\theta}}\) is non-unique: any prediction that satisfies the governing equations is valid. To avoid this, we impose knowledge of the ground-truth solution at selected collocation points, chosen to lie on the boundaries of the domain \(\partial\mathbf{\Omega}\). The data-driven boundary loss \(\mathcal{L}_{\partial\mathbf{\Omega}}\) minimises the error between predictions and measurements at these collocation points. This restricts the function approximation space to provide a unique mapping and ensures that the predictions are consistent with the observations.
Inclusion of the systematic-error-based loss \(\mathcal{L}_{\mathbf{\phi}}\) embeds our assumption of a stationary systematic error \(\mathbf{\phi}=\mathbf{\phi}(\mathbf{x})\). Penalising network realisations whose inferred systematic error varies in time helps stabilise training and drives predictions away from trivial solutions, such as the stationary solution \(\mathbf{u}(\mathbf{\Omega}_{g},t)=0\). Consequently, this eliminates challenging local minima in the loss surface, which improves the convergence of the optimisation approach shown in Eq. (7).
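Under the definitions above, one evaluation of the training loss can be sketched as follows. Here `residual` is an assumed callable implementing R( · ; λ) for the chosen PDE, the boundary collocation points are encoded as a boolean mask, and a forward-Euler difference enforces the stationarity of the inferred error.

```python
import torch

def pc_loss(model, zeta, u_boundary, boundary_mask, residual, dt, alpha=1e3):
    """zeta: (tau, c, N, N) window; boundary_mask: (N, N) bool; Eqs. (7)-(10)."""
    pred = model(zeta)
    loss_r = residual(pred).pow(2).mean()                            # Eq. (8)
    loss_b = (pred[:, :, boundary_mask] - u_boundary).pow(2).mean()  # Eq. (9)
    phi = zeta - pred                                                # inferred error
    loss_phi = ((phi[1:] - phi[:-1]) / dt).pow(2).mean()             # Eq. (10)
    return loss_r + alpha * (loss_b + loss_phi)                      # Eq. (7)
```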
### Time-batching
Computing the residual of a time-dependent partial differential equation is inherently sequential, as can be seen in Eq. (1), whereas convolutional neural networks do not account for the sequentiality of the data in their architecture. In this paper, we propose time-batching the data to compute the time derivative \(\partial_{t}\mathbf{u}\), which is necessary for the residual computation. We consider each sample passed to the network to constitute a window of \(\tau\) sequential timesteps. This allows us both to compute predictions in parallel and to share weights, which decreases the computational cost of the problem. Employing a time-windowing approach provides further benefits: it allows us to process non-contiguous elements of the timeseries and to sample different points in the temporal domain. We provide an explicit formulation of the residual-based loss
\[\mathcal{L}_{\mathcal{R}}=\frac{1}{\tau N_{\mathcal{T}}}\sum_{t\in\mathcal{T}}\sum_{n=0}^{\tau}\left\|\partial_{t}\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t+n\Delta t))-\mathcal{N}(\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t+n\Delta t));\lambda)\right\|_{\mathbf{\Omega}_{g}}^{2}, \tag{11}\]
in which the time-derivative is computed using the explicit forward-Euler method over predictions at subsequent timesteps, and the differential operator is evaluated at each timestep. In so doing, we are able to obtain the residual for the predictions in a temporally local sense; rather than computing the residual across the entire time-domain, we are able to compute the residual over each time window. This reduces the computational cost and increases the number of independent batches provided to the network.
We augment the other loss terms to operate in the same manner, over the time window as well as the conventional batch dimension. While the time-independent boundary loss \(\mathcal{L}_{\partial\mathbf{\Omega}}\) need not leverage the connection between adjacent timesteps, the systematic error loss \(\mathcal{L}_{\mathbf{\phi}}\) computes the time derivative in the same way, evaluating the forward-Euler method between consecutive timesteps.
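A sketch of the time-batched residual of Eq. (11); `N_op` stands in for the discretised differential operator N( · ; λ) and is an assumption of this illustration.

```python
import torch

def residual_loss(preds, N_op, dt):
    """preds: (tau, c, N, N) network outputs for one time window."""
    du_dt = (preds[1:] - preds[:-1]) / dt   # forward-Euler time derivative
    rhs = N_op(preds[:-1])                  # operator evaluated per snapshot
    return (du_dt - rhs).pow(2).mean()      # squared residual over the window
```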
#### 3.2.1 Choice of the time window
The choice of the time window \(\tau\) is important. For a fixed quantity of training data, we need to consider the trade-off between computing the residual over longer time windows and sampling a larger number of windows from within the time domain. If we consider \(N=\tau N_{\mathcal{T}}\) training samples, a larger value of \(\tau\) corresponds to fewer time windows. Limiting the number of time windows used for training has an adverse effect on the ability of the model to generalise; the information content of contiguous timesteps is lower than that of timesteps sampled uniformly across the time domain. A further consideration is that, although evaluating the residual across large time windows promotes numerical stability, it smooths the gradients computed in backpropagation and makes the model more difficult to train, especially in the case of chaotic systems. Empirically, we choose \(\tau=2\), the minimum window size feasible for computing the residual-based loss \(\mathcal{L}_{\mathcal{R}}\). We find that this is sufficient for training the network whilst simultaneously maximising the number of independent samples used for training. To avoid duplication of data in the training set, we ensure that all samples are at least \(\tau\) timesteps apart, so that independent time windows are guaranteed not to overlap.
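The sampling scheme can be sketched as follows: window starts are drawn on a stride-τ lattice, so windows of τ consecutive snapshots can never overlap.

```python
import numpy as np

def sample_windows(n_steps, n_windows, tau=2, seed=0):
    rng = np.random.default_rng(seed)
    starts = rng.choice(np.arange(0, n_steps - tau, tau),
                        size=n_windows, replace=False)  # stride tau => disjoint
    return [np.arange(s, s + tau) for s in starts]
```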
## 4 Data generation
A high-fidelity numerical solver provides the datasets used throughout this paper. We utilise a differentiable pseudospectral spatial discretisation to solve the governing partial differential equations [31]. The solution is computed on the spectral grid \(\mathbf{\hat{\Omega}}_{k}\in\mathbb{Z}^{K^{n}}\). This spectral discretisation spans the spatial domain \(\Omega\in[0,2\pi)\subset\mathbb{R}^{2}\), enforcing periodic boundary conditions on \(\partial\Omega\). A solution is produced by time integration of the dynamical system with the explicit forward-Euler scheme, in which we choose the timestep \(\Delta t\) to satisfy the Courant-Friedrichs-Lewy (CFL) condition [32]. The initial conditions in each case are generated using the equation
\[\mathbf{u}^{*}(\mathbf{\hat{\Omega}}_{k},0)=\frac{\iota e^{2\pi i\mathbf{\epsilon}}}{ \sigma\sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{|\mathbf{\hat{\Omega}}_{k}|}{ \sigma}\right)^{2}}\qquad\text{with}\quad\mathbf{\epsilon}_{i}\sim N(0,1), \tag{12}\]
where \(\iota\) denotes the magnitude; \(\sigma\) denotes the standard deviation; and \(\mathbf{\epsilon}_{i}\) is a sample from a unit normal distribution. Equation 12 produces a pseudo-random field scaled by the wavenumber \(\mathbf{\hat{\Omega}}_{k}\), which ensures that the resultant field has spatial structures of varying lengthscales. We take \(\iota=10,\sigma=1.2\) for all simulations to generate initial conditions that promote numerical stability.
As a consequence of the Nyquist-Shannon sampling criterion [32], the resolution of the spectral grid \(\mathbf{\hat{\Omega}}_{k}\in\mathbb{Z}^{K^{n}}\) places a lower bound on the spatial resolution. For a signal containing a maximal frequency \(\omega_{\text{max}}\), the sampling frequency \(\omega_{s}\) must satisfy the condition \(\omega_{\text{max}}<\nicefrac{{\omega_{s}}}{{2}}\), therefore, we ensure that the spectral resolution satisfies \(K<\nicefrac{{N}}{{2}}\). Violation of this condition induces spectral aliasing, in which energy content from frequencies exceeding the Nyquist limit \(\nicefrac{{\omega_{s}}}{{2}}\) is fed back to the low frequencies, which amplifies energy content unphysically. To prevent aliasing, we employ a spectral grid \(\mathbf{\hat{\Omega}}_{k}\in\mathbb{Z}^{32\times 32}\), sampling on the physical grid \(\mathbf{\Omega}_{g}\in\mathbb{R}^{64\times 64}\). Approaching the Nyquist limit allows us to resolve the smallest turbulent structures possible without introducing spectral aliasing.
The pseudospectral discretisation [31] provides an efficient means to compute the differential operator \(\mathcal{N}\). This allows us to accurately evaluate the residual-based loss \(\mathcal{L}_{\mathcal{R}}\) in the Fourier domain
\[\mathcal{L}_{\mathcal{R}}=\frac{1}{\tau N_{\mathcal{T}}}\sum_{t\in\mathcal{T} }\sum_{n=0}^{\tau}\lVert\partial_{t}\mathbf{\hat{\eta_{\theta}}}(\mathbf{\zeta}( \mathbf{\Omega}_{g},t+n\Delta t))-\hat{\mathcal{N}}(\mathbf{\hat{\eta_{ \theta}}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t+n\Delta t)))\rVert^{2}_{\mathbf{ \hat{\Omega}}_{k}}, \tag{13}\]
in which \(\mathbf{\hat{\eta_{\theta}}}=\mathcal{F}\circ\mathbf{\eta_{\theta}}\) where \(\mathcal{F}\) is the Fourier operator, and \(\hat{\mathcal{N}}\) denotes the Fourier-transformed differential operator. Computing the loss \(\mathcal{L}_{\mathcal{R}}\) in the Fourier domain provides two advantages: (i) periodic boundary
conditions are enforced automatically, which enables us to embed prior knowledge in the loss calculations; and (ii) gradient calculations have spectral accuracy [32]. In contrast, a conventional finite-difference approach requires a computational stencil, the spatial extent of which places an error bound on the gradient computation. This error bound is itself a function of the spatial resolution of the field.
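The spectral accuracy mentioned above can be illustrated in one dimension: differentiation reduces to multiplication by ik in Fourier space, and periodicity is enforced by construction.

```python
import numpy as np

def spectral_dx(u, L=2.0 * np.pi):
    """Spectrally accurate first derivative of a periodic signal u on [0, L)."""
    N = u.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
assert np.allclose(spectral_dx(np.sin(x)), np.cos(x))  # exact to round-off
```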
## 5 Results
In this section, we first introduce the mathematical parameterisation of the systematic error and provide an overview of the numerical details, data, and performance metrics used in the study. Second, we demonstrate the ability of the model to recover the underlying solution for three partial differential equations, and we analyse its robustness and accuracy. We consider three partial differential equations of increasing complexity: linear convection-diffusion, nonlinear convection-diffusion (Burgers' equation [33, 34]), and the two-dimensional turbulent Navier-Stokes equations (the Kolmogorov flow [35]). Finally, we analyse the physical consistency of the physics-constrained convolutional neural network (PC-CNN) predictions for the two-dimensional turbulent flow. These analyses demonstrate the method on increasingly challenging partial differential equations, up to those exhibiting nonlinear, chaotic behaviour.
### Parameterisation of the systematic error
The systematic error is defined with the modified Rastrigin parameterisation [19], which is commonly used in non-convex optimisation benchmarks. Parameterising the Rastrigin function by both frequency and magnitude allows us to assess performance over a range of corrupted fields with multi-modal features. Multi-modality is a significant challenge for uncovering the solutions of partial differential equations from corrupted data. We parameterise the systematic error as
\[\mathbf{\phi}(\mathbf{x};\mathcal{M},k_{\phi},u_{\max})=\frac{\mathcal{M}u_{\max}}{2 \pi^{2}+40}\left(20+\sum_{i=1}^{2}\left[(\mathbf{x}_{i}-\pi)^{2}-10\cos(k_{\phi}( \mathbf{x}_{i}-\pi))\right]\right) \tag{14}\]
where \(\mathcal{M}\) is the relative magnitude of systematic error; \(u_{\max}\) is the maximum velocity observed in the flow; and \(k_{\phi}\) is the Rastrigin wavenumber of the systematic error. Parameterising the systematic error in this manner allows us to evaluate the performance of the methodology with respect to the magnitude and modality of the error.
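Eq. (14) translates directly into code; the grid below mirrors the 64 × 64 discretisation of this study, and the default parameters follow the first snapshot shown later.

```python
import numpy as np

def rastrigin_error(X, Y, M=0.5, k_phi=3, u_max=1.0):
    """Modified Rastrigin systematic error of Eq. (14)."""
    scale = (M * u_max) / (2.0 * np.pi**2 + 40.0)
    body = 20.0
    for xi in (X, Y):
        body = body + (xi - np.pi) ** 2 - 10.0 * np.cos(k_phi * (xi - np.pi))
    return scale * body

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = rastrigin_error(X, Y)        # multimodal, spatially varying error field
```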
### Numerical details, data, and performance metrics
We uncover the solutions of partial differential equations from corrupted data for three physical systems of increasing complexity. Each system is solved via the pseudospectral discretisation described in section §4, producing a solution by time integration with the forward-Euler method and a timestep of \(\Delta t=5\times 10^{-3}\), chosen to satisfy the Courant-Friedrichs-Lewy (CFL) condition and promote numerical stability. The model training is performed with 1024 training samples and 256 validation samples, which are pseudo-randomly selected from the time domain of the solution. The _adam_ optimiser is employed [36] with a learning rate of \(3\times 10^{-4}\). The weighting factor for the loss \(\mathcal{L}_{\mathbf{\theta}}\) (Eq. 7) is \(\alpha=10^{3}\), which is empirically determined to provide stable training. Each model is trained for a total of \(10^{4}\) epochs, which is chosen to provide sufficient convergence. The accuracy of the prediction is quantified by the relative error on the validation dataset
\[e=\sqrt{\frac{\sum_{t\in\mathcal{T}}\lVert\mathbf{u}^{*}(\mathbf{\Omega}_{\mathbf{g}},t)- \mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{\mathbf{g}},t))\rVert_{\mathbf{\Omega }_{\mathbf{g}}}^{2}}{\sum_{t\in\mathcal{T}}\lVert\mathbf{u}^{*}(\mathbf{\Omega}_{\mathbf{g}}, t)\rVert_{\mathbf{\Omega}_{\mathbf{g}}}^{2}}}. \tag{15}\]
This metric takes the magnitude of the solution into account, which allows the results from different systems to be compared. All experiments are run on a single Quadro RTX 8000 GPU.
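For reference, the relative error of Eq. (15) reduces to a few lines.

```python
import numpy as np

def relative_error(u_true, u_pred):
    """Eq. (15): error normalised by the magnitude of the true solution."""
    return np.sqrt(np.sum((u_true - u_pred) ** 2) / np.sum(u_true ** 2))
```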
### Uncovering solutions from corrupted data
In this section, we uncover solutions from corrupted data with test cases on the linear convection-diffusion equation, Burgers' equation, and the two-dimensional turbulent Navier-Stokes flow.
#### 5.3.1 The linear convection-diffusion equation
The linear convection-diffusion equation is used to describe a variety of physical phenomena [37]. The equation is defined as
\[\partial_{t}\mathbf{u}+\mathbf{c}\cdot\nabla\mathbf{u}=\nu\Delta\mathbf{u}, \tag{16}\]
where \(\nabla\) is the nabla operator; \(\Delta\) is the Laplacian operator; \(\mathbf{c}\equiv(c,c)\), where \(c\) is the convective coefficient; and \(\nu\) is the diffusion coefficient. The dissipative nature of the flow is further exacerbated by the presence of periodic boundary conditions. The flow energy is subject to rapid decay because the solution quickly converges towards the fixed-point solution \(\mathbf{u}^{*}(\mathbf{\Omega}_{g},t)=0\). At the fixed point, we observe \(\mathbf{\zeta}(\mathbf{\Omega}_{g},t)=\mathbf{\phi}(\mathbf{\Omega}_{g})\), which is a trivial case for the identification and removal of the systematic error. In order to avoid rapid convergence to the fixed point, we take \(c=1.0,\nu=\nicefrac{1}{500}\) as the coefficients, producing a convection-dominated solution.
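Because the operator in Eq. (16) is linear, its Fourier representation is diagonal per wavenumber. A sketch of the right-hand side used in the time integration is shown below, with c and ν as in this section.

```python
import numpy as np

def convection_diffusion_rhs(u_hat, kx, ky, c=1.0, nu=1.0 / 500.0):
    """Eq. (16) in Fourier space: du_hat/dt = -(i c (kx + ky) + nu |k|^2) u_hat."""
    return -(1j * c * (kx + ky) + nu * (kx**2 + ky**2)) * u_hat

N = 64
k = 2.0 * np.pi * np.fft.fftfreq(N, d=2.0 * np.pi / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
u_hat = np.fft.fft2(np.random.default_rng(0).standard_normal((N, N)))
u_hat = u_hat + 5e-3 * convection_diffusion_rhs(u_hat, kx, ky)  # one Euler step
```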
A snapshot of the results for \(k_{\phi}=3,\mathcal{M}=0.5\) is shown in Figure 3a. There is a marked difference between the corrupted observations \(\mathbf{\zeta}(\mathbf{\Omega}_{g},t)\) in panel (i) and the underlying solution \(\mathbf{u}^{*}(\mathbf{\Omega}_{g},t)\) in panel (ii), most notably in the magnitude of the field. The network predictions in panel (iv) uncover the underlying solution to the partial differential equation with a relative error of \(e=6.612\times 10^{-2}\) on the validation set.
#### 5.3.2 Nonlinear convection-diffusion equation
The nonlinear convection-diffusion equation, also known as Burgers' equation [34], is
\[\partial_{t}\mathbf{u}+\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}=\nu\Delta\mathbf{u}, \tag{17}\]
where the nonlinearity is in the convective term \(\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}\), the cause of turbulence in the Navier-Stokes equations. The nonlinear convective term provides a further challenge: below a certain threshold, the velocity interactions lead to further energy decay. The kinematic viscosity is set to \(\nu=\nicefrac{{1}}{{500}}\), which produces a convective-dominant solution by time integration.
In contrast to the linear convection-diffusion system, the relationship between the dynamics of the corrupted state and the true state is more challenging. The introduction of nonlinearities in the differential operator invalidates the linear relationship between the systematic error and the observed state, i.e., \(\mathcal{R}(\mathbf{\zeta}(\mathbf{x},t);\lambda)\neq\mathcal{R}(\mathbf{\phi}(\mathbf{x});\lambda)\), as discussed in section §2. Consequently, this increases the complexity of the residual-based loss term \(\mathcal{L}_{\mathcal{R}}\). Figure 3b shows a snapshot of the results for \(k_{\phi}=5,\mathcal{M}=0.5\). The underlying solution \(\mathbf{u}^{*}(\mathbf{\Omega}_{g},t)\) of the partial differential equation contains high-frequency spatial structures, shown in panel (ii), which are less prominent in the corrupted data \(\mathbf{\zeta}(\mathbf{\Omega}_{g},t)\), shown in panel (i). Despite the introduction of a nonlinear differential operator, the network retains the ability to recover the underlying solution, with no notable difference in structure between the predicted and ground-truth fields, shown in panels (iv) and (ii), respectively. The relative error on the validation set is \(e=6.791\times 10^{-2}\).
#### 5.3.3 Two-dimensional turbulent flow
We consider a chaotic flow governed by the incompressible Navier-Stokes equations on a two-dimensional domain with periodic boundary conditions and a periodic forcing term. This flow is also known as the Kolmogorov flow [35]. The equations express the conservation of mass and momentum, respectively
\[\begin{split}\nabla\cdot\mathbf{u}&=0,\\ \partial_{t}\mathbf{u}+\left(\mathbf{u}\cdot\nabla\right)\mathbf{u}& =-\nabla p+\nu\Delta\mathbf{u}+\mathbf{g}.\end{split} \tag{18}\]
where \(p\) is the scalar pressure field; \(\nu\) is the kinematic viscosity; and \(\mathbf{g}\) is a body forcing, which sustains the dynamics by ensuring that the flow energy does not fully dissipate at the statistically stationary regime. The flow density is constant and assumed to be unity.
The use of a pseudospectral discretisation allows us to eliminate the pressure term and handle the continuity constraint implicitly, as shown in the spectral representation of the two-dimensional turbulent flow
\[\begin{split}\left(\frac{d}{dt}+\nu|\mathbf{k}|^{2}\right)\mathbf{\hat{u} }_{k}&=\mathbf{\hat{f}}_{k}-\mathbf{k}\frac{\mathbf{k}\cdot\mathbf{\hat{f}}_{k}}{ |\mathbf{k}|^{2}}+\mathbf{\hat{g}}_{k}\qquad\text{with}\quad\mathbf{\hat{f}}_{k}=-\left( \widehat{\mathbf{u}\cdot\nabla\mathbf{u}}\right)_{k},\\ \mathcal{R}(\mathbf{\hat{u}}_{k};\lambda)&=\left(\frac{ d}{dt}+\nu|\mathbf{k}|^{2}\right)\mathbf{\hat{u}}_{k}-\mathbf{\hat{f}}_{k}+\mathbf{k}\frac{\mathbf{k} \cdot\mathbf{\hat{f}}_{k}}{|\mathbf{k}|^{2}}-\mathbf{\hat{g}}_{k},\end{split} \tag{19}\]
where nonlinear terms are handled in the pseudospectral sense, employing the standard \(\nicefrac{2}{3}\) de-aliasing rule to avoid unphysical accumulation of energy at the high frequencies [32]. As a result of the implicit handling of the continuity constraint, we can use the residual-based loss as prescribed, without imposing any further constraints for mass conservation.
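The 2/3 de-aliasing rule can be sketched as a wavenumber mask applied to the transformed nonlinear term; the grid size below is illustrative.

```python
import numpy as np

def dealias_mask(N):
    """Zero all modes with |k| above two thirds of the maximum resolved wavenumber."""
    k = np.abs(np.fft.fftfreq(N, d=1.0 / N))        # |k| = 0, 1, ..., N/2
    keep = k <= (2.0 / 3.0) * (N // 2)
    return keep[:, None] & keep[None, :]            # 2-D mask over (kx, ky)

f_hat = np.fft.fft2(np.random.default_rng(1).standard_normal((64, 64)))
f_hat = f_hat * dealias_mask(64)                    # de-aliased nonlinear term
```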
The pressure gradient, \(\mathbf{\nabla}p\), and the forcing term, \(\mathbf{g}=\mathbf{g}(\mathbf{x})\), induce chaotic dynamics in the form of turbulence for \(Re=\nicefrac{1}{\nu}>40\). We take \(\nu=\nicefrac{1}{42}\) to ensure chaotic dynamics, imposing constant periodic forcing with \(\mathbf{g}(\mathbf{x})=(\sin{(4\mathbf{x_{2}})},0)\). In order to focus on the statistically stationary regime, the transient up to \(T_{t}=180.0\) is discarded from the data.
A snapshot for the two-dimensional turbulent flow case is shown in Figure 3c for \(k_{\phi}=7,\mathcal{M}=0.5\). The corrupted field \(\mathbf{\zeta}(\mathbf{\Omega}_{g},t)\), as shown in panel (i), contains prominent, high-frequency spatial structures not present in the underlying solution \(\mathbf{u}^{*}(\mathbf{\Omega}_{g},t)\) shown in panel (ii). The corrupted field bears little resemblance to the true solution. In spite of the chaotic dynamics of the system, we demonstrate that the network \(\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t))\), shown in panel (iv), successfully removes the systematic error. The relative error on the validation set is \(e=2.044\times 10^{-2}\).
Figure 3: Snapshots of uncovered solutions from corrupted data. Panel (i) shows the corrupted field; (ii) shows the ground-truth solution; (iii) shows the true systematic error; and (iv) shows the network predictions; (v) shows the predicted systematic error.
### Robustness
In this section, we analyse the robustness of the methodology to systematic errors of varying modality and magnitude. The systematic error, \(\mathbf{\phi}(\mathbf{\Omega}_{g})\), is varied for a range of Rastrigin wavenumbers \(k_{\phi}\) and relative magnitudes \(\mathcal{M}\). These parameter ranges allow us to assess the degree to which the underlying solution of a partial differential equation can be recovered when subjected to spatially varying systematic error with different degrees of multi-modality and magnitudes. To this end, we propose two parametric studies for each partial differential equation
\[(i)\quad\mathcal{M}=0.5,\quad k_{\phi}\in\{1,3,5,7,9\};\] \[(ii)\quad k_{\phi}=3,\quad\quad\mathcal{M}\in\{0.01,0.1,0.25,0.5,1.0\}.\]
In the first case, we fix the relative magnitude and vary the Rastrigin wavenumber, whereas in the second case we fix the Rastrigin wavenumber and vary the relative magnitude. For each case, we compute the relative error, \(e\), between the predicted solution \(\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t))\) and the true solution \(\mathbf{u}^{*}(\mathbf{\Omega}_{g},t)\), as defined in Eq. (15). We report the mean error over five repeats of each experiment to ensure that the results are representative of the true performance. Empirically, we find that the performance is robust to the pseudo-random initialisation of the network parameters, with a standard deviation of \(\mathcal{O}(10^{-4})\).
The computational budget is fixed for each run, which employs the same experimental setup described in section §5.2. Whilst the optimal hyperparameters are likely to vary on a case-by-case basis, providing a common baseline allows for explicit comparison between cases. Assessing results using the same hyperparameters for the three partial differential equations allows us to explore the robustness of the methodology.
In the case of systematic error removal for the linear convection-diffusion problem, shown in Figure 4(a), we demonstrate that the relative error is largely independent of the form of the parameterised systematic error. The relative magnitude \(\mathcal{M}\) and Rastrigin wavenumber \(k_{\phi}\) have little impact on the ability of the model to uncover the true solution to the partial differential equation, with performance consistent across all cases. The model performs best for the case \(k_{\phi}=7,\mathcal{M}=0.5\), achieving a relative error of \(2.568\times 10^{-1}\).
Results for the nonlinear convection-diffusion case show a marked improvement in comparison with the linear case, with errors for the numerical study shown in Figure 4(b). Despite the introduction of a nonlinear operator, the relative error is consistently lower regardless of the parameterisation of the systematic error. The network remains equally capable of uncovering the underlying solution across a wide range of modalities and magnitudes.
Although the residual error from the network predictions \(\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t))\) no longer scales in a linear fashion, this is beneficial for the training dynamics: the second-order nonlinearity promotes convexity in the loss surface, which is then exploited by our gradient-based optimisation approach.
Figure 4: Robustness analysis. Results for the parameterised studies for each of the three partial differential equations. Orange bars denote results for case (_i_): fixing the magnitude and varying the Rastrigin wavenumber. Blue bars denote results for case (_ii_): fixing the Rastrigin wavenumber and varying the magnitude.
## 6 Physical consistency of uncovered Navier-Stokes solutions
Results in section §5.4 show that, for a generic training setup, we are able to achieve low values of the relative error regardless of the modality or magnitude of the systematic error. Because the two-dimensional turbulent flow case is chaotic, infinitesimal errors \(\mathcal{O}(\epsilon)\) grow exponentially in time; therefore, the residual \(\mathcal{R}(\mathbf{u}^{*}(\mathbf{\Omega}_{g},t)+\epsilon(\mathbf{\Omega}_{g},t);\lambda)\gg\mathcal{R}(\mathbf{u}^{*}(\mathbf{\Omega}_{g},t);\lambda)\), where \(\epsilon\) is a perturbation parameter. In this section, we analyse the physical properties of the solutions of the Navier-Stokes equations (Eq. 18) uncovered by the PC-CNN, \(\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t))\).
First, we show snapshots of the time evolution of the two-dimensional turbulent flow in Figure 5. These confirm that the model learns a physical solution across the time-domain of the solution, as shown by the error on a log-scale in the final column. The parameterisation of the systematic error is fixed in this case with \(k_{\phi}=7,\mathcal{M}=0.5\).
Second, we analyse the statistics of the solution. The mean kinetic energy of the flow at each timestep is directly affected by the introduction of the systematic error. In the case of our strictly positive systematic error \(\mathbf{\phi}(\mathbf{\Omega}_{g})\), the mean kinetic energy is increased at every point. Results in Figure 6 show the kinetic energy time series across the time-domain for (i) the ground truth \(\mathbf{u}^{*}(\mathbf{\Omega}_{g},t)\); (ii) the corrupted data \(\mathbf{\zeta}(\mathbf{\Omega}_{g},t)\); and (iii) the network's predictions \(\mathbf{\eta}_{\mathbf{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t))\). The chaotic nature of the solution can be observed from the spikes in the kinetic energy. We observe that the network predictions accurately reconstruct the kinetic energy trace across the simulation domain, despite fewer than \(2\%\) of the available timesteps being used for training. Third, we analyse the energy spectrum, which is characteristic of turbulent flows, in which the energy content decreases with the wavenumber. Introducing the systematic error at a particular wavenumber artificially alters the energy spectrum due to the increased energy content. In Figure 7, we show the energy spectrum for the two-dimensional turbulent flow, where this unphysical spike in energy content is visible for \(|\mathbf{k}|\geq 7\). Model predictions
Figure 5: Systematic error removal for the two-dimensional turbulent flow [\(k_{\phi}=7,\mathcal{M}=0.5\)]. \(T_{t}\) denotes the length of the transient.
\(\mathbf{\eta_{\theta}}(\mathbf{\zeta}(\mathbf{\Omega}_{g},t))\) correct the energy content for \(|\mathbf{k}|<21\), and successfully characterise and reproduce scales of turbulence. The exponential decay of energy content with increasing spatial frequency provides a significant challenge for the residual-based loss. Despite the challenge, we observe only minor discrepancy from the true solution at large wavenumbers, i.e., when aliasing occurs (\(|\mathbf{k}|\gtrapprox 29\)).
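The kinetic-energy and energy-spectrum diagnostics of Figures 6 and 7 can be sketched as below; the shell-binning convention and normalisation are our assumptions, as the paper does not state its exact estimator.

```python
import numpy as np

def mean_kinetic_energy(u: np.ndarray) -> float:
    """Mean kinetic energy of a velocity snapshot u of shape (2, N, N)."""
    return float(0.5 * np.mean(np.sum(u ** 2, axis=0)))

def energy_spectrum(u: np.ndarray):
    """Isotropic energy spectrum E(|k|) by shell-averaging the 2D spectrum."""
    n = u.shape[-1]
    u_hat = np.fft.fft2(u, axes=(-2, -1)) / n ** 2
    density = 0.5 * np.sum(np.abs(u_hat) ** 2, axis=0)          # (N, N)
    k = np.fft.fftfreq(n, d=1.0 / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    shells = np.rint(np.hypot(kx, ky)).astype(int)              # |k| bins
    e_k = np.bincount(shells.ravel(), weights=density.ravel())
    return np.arange(e_k.size), e_k
```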
The ability of the network \(\mathbf{\eta_{\theta}}\) to recover the underlying solution from corrupted data is demonstrated to yield low error when compared with the true underlying solution of the partial differential equation. By investigating the properties and statistics of the solution, we demonstrate that, even for turbulent nonlinear systems, we are able to produce predictions that adhere to the physical properties of the system.
## 7 Conclusion
Data and models can be corrupted by systematic errors, which may originate from miscalibration of the experimental apparatus and epistemic uncertainties. We introduce a methodology to remove the systematic error from data to uncover the physical solution of the partial differential equation. First, we
Figure 6: Kinetic energy for the two-dimensional turbulent flow [\(k_{\phi}=7,\mathcal{M}=0.5\)].
Figure 7: Energy Spectrum for the two-dimensional turbulent flow [\(k_{\phi}=7,\mathcal{M}=0.5\)].
introduce the physics-constrained convolutional neural network (PC-CNN), which provides the means to compute the physical residual, i.e., the areas of the field in which the physical laws are violated because of the systematic error. Second, we formulate an optimisation problem by leveraging prior knowledge of the underlying physical system to regularise the predictions from the PC-CNN. Third, we numerically test the method on the removal of systematic errors from data produced by three partial differential equations of increasing complexity (linear and nonlinear convection-diffusion, and Navier-Stokes), which shows the ability of the PC-CNN to accurately recover the underlying solutions. Fourth, we carry out a parameterised analysis, which successfully minimises the relative error for a variety of systematic errors with different magnitudes and degrees of multimodality; this shows that the method is robust. Finally, we investigate the physical consistency of the inferred solutions for the two-dimensional turbulent flow. The network predictions satisfy physical properties of the underlying partial differential equation (Navier-Stokes), such as the statistics and energy spectrum. This work opens opportunities for inferring solutions of partial differential equations from sparse measurements. Current work is focused on scaling up the framework to three-dimensional flows, and on dealing with experimental data.
## Acknowledgements
D. Kelshaw and L. Magri acknowledge support from the UK Engineering and Physical Sciences Research Council. L. Magri gratefully acknowledges financial support from the ERC Starting Grant PhyCo 949388.
|
2307.11130 | Frequency-aware optical coherence tomography image super-resolution via
conditional generative adversarial neural network | Optical coherence tomography (OCT) has stimulated a wide range of medical
image-based diagnosis and treatment in fields such as cardiology and
ophthalmology. Such applications can be further facilitated by deep
learning-based super-resolution technology, which improves the capability of
resolving morphological structures. However, existing deep learning-based
method only focuses on spatial distribution and disregard frequency fidelity in
image reconstruction, leading to a frequency bias. To overcome this limitation,
we propose a frequency-aware super-resolution framework that integrates three
critical frequency-based modules (i.e., frequency transformation, frequency
skip connection, and frequency alignment) and frequency-based loss function
into a conditional generative adversarial network (cGAN). We conducted a
large-scale quantitative study from an existing coronary OCT dataset to
demonstrate the superiority of our proposed framework over existing deep
learning frameworks. In addition, we confirmed the generalizability of our
framework by applying it to fish corneal images and rat retinal images,
demonstrating its capability to super-resolve morphological details in eye
imaging. | Xueshen Li, Zhenxing Dong, Hongshan Liu, Jennifer J. Kang-Mieler, Yuye Ling, Yu Gan | 2023-07-20T16:07:02Z | http://arxiv.org/abs/2307.11130v1 | Frequency-aware optical coherence tomography image super-resolution via conditional generative adversarial neural network
###### Abstract
Optical coherence tomography (OCT) has stimulated a wide range of medical image-based diagnosis and treatment in fields such as cardiology and ophthalmology. Such applications can be further facilitated by deep learning-based super-resolution technology, which improves the capability of resolving morphological structures. However, existing deep learning-based methods only focus on spatial distribution and disregard frequency fidelity in image reconstruction, leading to a frequency bias. To overcome this limitation, we propose a frequency-aware super-resolution framework that integrates three critical frequency-based modules (i.e., frequency transformation, frequency skip connection, and frequency alignment) and a frequency-based loss function into a conditional generative adversarial network (cGAN). We conducted a large-scale quantitative study on an existing coronary OCT dataset to demonstrate the superiority of our proposed framework over existing deep learning frameworks. In addition, we confirmed the generalizability of our framework by applying it to fish corneal images and rat retinal images, demonstrating its capability to super-resolve morphological details in eye imaging.
## 1 Introduction
Optical coherence tomography (OCT) is a non-invasive imaging modality that utilizes infrared interferometry to generate depth-resolved reflectivity profiles in real-time [1]. Over the last decades, OCT has stimulated a wide range of medical image-based diagnosis and treatment [2, 3, 4]. For example, in cardiology, OCT is considered a suitable coronary imaging modality to assess plaques and to ensure successful stent deployment [5]. Meanwhile, in ophthalmology, OCT has become one of the prominent diagnostic tools for keratoconus [6], glaucoma [7], age-related macular degeneration [8], retinopathy [9], and diabetic retinopathy and diabetic macular edema [10], identifying layers in both anterior and posterior segments of the eye.
In both coronary imaging and eye imaging, high spatial resolution from OCT, mostly spectral domain OCT (SDOCT), is crucial to facilitate applications such as identifying endothelial cells or assessing the thickness of corneal and retinal layers. However, such high resolution comes at the cost of demanding optical design and data transmission/storage. Improving resolution by upgrading light sources and other hardware designs is resource-intensive, and hardware-based approaches also suffer from jittering and motion artifacts caused by sparse sampling. On the contrary, software-based methods can bypass the hardware upgrading issue and achieve high image quality through computational means.
In the realm of algorithmic super-resolution (SR), various digital signal processing and image processing methods have been developed to generate high-resolution (HR) OCT images from low-resolution (LR) OCT scans that are undersampled in the spectral and/or spatial domain. Conventionally, deconvolution [11, 12], spectrum-shaping [13], and spectral estimation [14] have been proposed to optimize OCT images. However, SR performance has recently been boosted by the introduction of deep learning (DL), especially the combination of convolutional neural network
(CNN) and generative adversarial network (GAN).
Convolutional neural networks have been widely used for OCT image generation [15, 16, 17, 18, 19, 20, 21, 22] to enhance image quality and denoise speckles [1, 17]. In particular, CNN models such as the multi-scale residual network (MSRN), residual dense network (RDN), residual dense Unet (RDU), and residual channel attention network (RCAN) have recently been utilized and compared for generating SR OCT images [21]. Moreover, conditional generative adversarial networks (cGANs) have been incorporated into OCT SR research [23, 24, 25, 17], featuring a discriminator that examines the fidelity of the generated SR image during training, thus enhancing the capability of generating HR images.
However, current DL research on generating SR OCT images focuses solely on the spatial distribution of pixels in B-scans, without consideration of frequency information. The lack of frequency awareness limits SR performance in two respects. Firstly, from a 1D frequency perspective, SDOCT is physically measured in the spectrum and reconstructed in the spatial domain; considering frequency information along the axial direction would increase the fidelity of reconstruction. Secondly, from a 2D image-processing perspective, current DL models exhibit spectral bias, a learning bias towards low-frequency components [26, 27]. As shown in Fig 1, DL algorithms induce frequency domain gaps in SR OCT images compared to the reference HR images, as they fail to resemble the high-frequency components, such as edges and textures of the coronary artery sample. High-frequency components preserve finer details that are beneficial for medical imaging [28]. Therefore, a DL framework with frequency awareness is
Figure 1: Frequency domain gaps between the HR and the SR OCT images generated by four CNN implementations (MSRN, RDN, RDU, RCAN). The spectrum is generated by performing a Fourier transform on the B-scan image. The high-frequency components of images are generated by performing an inverse Fourier transform on the high-frequency parts of the spectrum. Compared to the HR image, SR images generated by existing CNN algorithms are biased towards a limited, low-frequency region of the spectrum. Using our CNN algorithm with frequency awareness (ours), the spectrum and high-frequency components of the SR OCT image are closer to those of the original image. The scale bar represents 500\(\mu\)m.
needed to reduce spectral bias and generate high-quality SR OCT images.
To this end, we develop a frequency-aware deep learning framework for the OCT image SR task, which is capable of restoring high-frequency information via its model design and optimized loss function. We perform extensive experiments on an existing human coronary dataset and quantitatively demonstrate that the proposed frequency-aware DL framework super-resolves OCT images with superior quality and less frequency bias. We also validate the spectral bias of the existing DL algorithms used in generating SR OCT images. Furthermore, we perform qualitative analysis to confirm that our framework is capable of generating SR OCT images for corneal imaging and retinal imaging.
## 2 Methods
### Overall framework
The design of our frequency-aware framework is shown in Fig 2. Our framework consists of a generator (\(G\)) and a discriminator (\(D\)). Generator \(G\) translates an LR image to an SR image. Discriminator \(D\) classifies whether or not the generated image is realistic. Wavelet transformation is utilized to decompose feature maps \(F_{i}\) into different frequency components; frequency skip connection (FSC) is used to prevent the loss of high-frequency information; high-frequency alignment (HFA) is used to guide \(G\) in generating frequency information [29].
### Model design
#### 2.2.1 Wavelet transformation
We adopt the Haar wavelet for decomposing the feature maps of the \(i\)-th layer, \(F_{i}\), into different components. The Haar wavelet consists of two mirror operations: wavelet pooling and wavelet unpooling. Wavelet pooling converts images into the wavelet domain, and wavelet unpooling inversely
Figure 2: The design of the proposed frequency-aware framework for OCT image super-resolution. The proposed model utilizes wavelet transformation, frequency skip connection, and high-frequency alignment to facilitate frequency information for super-resolving OCT images.
reconstructs frequency components into the spatial domain. During wavelet pooling, \(F_{i}\) is convolved with four distinct filters: \(LL^{T}\), \(LH^{T}\), \(HL^{T}\), and \(HH^{T}\), where \(L\) and \(H\) are low- and high-pass filters, respectively (\(L^{T}=\frac{1}{\sqrt{2}}[1,1]\), \(H^{T}=\frac{1}{\sqrt{2}}[-1,1]\)). The low-pass filter (\(LL^{T}\)) captures general shapes and outlines in \(F_{i}\); the high-pass filters (\(LH^{T}\), \(HL^{T}\), \(HH^{T}\)) capture finer details such as segments, edges, and contrasts. An illustration of the wavelet transformation is shown in Fig 2.
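A minimal PyTorch sketch of wavelet pooling and unpooling with the Haar filters is given below; the depthwise-convolution realisation is our assumption of how the operations may be implemented, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def haar_kernels() -> torch.Tensor:
    """Stack the four 2x2 Haar kernels LL^T, LH^T, HL^T, HH^T."""
    l = torch.tensor([1.0, 1.0]) / 2 ** 0.5
    h = torch.tensor([-1.0, 1.0]) / 2 ** 0.5
    return torch.stack([torch.outer(a, b) for a in (l, h) for b in (l, h)])

def wavelet_pool(x: torch.Tensor):
    """Decompose x (B, C, H, W) into (LL, LH, HL, HH), each (B, C, H/2, W/2),
    via a depthwise stride-2 convolution with the Haar kernels."""
    b, c = x.shape[:2]
    k = haar_kernels().to(x).repeat(c, 1, 1).unsqueeze(1)   # (4C, 1, 2, 2)
    out = F.conv2d(x, k, stride=2, groups=c)
    return out.view(b, c, 4, *out.shape[-2:]).unbind(dim=2)

def wavelet_unpool(ll, lh, hl, hh):
    """Exact inverse: the Haar kernels are orthonormal, so a transposed
    convolution with the same filters reconstructs the spatial domain."""
    b, c = ll.shape[:2]
    x = torch.stack([ll, lh, hl, hh], dim=2).flatten(1, 2)  # (B, 4C, h, w)
    k = haar_kernels().to(ll).repeat(c, 1, 1).unsqueeze(1)
    return F.conv_transpose2d(x, k, stride=2, groups=c)
```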
#### 2.2.2 Frequency skip connection
To prevent the loss of high-frequency information from \(F_{i}\) to \(F_{i+1}\), FSC is used in generator \(G\). The FSC in G is defined as:
\[F_{i+1}^{{}^{\prime}}=F_{i+1}+Unpooling\left(LL_{G}^{i},LH_{G}^{i},HL_{G}^{i}, HH_{G}^{i}\right) \tag{1}\]
After the frequency skip connection, the feature map \(F_{i+1}^{{}^{\prime}}\) is obtained with the frequency information of \(F_{i}\) preserved through this process.
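Using the pooling operations sketched above, the FSC of Eq. (1) may be written as follows, assuming \(F_{i}\) and \(F_{i+1}\) share spatial dimensions so that the addition is well-defined.

```python
def frequency_skip_connection(f_i: torch.Tensor,
                              f_next: torch.Tensor) -> torch.Tensor:
    """Eq. (1): inject the wavelet-round-tripped frequency content of F_i
    into F_{i+1}, preserving high-frequency information across layers."""
    ll, lh, hl, hh = wavelet_pool(f_i)
    return f_next + wavelet_unpool(ll, lh, hl, hh)
```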
#### 2.2.3 High-frequency alignment
High-frequency alignment (HFA) provides \(G\) with a self-supervised learning scheme using frequency information acquired in \(D\). For \(F_{i}\) in G, we acquire \(LL_{G}^{i}\), \(LH_{G}^{i}\), \(HL_{G}^{i}\), and \(HH_{G}^{i}\). The combination of high-frequency components in \(G\) is defined by: \(HF_{G}^{i}=LH_{G}^{i}+HL_{G}^{i}+HH_{G}^{i}\). Similarly, high-frequency components in \(D\) can be acquired by \(HF_{D}^{i}=LH_{D}^{i}+HL_{D}^{i}+HH_{D}^{i}\). The \(HF_{D}^{i}\) can be used as the self-supervision constraint to train \(G\).
### Loss function
In the proposed frequency-aware framework, we incorporate a modified focal frequency loss (FFL) that quantifies the distance between HR and SR OCT images in the frequency domain [30]. The FFL is defined as:
\[FFL=\frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}w(u,v)|F_{SR}(u,v)-F_{HR}(u,v) |^{2} \tag{2}\]
\(F_{SR}\) and \(F_{HR}\) denote the frequency representations of the SR and HR OCT images acquired by the Discrete Fourier Transform (DFT); \(M\) and \(N\) represent the image size; and \(w(u,v)\) is the spectrum weight matrix defined by:
\[w(u,v)=|F_{SR}(u,v)-F_{HR}(u,v)|^{\alpha} \tag{3}\]
where \(\alpha\) is the scaling factor for flexibility (set to 1 in our experiments). In [30], the \(F_{SR}\) and \(F_{HR}\) are acquired by 2D DFT. However, the OCT images are acquired by 1D A-line scanning. Thus, we modify the FFL by acquiring \(F_{SR}\) and \(F_{HR}\) using 1D DFT. The original FFL is denoted as \(FFL_{2D}\) and the modified FFL is denoted as \(FFL_{1D}\). The loss function \(L\) of the proposed frequency-aware model is defined as:
\[\begin{split}L(G,D,L1,FFL_{1D},FFL_{2D},L_{align})=aL_{adv}(G,D)+bL1(SR,HR)\\ +cFFL_{1D}(SR,HR)+dFFL_{2D}(SR,HR)+eL_{align}(G,D)\end{split} \tag{4}\]
The \(L_{adv}\) stands for the adversarial loss; the \(L1\) stands for the mean absolute error; the \(L_{align}\) stands for the distance of high-frequency information between the HR and SR OCT images:
\[L_{align}=\sum_{i=1}^{3}|HF_{D}^{i}-HF_{G}^{i}| \tag{5}\]
The factors \(a\), \(b\), \(c\), \(d\), and \(e\) are set to 1, 10, 1, 1, and 1, respectively. We aim to solve the following min-max optimization problem:
\[G^{*}=\arg\min_{G}\max_{D}L(G,D,L1,FFL_{1D},FFL_{2D},L_{align}) \tag{6}\]
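A sketch of the loss terms is shown below. The spectrum weight is treated as a non-differentiable matrix, and `hf_g`/`hf_d` are assumed to be lists of per-layer high-frequency maps from the generator and discriminator; both are our assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def focal_frequency_loss(sr, hr, dims, alpha=1.0):
    """Eqs. (2)-(3): weighted frequency-domain distance. dims=(-2,) gives
    the 1D (A-line) variant; dims=(-2, -1) gives the 2D variant."""
    diff = torch.abs(torch.fft.fftn(sr, dim=dims) - torch.fft.fftn(hr, dim=dims))
    w = diff.detach() ** alpha            # spectrum weight matrix w(u, v)
    return (w * diff ** 2).mean()

def total_loss(sr, hr, adv_loss, hf_g, hf_d, a=1, b=10, c=1, d=1, e=1):
    """Eq. (4): weighted sum of adversarial, L1, frequency, and alignment terms."""
    align = sum(torch.abs(d_map - g_map).mean()          # Eq. (5)
                for g_map, d_map in zip(hf_g, hf_d))
    return (a * adv_loss
            + b * F.l1_loss(sr, hr)
            + c * focal_frequency_loss(sr, hr, dims=(-2,))
            + d * focal_frequency_loss(sr, hr, dims=(-2, -1))
            + e * align)
```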
## 3 Results
### Data collection
We perform a large-scale quantitative analysis on an existing coronary image dataset. The dataset was imaged using a commercial OCT system (Thorlabs Ganymede, Newton, NJ) [21]. The specimens were obtained in compliance with ethical guidelines and regulations set forth by the Institutional Review Board (IRB), with de-identification to ensure the privacy of the subjects. A total of 2996 OCT images were obtained from 23 specimens, with a depth of 2.56 mm and a pixel size of 2 \(\mu\)m x 2 \(\mu\)m within a B-scan. The width of the images varied from 2 mm to 4 mm depending on the size of the specimen.
In addition to the large-scale coronary dataset, we also confirmed the generalizability of the proposed method on two small datasets: one of _ex vivo_ fish corneas and the other of _in vivo_ rat retinas. Two fish corneal OCT images were obtained from the same Thorlabs OCT system following the same imaging protocol as the coronary imaging. Fifty rat retinal images were obtained from a Heidelberg Spectralis SDOCT system. The system has an axial resolution of 7 \(\mu\)m and a lateral resolution of 14 \(\mu\)m, and the maximum field of view is 9 mm x 9 mm. The animal imaging procedure was in accordance with protocols approved by the Institutional Animal Care and Use Committee at the Stevens Institute of Technology, and with the principles embodied in the statement on the use of animals in ophthalmic and vision research adopted by the Association for Research in Vision and Ophthalmology. The details of the experimental procedure are described in [31].
### Experimental setup
The LR OCT images for the coronary and eye datasets were generated by cropping the spectrum data, since reducing the bandwidth of the spectrum decreases the optical resolution of an OCT system. We kept \(\frac{1}{2}\), \(\frac{1}{3}\), and \(\frac{1}{4}\) (denoted as X2, X3, and X4, respectively) of the raw spectrum data by central cropping, and applied a Hanning window to filter the cropped spectrum. Next, the filtered spectrum data were processed by FFT to obtain complex OCT data, whose magnitude was converted to dB scale. Background subtraction was performed to remove noise in the OCT data. The LR OCT images were used as the inputs to the DL networks. The OCT images were randomly shuffled into five folds for cross-validation.
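A sketch of this LR-generation pipeline is given below; the dB conversion and background-subtraction conventions are assumptions, as the paper does not state them explicitly.

```python
import numpy as np

def make_lr_bscan(raw_spectrum: np.ndarray, factor: int = 2) -> np.ndarray:
    """Generate an LR B-scan from a raw interferogram (n_k wavenumber
    samples x n_a A-lines) by keeping 1/factor of the spectral bandwidth."""
    n_k = raw_spectrum.shape[0]
    keep = n_k // factor
    start = (n_k - keep) // 2                           # central crop
    cropped = raw_spectrum[start:start + keep]
    filtered = cropped * np.hanning(keep)[:, None]      # Hanning window
    a_lines = np.fft.fft(filtered, axis=0)              # complex OCT data
    img_db = 20.0 * np.log10(np.abs(a_lines) + 1e-12)   # dB scale
    background = img_db.mean(axis=1, keepdims=True)     # assumed estimator
    return img_db - background                          # background subtraction
```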
### Network implementation
We implemented our frequency-aware model in PyTorch. For downsampling layers, we used 2D convolutional layers with a stride of 2, followed by an instance normalization layer and a LeakyReLU activation with a negative slope of 0.2. For upsampling layers, we used 2D transposed convolutional layers with a stride of 2, followed by the same instance normalization and LeakyReLU activation. We used 16 residual blocks as the bottleneck, each containing 2 convolutional layers. The implementations of the previous DL algorithms (MSRN, RDN, RDU, RCAN) were inspired by the designs in [21]; we implemented these algorithms in a GAN architecture.
### Training details
The image intensities were normalized to a range of [0,1]. The training protocol was performed five times for cross-fold validation. We randomly sampled 16 non-overlapping LR patches of
size 64 x 64 pixels as input during training. The normalized images were augmented through random flipping to prevent overfitting. The optimization routine was carried out using the Adam algorithm, with an initial learning rate of \(10^{-4}\). A total of 200 epochs were executed to ensure convergence. The training process utilized one RTX A6000 GPU, and each training run on a single data fold took approximately 2 hours.
### Evaluation metrics
We used peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [32] to measure the quality of SR images. The PSNR calculates pixel-wise differences between the HR image and the SR image, which is defined as:
\[PSNR=10\log_{10}(\frac{255^{2}}{MSE}) \tag{7}\]
The MSE represents the cumulative squared error between the HR and SR OCT images:
\[MSE=\frac{\sum_{m=1}^{M}\sum_{n=1}^{N}|HR(m,n)-SR(m,n)|^{2}}{M*N} \tag{8}\]
The SSIM focuses on structural similarity between the HR image and the SR image, which is defined as:
\[SSIM=\frac{(2\mu_{HR}\mu_{SR}+c_{1})(2\sigma_{HR,SR}+c_{2})}{(\mu_{HR}^{2}+\mu_ {SR}^{2}+c_{1})(\sigma_{HR}^{2}+\sigma_{SR}^{2}+c_{2})} \tag{9}\]
where \(\mu_{HR}\) is the pixel mean of the HR image; \(\mu_{SR}\) is the pixel mean of the SR image; \(\sigma_{HR}\) is the variance of HR; \(\sigma_{SR}\) is the variance of SR; \(\sigma_{HR,SR}\) is the covariance of HR and SR; and \(c_{1}=(0.01*255)^{2}\) and \(c_{2}=(0.03*255)^{2}\) are two variables that stabilize the division with a weak denominator.
Figure 3: Frequency analysis of the SR OCT images generated from LR OCT data acquired using factors of X2, X3, and X4. Compared to existing methods, our frequency-aware model is capable of super-resolving OCT images with less spectrum bias, which is confirmed by frequency analysis.
To evaluate the frequency difference, we define a frequency-level metric, namely Scaled Frequency Distance (SFD), which is defined as:
\[SFD=\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}\left|\frac{|F_{SR}(u,v)|-|F_{HR}(u,v)|}{|F_{HR}(u,v)|}\right| \tag{10}\]
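Eq. (10) can be computed directly from the 2D spectra; a minimal sketch follows, where the small epsilon guarding the division is our addition.

```python
import numpy as np

def scaled_frequency_distance(sr: np.ndarray, hr: np.ndarray,
                              eps: float = 1e-12) -> float:
    """Eq. (10): summed relative magnitude difference between the SR and HR
    spectra."""
    f_sr = np.abs(np.fft.fft2(sr))
    f_hr = np.abs(np.fft.fft2(hr))
    return float(np.sum(np.abs((f_sr - f_hr) / (f_hr + eps))))
```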
### Analysis on spectral bias
We perform frequency analysis to evaluate the spectral bias of our frequency-aware model and other DL algorithms. We apply 2D DFT to the HR and SR OCT images, after which we average
| Method | PSNR↑ (X2) | SSIM↑ (X2) | SFD↓ (X2) | PSNR↑ (X3) | SSIM↑ (X3) | SFD↓ (X3) | PSNR↑ (X4) | SSIM↑ (X4) | SFD↓ (X4) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MSRN | 30.2094 | 0.8484 | 0.3765 | 24.2997 | 0.4829 | 0.8761 | 23.6034 | 0.4216 | 0.7565 |
| RDU | 29.1010 | 0.7950 | 0.6737 | 23.8777 | 0.4628 | 0.8376 | 23.5086 | 0.4246 | 0.9369 |
| RDN | 30.3321 | 0.8496 | 0.3503 | 23.7528 | 0.4635 | 0.6560 | 23.7746 | 0.4371 | 0.6977 |
| RCAN | 29.5280 | 0.8261 | 0.3918 | 23.5904 | 0.4596 | 0.7638 | 22.2740 | 0.3462 | 0.8121 |
| Ours | **30.4713** | **0.8519** | **0.3273** | **25.7500** | **0.5904** | **0.4669** | **24.2867** | **0.4424** | **0.5287** |

Table 1: PSNR, SSIM, and SFD results of OCT images reconstructed by MSRN, RDU, RDN, RCAN, and our frequency-aware model. Bold indicates the best performance. All results are averaged over five-fold cross-validation.
Figure 4: Generating SR OCT images of stent structure from LR image acquired using a factor of X4. The corresponding histology image is attached. Our model resolves the boundary between the stent and tissue with better details due to its frequency-awareness design. ROIs are marked by red rectangles. The scale bar represents 100\(\mu\)m.
the logarithm of the intensities for each A-line and plot the intensity values over the pixels. The frequency analysis is carried out by averaging the spectra of the SR OCT images. The results are reported in Fig 3. As shown in Fig 3 (a), our frequency-aware model generates SR images with averaged spectra that are similar to those of the HR images. The summed intensities over pixels, as shown in Fig 3 (b), confirm that our frequency-aware model is less biased in spectrum distribution compared with other DL algorithms. In contrast, existing DL algorithms generate SR OCT images with spectral bias in an unstable manner, as confirmed by Fig 3.
### Quantitative analysis on super-resolution performance
We compare the quantitative performance of our frequency-aware model to other DL algorithms. As shown in Table 1, our frequency-aware model generates SR OCT images with better PSNR, SSIM, and SFD scores compared to other deep learning algorithms. Together with Fig 3, we confirm our frequency-aware model generates SR OCT images with better spatial and frequency properties compared to other DL algorithms.
In Fig 4, a case of super-resolving an LR OCT image of a stent within the coronary artery is demonstrated. Coronary stent placement is an established treatment for coronary artery disease (CAD) [33]. Imaging microstructures and tissues adjacent to stent struts is crucial in the clinic: it is critical to provide accurate morphological information on interactions between the stent and the vessel wall, for the purpose of evaluating the placement as well as the biocompatibility of the stent. The edges of the stent constitute high-frequency information in the OCT images, which is challenging for previous DL algorithms to reconstruct. As shown in Fig 4, previous DL algorithms generate blurred edges of the stent. Moreover, existing DL algorithms lead to artifacts at the interaction
Figure 5: Generating SR OCT image of suspicious macrophage regions from LR image acquired using a factor of X4. The amplitude of intensities of HR and SR images is attached. Our model resolves the accumulations of macrophages (indicated by the red arrows) without blurring effects. ROIs are marked by red rectangles. The scale bar represents 100\(\mu\)m.
between the stent and tissue, as shown in Fig 4 (e), (f), (g). With our frequency-aware model, the edges of the stent are resolved with detailed information that is similar to that of the HR image.
In Fig 5, we demonstrate a case of super-resolving an LR OCT image with a suspicious accumulation of macrophages. Macrophages play a critical role in both the development and rupture of atherosclerotic plaques [34], and are thus important for the diagnosis of CAD. OCT has been demonstrated to be a viable technique for visualizing the accumulation of macrophages in the human coronary artery. Macrophages appear as 'bright spots' in OCT images [35], constituting high-frequency information due to their sharp contrast with neighboring tissues. As shown in Fig 5, previous DL algorithms generate SR images with blurred macrophages, which can deteriorate clinical diagnosis procedures. In contrast, our frequency-aware framework generates SR OCT images with clear macrophages. Thus, our frequency-aware model is capable of providing clinically meaningful SR OCT images of human coronary samples.
### Application to super-resolve anterior segments of fish eye
Based on the setup in coronary imaging, we perform additional experiments on the fish cornea using the frequency-aware framework trained in the previous section. We acquired the fish corneal OCT images using the same OCT system and imaging settings as the coronary dataset. We acquired three volumes of left and right fish eyes, and three representative OCT B-scans are used for the qualitative studies. The qualitative analysis of SR OCT images of the fish cornea is shown in Fig 6. In particular, the dashed circle in the first panel shows that the alignment of the corneal stroma is better resolved after super-resolution. The dashed circle in the second panel highlights the iris region underneath the cornea. The dashed circle in the third panel shows the resolved Bowman's layer in the corneal region. Overall, our frequency-aware framework is capable of generating SR OCT fish corneal images with sharper and finer textures. Without retraining, our frequency-aware framework has the potential to be transferable to OCT corneal images obtained from the same OCT system.
Figure 6: Generating SR OCT images of anterior segments in fish eyes from LR images acquired using a factor of X3. ROIs are marked by red rectangles. The textures are highlighted by the dashed cycles. The scale bar represents 500\(\mu\)m.
### Application to super-resolve posterior segments of rat eye
We also conduct experiments on imaging posterior segments in an animal model. A pigmented Long Evans rat from Charles River was used for OCT imaging. Retinal imaging requires a different optical design from the benchtop OCT used in coronary imaging and corneal imaging; the OCT images were acquired using a Heidelberg Spectralis system. We retrained the frequency-aware framework using 96% of the images and used the rest of the images for testing. The SR OCT images of rat eyes generated by our frequency-aware framework are shown in Fig 7. In the first panel, the SR OCT image delineates the boundary around the optic disc; in the second panel, the SR OCT image better resolves the layer boundaries within retinal regions (for example, the inner nuclear layer). This experiment shows that our proposed frequency-aware framework, with adequate retraining, has the potential to generalize to OCT retinal images obtained from different OCT systems.
## 4 Discussion
To the best of our knowledge, this is the first study in the OCT community to propose a frequency-aware framework for super-resolution. We design the proposed framework by modifying the convolutional model architecture and loss functions to improve frequency awareness. This is based on our investigation of the spectral bias towards low-frequency components of the spectrum in existing studies. Our frequency-aware model generates SR OCT images with less spectral bias and better performance compared to existing frameworks, and it is capable of generating clinically meaningful SR OCT images, as confirmed by qualitative analysis.
Another contribution lies in generalizability. Our preliminary study indicates great potential for application to multiple tissue types. We perform qualitative experiments on additional fish eye and rat eye datasets. Without retraining, our frequency-aware framework resolves anterior segments of the fish eye, including the corneal stroma, the iris region, and Bowman's layer, acquired from the same OCT system. With adequate retraining, our frequency-aware framework is capable of resolving LR OCT images acquired from different systems: in the qualitative analysis of a rat eye dataset acquired from a different OCT system, we resolve the boundary around the optic disc and the boundaries within retinal regions.
Figure 7: Generating SR OCT images of posterior segments in rat eyes from LR images acquired using a factor of X3. ROIs are marked by red rectangles. The textures are highlighted by the white arrows. The scale bar represents 500\(\mu\)m.
As an exploratory study on methodology development, this study, especially the eye imaging, is based on healthy samples. In the future, we plan to validate our super-resolution framework on pathological animal models to examine how much the improved resolution could facilitate diagnosis and treatment in ophthalmology. Moreover, in addition to SDOCT, we plan to validate our super-resolution framework on swept-source OCT systems, in which the signal is also acquired in the spectral domain.
## 5 Conclusion
In this paper, we investigate the spectral bias of existing DL algorithms when generating SR OCT images. To mitigate the spectral bias, we develop a frequency-aware model that combines a cGAN with frequency-based losses to super-resolve LR OCT images. Compared to existing DL algorithms, our approach produces SR OCT images with less spectral bias, resulting in better preservation of textures. Additionally, our method generates SR coronary OCT images of superior quality, with higher PSNR and SSIM scores as well as lower SFD scores. Our frequency-aware framework demonstrates the capability of generating SR OCT coronary images that support better diagnosis and treatment. Moreover, our study also indicates the ability of the proposed framework to generalize to corneal imaging and retinal imaging.
## 6 Funding
This work was supported in part by National Science Foundation (CRII-1948540 and CAREER-2239810, YG), New Jersey Health Foundation (YG), National Institute of Health (R01EY029298 and R01EY032222, JJKM).
## 7 Disclosures
The authors declare no conflicts of interest.
## 8 Data availability statement
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
## 9 Acknowledgement
The authors would like to thank Chaimae Gouya, Mohammed Attia, and Aaron Shamouil for their assistance in OCT data acquisition.
|
2305.12240 | Bridging Active Exploration and Uncertainty-Aware Deployment Using
Probabilistic Ensemble Neural Network Dynamics | In recent years, learning-based control in robotics has gained significant
attention due to its capability to address complex tasks in real-world
environments. With the advances in machine learning algorithms and
computational capabilities, this approach is becoming increasingly important
for solving challenging control problems in robotics by learning unknown or
partially known robot dynamics. Active exploration, in which a robot directs
itself to states that yield the highest information gain, is essential for
efficient data collection and minimizing human supervision. Similarly,
uncertainty-aware deployment has been a growing concern in robotic control, as
uncertain actions informed by the learned model can lead to unstable motions or
failure. However, active exploration and uncertainty-aware deployment have been
studied independently, and there is limited literature that seamlessly
integrates them. This paper presents a unified model-based reinforcement
learning framework that bridges these two tasks in the robotics control domain.
Our framework uses a probabilistic ensemble neural network for dynamics
learning, allowing the quantification of epistemic uncertainty via Jensen-Renyi
Divergence. The two opposing tasks of exploration and deployment are optimized
through state-of-the-art sampling-based MPC, resulting in efficient collection
of training data and successful avoidance of uncertain state-action spaces. We
conduct experiments on both autonomous vehicles and wheeled robots, showing
promising results for both exploration and deployment. | Taekyung Kim, Jungwi Mun, Junwon Seo, Beomsu Kim, Seongil Hong | 2023-05-20T17:20:12Z | http://arxiv.org/abs/2305.12240v2 | Bridging Active Exploration and Uncertainty-Aware Deployment Using Probabilistic Ensemble Neural Network Dynamics
###### Abstract
In recent years, learning-based control in robotics has gained significant attention due to its capability to address complex tasks in real-world environments. With the advances in machine learning algorithms and computational capabilities, this approach is becoming increasingly important for solving challenging control problems in robotics by learning unknown or partially known robot dynamics. Active exploration, in which a robot directs itself to states that yield the highest information gain, is essential for efficient data collection and minimizing human supervision. Similarly, uncertainty-aware deployment has been a growing concern in robotic control, as uncertain actions informed by the learned model can lead to unstable motions or failure. However, active exploration and uncertainty-aware deployment have been studied independently, and there is limited literature that seamlessly integrates them. This paper presents a unified model-based reinforcement learning framework that bridges these two tasks in the robotics control domain. Our framework uses a probabilistic ensemble neural network for dynamics learning, allowing the quantification of epistemic uncertainty via Jensen-Renyi Divergence. The two opposing tasks of exploration and deployment are optimized through state-of-the-art sampling-based MPC, resulting in efficient collection of training data and successful avoidance of uncertain state-action spaces. We conduct experiments on both autonomous vehicles and wheeled robots, showing promising results for both exploration and deployment.
## I Introduction
Learning-based robotic control has been an increasingly active research topic in recent decades, pushing the boundary
of robotic control tasks with new capabilities through model-free reinforcement learning [1, 2, 3, 4], model-based reinforcement learning [5, 6, 7, 8, 9, 10, 11], and robot dynamics learning [12, 13, 14, 15]. There are two key aspects that are commonly taken into account while deploying an intelligent robot. The first is to assure safety, and the second, if at all possible, is to achieve maximum performance.
Model-based methods have been regarded as advantageous in robotic applications over purely model-free strategies for various reasons. Firstly, sample efficiency is essential, since real-world samples are highly expensive in terms of time, labor, and finances [16]. Secondly, it is generally more intuitive for humans to incorporate prior knowledge into a model than into a policy or value function [14, 17]. Lastly, models are task-agnostic and may thus be utilized to optimize arbitrary cost functions, whereas the majority of model-free policies are bound to a specific task.
These benefits have led to recent control paradigms that incorporate dynamics learning into a model-based control framework, outperforming conventional methods that rely on manually tuned models based on human insights into physics [10, 18, 19]. This scheme may also be described as a model-based reinforcement learning problem, as the model is continuously trained to provide a more accurate representation of the system, while an optimization-based controller serves as the policy. Such a control framework possesses the inherent flexibility to adjust dynamically during execution, for instance, to changing target speeds or actuator limits. The Gaussian Process (GP) is a typical regression method in model learning problems due to its advantageous features, such as the ability to cope with small data samples and to incorporate prior knowledge [6, 20]. However, it does not scale well to large datasets or to high-dimensional systems. To mitigate this issue, recent robotics studies have benefited from neural networks owing to their universal function approximation properties and high model capacity.
The limitation of these model-based control frameworks is that the accuracy of the learned model drastically affects the control performance [21]. However, the real world is fraught with uncertainty, making the model learning problem complex and challenging. There are two primary viewpoints within the robotics community that tackle this inevitable uncertainty in opposing ways.
A natural solution would be to prevent the robot from entering uncertain state-action spaces, thereby evading unpredictable motions. This is a straightforward strategy, given that an inaccurate prediction of the dynamics model might result in substantial losses in control performance and, in the worst case, instability. We refer to such an uncertainty-averse strategy as _uncertainty-aware deployment_ (or safety-aware deployment); it is usually applied during execution, after the robot's training is complete.
The exact opposite of the above strategy is to deliberately visit unexplored state-action spaces that provide high uncertainty. The hypothesis is that by trying to explore uncertain segments of the learned model, it can learn more precise representation of the system with fewer samples and discover behaviors with higher performance. Such uncertainty-seeking strategy is known as _active exploration_, and it is considered during the early phase of training data exploration.
We argue that these opposite yet analogous strategies for addressing uncertainty must be pursued concurrently in a learning-based control framework. No matter how much data is gathered through active exploration, the uncertainty that has not been eliminated must still be monitored and avoided throughout robot deployment. Conversely, when we use the model uncertainties for risk-averse planning, they are only meaningful if the model is trained with the right data; otherwise, the controller will produce erroneous motions. Nevertheless, these two approaches have largely been studied independently, and there is little literature that integrates them in a seamless manner.
This paper introduces a unified model-based reinforcement learning framework in a robot dynamics learning setting, seamlessly bridging active exploration and uncertainty-aware deployment to fulfill both safety and maximum performance. Fig. 1 shows that the proposed framework consists of two phases. In _exploration phase_, a parallelized ensemble neural network serves as the robot dynamics and outputs the estimated posterior distribution of the next state. In _deployment phase_, the neural network dynamics trained during the active exploration phase is applied directly to perform uncertainty-aware control. Both tasks are optimized using the state-of-the-art sampling-based Model Predictive Control (MPC), which, owing to its property, allows the insertion of arbitrary cost functions after training.
In summary, the contributions of this paper are as follows:
* We introduce a fully parallelized probabilistic ensemble neural network that is sensitive to uncertainty in order to learn robot dynamics.
* We separate epistemic uncertainty from model disagreement for active exploration.
* We transfer the neural network dynamics for uncertainty-aware deployment with minimal modification.
* We conduct extensive experiments with our framework on both autonomous vehicles and wheeled robots, outperforming the compared methods by a large margin.
## II Related Works
### _Model Learning with Uncertainty_
A regression model capable of accurately capturing the nonlinear dynamics is essential for model-based control in uncertain environments with nonlinearity. Data-driven models approximating the robot's forward dynamics have been extensively studied in numerous prior research, including GP [6, 20], time-varying linear models [22], set-membership identification [23, 24], locally weighted projection regression [25, 26], etc. GP has long been favored in dynamics learning given its simplicity and its inherent nature to account for both types of uncertainty: aleatoric uncertainty and epistemic uncertainty.
Aleatoric uncertainty arises from intrinsic stochasticities present in a system, such as observation noise and process
noise. Epistemic uncertainty, however, originates from a lack of sufficient data to uniquely determine the underlying characteristics of the target system in an approximated model. An ideal model trained with an infinite number of data may have zero epistemic uncertainty, but it is not feasible to eradicate it completely in a realistic setting.
Neural networks are known to be tractable for large training datasets while assuring constant-time inference, and they have the potential to learn highly complex and even non-smooth robot dynamics. The recent breakthrough in the field of dynamics learning is that neural networks may also capture both types of uncertainty by using an ensemble of probabilistic neural networks [27]. Vlastelica et al. [28] proposed to quantify epistemic uncertainty by taking empirical variances over the means and variances of the propagated posterior distribution of each ensemble member. We investigate further to determine a more exact epistemic uncertainty in order to provide a better exploration bonus (which is discussed in Section IV-A).
### _Active Exploration_
Human-supervised data collection is time-consuming and can be biased by the fact that manual control of the robot may not encompass all of its possible motions. As a remedy, it was suggested that the agent should plan exploration by directing itself to the states that yield the largest information gain, therefore performing active exploration [29]. In general, the measured model uncertainty is leveraged as the exploration motivation to minimize the required amount of data to gather [30].
We argue that the exploration strategy should be _imaginary_ and _online_. If the agent is _reactive_ to immediate curiosity [31], which is the opposite to being _imaginary_, it disregards the long-term effects of control actions to the system. Consequently, it must struggle in realms of underactuated continuous control [32]. In addition, since most robotic systems are intrinsically stochastic, the real trajectories would quickly diverge from the planned ones. Therefore, it is desirable to plan future trajectory _online_ at every time step, rather than relying on the optimized trajectory found _offline_ as in [32].
Several studies address model-based active exploration based on the above hypothesis. Examples of this line of research include the work in [33, 34, 35], in which model-free policies are employed to maximize expected information gain in a specific task. In [36], GP is used for model learning, while a gradient-based MPC handles the information gain acquired from the GP model. In this work, we adopt sampling-based MPC as a policy and demonstrate that it is scalable to handle uncertainty derived from neural networks.
### _Safe Deployment_
Uncertainty-averse deployment has been addressed as a major concern throughout robotic control history, and there is a growing interest in its application to model-based reinforcement learning. The uncertain actions informed by the learned model may result in unstable motions or even catastrophic failure.
Kahn et al. [9] incorporated the estimated uncertainty as a cost to learn collision avoidance. Yu et al. [37] penalized uncertainty of the dynamics in an offline reinforcement learning setting to resolve the distributional shift issue. Wang et al. [38] treated trajectories having greater uncertainty than a certain threshold as constraint conditions in an application of rough terrain navigation. Similar to [38], we penalize uncertainty with both soft and hard costs to maximize the control performance while ensuring safety.
## III Methods
### _Probabilistic Ensemble Neural Network_
Consider a general continuous-time system dynamics \(\dot{\mathbf{x}}_{t+1}=\mathbf{F}(\mathbf{x}_{t},\mathbf{u}_{t})\), where \(\mathbf{F}\) is a nonlinear function, and \(\mathbf{x}_{t}\in\mathbb{R}^{n}\) and \(\mathbf{u}_{t}\in\mathbb{R}^{m}\) are the observed state vector and applied action input at time \(t\), respectively. To account for both aleatoric and epistemic uncertainty, we use a Probabilistic Ensemble Neural Network (PENN) to approximate the unknown dynamics \(\mathbf{F}\) in a data-driven manner [27, 39].
Each neural network predictive model \(\mathbf{E}_{b}\) of the ensemble members outputs a Gaussian distribution conditioned on the model input:
\[\mathbf{E}_{b}(\mathbf{x}_{t},\mathbf{u}_{t};\theta_{b})=\mathcal{N}(\mathbf{ \mu}_{\theta_{b}}(\mathbf{x}_{t},\mathbf{u}_{t}),\mathbf{\Sigma}_{\theta_{b}}( \mathbf{x}_{t},\mathbf{u}_{t}))\,, \tag{1}\]
where \(\mathbf{\mu}\) and \(\mathbf{\Sigma}\) are the mean and the diagonal covariance parameterized by the model parameter \(\theta_{b}\). Given a collected dataset of transitions \(\mathcal{D}\triangleq\{(\mathbf{x}_{t},\mathbf{u}_{t},\dot{\mathbf{x}}_{t+1}) \}_{t=0}^{N-1}\) with the size of \(N\), the probabilistic model can be trained with the Gaussian Negative Log-Likelihood (NLL) loss function:
\[\begin{split}\mathcal{L}_{\text{train}}(\theta_{b})=\sum_{t=0}^{N -1}\big{[}\mathbf{\mu}_{\theta_{b}}-\dot{\mathbf{x}}_{t+1}\big{]}^{\top}\mathbf{ \Sigma}_{\theta_{b}}^{-1}\big{[}\mathbf{\mu}_{\theta_{b}}-\dot{\mathbf{x}}_{t+1} \big{]}\\ +\log\left[\det\mathbf{\Sigma}_{\theta_{b}}\right]\,.\end{split} \tag{2}\]
Note that random initialization is applied individually to each ensemble member, resulting in mutually independent initial weights. Consequently, the only difference between the ensemble members is their initial model parameters.
Consequently, the stochastic dynamics function predicting the next state can be interpreted as a Gaussian Mixture Model (GMM) of the \(B\) ensemble members:
\[\mathbf{F}(\theta_{1:B})=\sum_{b=1}^{B}\pi_{b}\mathbf{E}_{b}(\mathbf{x}_{t}, \mathbf{u}_{t};\theta_{b})\,,\quad 0\leq\pi_{b}\leq 1\,. \tag{3}\]
Here, we simply use equal weights \(\forall\pi_{b}=\frac{1}{B}\). Then, the next state of the system during iterative trajectory rollouts can be defined as:
\[\mathbf{x}_{t+1}=\mathbf{x}_{t}+\mathbb{E}\left[\mathbf{E}_{b}(\mathbf{x}_{t}, \mathbf{u}_{t};\theta_{b})\right]\Delta t\,,\quad b\sim\{1,\dots,B\}\,. \tag{4}\]
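A minimal PyTorch sketch of one ensemble member and the NLL objective of Eq. (2) is shown below; the hidden sizes, activation, and log-variance clamping are illustrative assumptions, not the paper's stated architecture.

```python
import torch
import torch.nn as nn

class ProbabilisticDynamics(nn.Module):
    """One ensemble member E_b: a diagonal Gaussian over the state derivative."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * state_dim),        # mean and log-variance
        )

    def forward(self, x: torch.Tensor, u: torch.Tensor):
        mu, log_var = self.net(torch.cat([x, u], dim=-1)).chunk(2, dim=-1)
        return mu, log_var.clamp(-10.0, 5.0)          # keep variances bounded

def gaussian_nll(mu, log_var, target):
    """Eq. (2) for a diagonal covariance (additive constants dropped)."""
    return ((mu - target) ** 2 * torch.exp(-log_var) + log_var).sum(-1).mean()
```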
### _Model Predictive Active Exploration_
To boost the model learning process, we introduce an active exploration strategy in the _training phase_. We encourage the robot to seek automatically to visit currently unforeseen state-action pairs for training data. To this end, ensemble model disagreement \(D\) can be used as exploration bonuses [33]. One
possible solution is to measure the empirical variance over the predictive samples \(\tilde{\mathbf{x}}_{t+1}\) from the posterior distribution of each ensemble member:
\[\mathfrak{U}(\mathbf{x}_{t},\mathbf{u}_{t})=\mathrm{Var}\left(\left\{\tilde{ \mathbf{x}}_{t+1}\sim\mathbf{E}_{b}(\mathbf{x}_{t},\mathbf{u}_{t})\right\}_{b= 1}^{B}\right), \tag{5}\]
which is referred as _uncertainty sampling_. However, this method cannot distinguish epistemic uncertainty from aleatoric uncertainty. Importantly, the robot should only be inquisitive about the epistemic uncertainty and not the aleatoric uncertainty; otherwise, it will be obscure if this exploration bonus resulted from a lack of knowledge or from inherent and irreducible system stochasticity, such as model ambiguity or sensor noise [32].
Kullback-Leibler (KL) divergence is the most well-known metric for measuring the disagreement between distributions to obtain epistemic uncertainty. However, KL divergence is asymmetric and can only be used to determine the distance between two distributions. Therefore, we employ Jensen-Renyi Divergence (JRD)
\[\mathrm{JRD}\left(\mathbf{E}_{1:B};\alpha\right)\triangleq H_{\alpha}\left( \sum_{b=1}^{B}\pi_{b}\mathbf{E}_{b}\right)-\sum_{b=1}^{B}\pi_{b}H_{\alpha} \left(\mathbf{E}_{b}\right), \tag{6}\]
where the Renyi entropy [40] of a random variable \(X\) and its corresponding density function \(p(x)\) is defined as:
\[H_{\alpha}(X\mid x\in X)=\frac{1}{1-\alpha}\log\int p(x)^{\alpha}\mathrm{d}x\,. \tag{7}\]
While an approximation of the JRD can be obtained via Monte-Carlo sampling, it is computationally expensive, thereby precluding real-time implementation. To circumvent this problem, Wang et al. [41] introduce a closed-form JRD with quadratic entropy (\(\alpha=2\)) for analytical measurement of the divergence of a GMM:
\[\begin{split} D(\mathbf{x}_{t},\mathbf{u}_{t};\mathbf{E}_{1:B})=- \log&\Bigg{[}\frac{1}{B^{2}}\sum_{i,j}^{B}\mathfrak{D}\left( \mathbf{E}_{i},\mathbf{E}_{j}\right)\Bigg{]}+\\ &\frac{1}{B}\sum_{i}^{B}\log\left[\mathfrak{D}\left(\mathbf{E}_{i },\mathbf{E}_{i}\right)\right],\end{split} \tag{8}\]
where
\[\mathfrak{D}\left(\mathbf{E}_{i},\mathbf{E}_{j}\right)=\frac{1}{|\mathbf{ \Phi}|^{\frac{1}{2}}}\exp\left(-\frac{1}{2}\mathbf{\Delta}^{\top}\mathbf{\Phi }^{-1}\mathbf{\Delta}\right), \tag{9}\]
\(\mathbf{\Phi}=\mathbf{\Sigma}_{\theta_{i}}+\mathbf{\Sigma}_{\theta_{j}}\) and \(\mathbf{\Delta}=\boldsymbol{\mu}_{\theta_{i}}-\boldsymbol{\mu}_{\theta_{j}}\).
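For diagonal covariances, Eqs. (8)-(9) admit the vectorised sketch below, evaluated for a single \((\mathbf{x}_{t},\mathbf{u}_{t})\) query across the \(B\) ensemble heads; variable names are illustrative.

```python
import math
import torch

def jensen_renyi_divergence(mu: torch.Tensor, var: torch.Tensor) -> torch.Tensor:
    """Closed-form JRD (alpha = 2) of Eqs. (8)-(9).

    mu, var: (B, n) per-member means and diagonal covariances."""
    b = mu.shape[0]
    phi = var[:, None, :] + var[None, :, :]               # Sigma_i + Sigma_j
    delta = mu[:, None, :] - mu[None, :, :]               # mu_i - mu_j
    log_d = -0.5 * (torch.log(phi) + delta ** 2 / phi).sum(-1)  # log D(E_i, E_j)
    first = 2.0 * math.log(b) - torch.logsumexp(log_d.reshape(-1), dim=0)
    second = log_d.diagonal().mean()          # (1/B) sum_i log D(E_i, E_i)
    return first + second
```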
Using this metric, Shyam et al. [33] presented a novel active exploration strategy by measuring the ensemble model disagreement along the future predictions. The proposed method trained a model-free policy to harness this time-varying and intricate exploration bonus. However, due to the sample inefficiency and the inability to adjust the trained policy to achieve modified task-specific objectives, this method has difficulty being used to robotic applications. To make these limitations tractable, we here employ Model Predictive Path Integral (MPPI) control [8] as a policy.
MPPI is the state-of-the-art sampling-based MPC that enables real-time implementation with the aid of Graphic Processing Units (GPUs) due to its parallelizable structure [18]. Denoting by \(U=\{\mathbf{u}_{0},\ldots,\mathbf{u}_{T-1}\}\) with a fixed time horizon \(T\), the MPPI algorithm seeks an optimal control sequence \(U^{*}\) such that:
\[U^{*}=\operatorname*{argmin}_{U}\mathbb{E}\left[\phi(\mathbf{x}_{T})+\sum_{t= 0}^{T-1}\mathcal{L}\left(\mathbf{x}_{t},\mathbf{u}_{t}\right)\right], \tag{10}\]
where \(\phi\left(\cdot\right)\) is a state-dependent terminal cost. In general, the running cost with the quadratic control cost with control weight matrix \(\mathbf{R}\) takes the form:
\[\mathcal{L}\left(\mathbf{x}_{t},\mathbf{u}_{t}\right)=q\left(\mathbf{x}_{t} \right)+\frac{1}{2}\mathbf{u}_{t}^{\top}\mathbf{R}\mathbf{u}_{t}\,. \tag{11}\]
Note that the state-dependent running cost \(q\left(\cdot\right)\) can be an arbitrary function, which is permitted by the sampling-based optimization scheme's capability of solving non-convex problems. Consequently, we can include the information gain obtained from the model disagreement (8) in the controller's objective (11) in order to jointly achieve task-dependent cost optimization and active exploration:
\[\mathcal{L}_{\text{active}}\left(\mathbf{x}_{t},\mathbf{u}_{t}\right)=q\left( \mathbf{x}_{t}\right)+\frac{1}{2}\mathbf{u}_{t}^{\top}\mathbf{R}\mathbf{u}_{t }-w_{D}\,D(\mathbf{x}_{t},\mathbf{u}_{t})\,, \tag{12}\]
where \(w_{D}>0\) is a weighting constant. As a result, the robot is encouraged to choose behaviors that lead to exploring uncertain state-action spaces of the dynamic model, while still attempting to minimize the task-specific error.
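As a sketch, the augmented running cost (12) composes a task cost, a quadratic control penalty, and the weighted information gain; `q_task` and `info_gain` are placeholder callables (the latter would wrap the JRD disagreement of Eq. (8)).

```python
import torch

def active_running_cost(x, u, q_task, R, info_gain, w_D=100.0):
    # Eq. (12): q(x) + 0.5 u^T R u - w_D * D(x, u).
    control = 0.5 * torch.einsum('...i,ij,...j->...', u, R, u)
    return q_task(x) + control - w_D * info_gain(x, u)
```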
The above characteristics facilitate the robotic application of this method in three ways. First, the controller actively seeks states and actions with high epistemic uncertainty, i.e., for which there are little training data, in the vicinity of the set task objective; the trained model is thus fully tailored for immediate deployment. Second, it is possible to determine when the _exploration phase_ may be terminated, namely when the robot optimizes mostly for the task cost: as training data accumulate, the ensemble models gradually converge toward increasingly similar predictions, resulting in a progressive decrease in model disagreement. Finally, the robot is able to adapt to changes in task costs and to solve modified tasks without re-training.
### _Uncertainty-Aware Deployment_
After sufficient training time, the robot may be deployed to perform designated tasks. In the _deployment phase_, we assume that there are no human supervisors and no further dynamics updates. Recent research has shown that parallel ensemble neural networks are capable of learning dynamics online [42]; however, this is orthogonal to the goal of this paper, and our method can also benefit from such a consideration. For successful operation, the robot must avoid unpredictable situations, which can be accomplished by incorporating uncertainty into safety violations, regardless of the source of uncertainty. For instance, a data point may exhibit no disagreement across ensembles due to sufficient training data (low epistemic uncertainty), but we must still avoid this point if the variance of the predicted posterior distribution is high (high aleatoric uncertainty).
In this phase, we use a variant of the MPPI algorithm called Smooth MPPI (SMPPI) [43]. SMPPI shares the same information-theoretic roots as MPPI and also benefits from the structure of parallel trajectory evaluation. However, SMPPI lifts the control variables to derivative actions, so that the noisy sampling is performed in a higher-order domain \(\dot{U}=\{\dot{\mathbf{u}}_{0},\ldots,\dot{\mathbf{u}}_{T-1}\}\). Let us denote the resulting optimal control trajectory after the optimization process as \(\dot{U}_{i+1}^{*}\) at MPC iteration step \(i\). Then, the optimal action trajectory \(U_{i+1}^{*}\), which is used as the actual commands sent to the robot, is obtained by integrating with respect to the MPC iteration horizon: \(U_{i+1}^{*}=U_{i}^{*}+\dot{U}_{i+1}^{*}\Delta i\), where \(\Delta i\) represents the control update period of MPC and is commonly equal to \(\Delta t\). This new action trajectory update law allows the controller to rapidly respond to changing environments while using significantly lower sampling variance, thus alleviating chattering in the resulting commands. Furthermore, we can apply an extra action smoothing cost \(\Omega(U)\) along trajectory rollouts:
\[\Omega(U)=\sum_{t=1}^{T-1}(\mathbf{u}_{t}-\mathbf{u}_{t-1})^{\top}\mathbf{\omega} \left(\mathbf{u}_{t}-\mathbf{u}_{t-1}\right), \tag{13}\]
where \(\mathbf{\omega}\in\mathbb{R}^{m\times m}\) is a weighting parameter. Such an action cost was not admissible in the MPPI baseline with non-affine dynamics because it violates the information-theoretic derivation [43]. Owing to these smoothing effects, SMPPI is not suitable for rapid exploration in underactuated continuous control systems but is beneficial for deployment.
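The lifted action update and the smoothing cost (13) are equally compact; a minimal sketch, assuming action trajectories of shape \((T, m)\):

```python
import torch

def smoothing_cost(U, omega):
    # Omega(U) = sum_t (u_t - u_{t-1})^T omega (u_t - u_{t-1})   (Eq. 13).
    dU = U[1:] - U[:-1]                        # (T-1, m) action differences
    return torch.einsum('ti,ij,tj->', dU, omega, dU)

def smppi_action_update(U_prev, Udot_opt, delta_i):
    # U_{i+1}^* = U_i^* + \dot{U}_{i+1}^* * delta_i.
    return U_prev + Udot_opt * delta_i
```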
To achieve risk-averse planning, taking into account both aleatoric and epistemic uncertainty, we employ the results of _uncertainty sampling_ (5) as uncertainty measurements. A simple addition of the quantified uncertainty to the cost function would make the controller uncertainty-aware, as the following form:
\[\mathcal{L}_{\text{naive}}\left(\mathbf{x}_{t},\dot{\mathbf{u}}_{t},\mathbf{u }_{t}\right)=q\left(\mathbf{x}_{t}\right)+\frac{1}{2}\dot{\mathbf{u}}_{t}^{ \top}\mathbf{R}\dot{\mathbf{u}}_{t}\,+w_{1}\mathbb{U}(\mathbf{x}_{t},\mathbf{ u}_{t}), \tag{14}\]
where \(w_{1}>0\) is a weighting constant. It is important to point out that the uncertainty measured through sampling is not differentiable with respect to state and action. Sampling-based MPC is capable of optimizing arbitrarily crafted cost functions, making such a form of uncertainty tractable. We further exploit this trait by imposing an impulse-like penalty wherever the uncertainty exceeds a threshold \(\xi\):
\[\mathcal{L}_{\text{hybrid}}\left(\mathbf{x}_{t},\dot{\mathbf{u}} _{t},\mathbf{u}_{t}\right)=q\left(\mathbf{x}_{t}\right)+\frac{1}{2}\dot{ \mathbf{u}}_{t}^{\top}\mathbf{R}\dot{\mathbf{u}}_{t}\,+w_{1}\mathbb{U}( \mathbf{x}_{t},\mathbf{u}_{t})+\\ w_{2}\,I\Big{(}\Big{\{}\Big{|}\mathbb{U}(\mathbf{x}_{t},\mathbf{ u}_{t})\Big{|}>\xi\Big{\}}\Big{)}, \tag{15}\]
where \(w_{2}\gg w_{1}\) is a weighting constant and \(I\) is an indicator function. It is an intuitive consideration because we never want to allow the robot to take actions that are completely unpredictable.
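A sketch of the hybrid cost (15); `uncertainty` is assumed to be the value already quantified by uncertainty sampling (5), so no differentiability with respect to state or action is required.

```python
import torch

def hybrid_running_cost(q_x, udot, R, uncertainty,
                        w1=1000.0, w2=10000.0, xi=0.15):
    # Eq. (15): soft linear penalty on the sampled uncertainty plus an
    # impulse-like hard penalty wherever it exceeds the threshold xi.
    control = 0.5 * torch.einsum('...i,ij,...j->...', udot, R, udot)
    return q_x + control + w1 * uncertainty + w2 * (uncertainty > xi).float()
```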
Following is the final objective function of SMPPI:
\[\dot{U}^{*}=\operatorname*{argmin}_{\dot{U}}\mathbb{E}\Bigg{[}\phi(\mathbf{x}_{T})+\Omega(U+\dot{U}\Delta i)+\sum_{t=0}^{T-1}\mathcal{L}_{\text{hybrid}}\left(\mathbf{x}_{t},\dot{\mathbf{u}}_{t},\mathbf{u}_{t}\right)\Bigg{]}. \tag{16}\]
### _Implementing Neural Network Vehicle Dynamics_
Given that we have made no assumptions about the specific robotic platform in our framework, we argue that it is applicable to any kind of robotic system. In this work, we focus on autonomous vehicle and wheeled robot applications to validate our idea. We choose the robot's state and action inputs based on the dynamic bicycle model, since it provides a good trade-off between model accuracy and simplicity for real-time implementation [13]. The dynamic state variable \(\mathbf{x}=(v_{x},v_{y},r)^{\top}\) consists of the longitudinal velocity, lateral velocity, and yaw rate. These states can be easily measured using GPS and IMU sensors. The action input \(\mathbf{u}=(\delta,v_{des})^{\top}\) consists of the steering angle and the desired speed. The desired speed is sent to a low-level Proportional-Integral (PI) controller, which determines the throttle and brake.
Since these state and action variables are simplified representations of the full robot dynamics, there exists irreducible model ambiguity. For example, roll and pitch motions are not considered in the bicycle model. Automatic gear shifting involves non-smooth and time-varying dynamics, hence raising the lower bound of the model bias. We alleviate this problem by providing a history of state-action pairs to the neural network input so that it can extract contextual information [44, 45]. The history length \(H\) must be carefully chosen, because as \(H\) increases, the state-action space that has to be discovered grows exponentially, making the exploration problem more difficult. We select \(H=4\) in this work through an ablation study on the history length (see Section VI-A). Although LSTM [46] and GRU [47] show slightly better prediction performance than a Multi-Layer Perceptron (MLP) (see Section VI-A), we choose the MLP since its operations are fully parallelizable.
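As a hypothetical illustration of the history-augmented input described above (the helper name is ours), the dynamics-model input simply concatenates the last \(H\) state-action pairs:

```python
import torch

def history_input(xs, us, H=4):
    # xs, us: sequences of past state and action tensors, most recent last.
    pairs = [torch.cat([xs[-k], us[-k]]) for k in range(1, H + 1)]
    return torch.cat(pairs)   # input vector of size H * (dim_x + dim_u)
```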
We take the parallel ensemble MLP implementation by _philipball_[48] and adapt it for our task. Let us denote the weight and the bias of a layer of an ensemble model \(\mathbf{E}_{b}\) as \(\mathbf{W}_{b}\) and \(\mathbf{b}_{b}\), respectively. Then, the parameters of all ensemble models can be represented in the form of batches of matrices: \(\mathbf{W}=[\mathbf{W}_{1}^{\top},\ldots,\mathbf{W}_{B}^{\top}]^{\top}\), \(\mathbf{b}=[\mathbf{b}_{1},\ldots,\mathbf{b}_{B}]^{\top}\). First, we initialize the parameters of each ensemble model according to the size of the layer input \(n_{\text{in}}\):
\[\mathbf{W}_{b},\,\mathbf{b}_{b}\sim\mathcal{U}\left(-\sqrt{\frac{1}{n_{\text{ in}}}},+\sqrt{\frac{1}{n_{\text{in}}}}\right),\forall b\in\{1,\ldots,B\}, \tag{17}\]
where \(\mathcal{U}\) denotes the uniform distribution. While training the model with the collected data, the forward propagation of each ensemble model is computed through
\[\mathbf{o}_{b}=\mathbf{W}_{b}^{\top}\mathbf{z}+\mathbf{b}_{b},\quad\forall b \in\{1,\ldots,B\}, \tag{18}\]
where \(\mathbf{z}\) and \(\mathbf{o}\) are the input and the output of the layer, respectively. The gradients of the Gaussian NLL loss function (2) can be obtained through automatic differentiation packages such as PyTorch autograd [49]. This process is identical to the common way an MLP is used. When using the model as the robot dynamics, on the other hand, the forward propagation of the entire ensemble is defined as batched matrix-matrix multiplication and addition:
\[\left[\mathbf{o}_{1},\ldots,\mathbf{o}_{B}\right]^{\top}=\mathbf{W}^{\top} \odot\left[\mathbf{z},\ldots,\mathbf{z}\right]^{\top}\oplus\mathbf{b}, \tag{19}\]
where \(\odot\) and \(\oplus\) denote the batched matrix multiplication and broadcast addition, respectively. This parallelized forward propagation (19) can be implemented using the PyTorch baddbmm function. Therefore, with sufficient GPU resources, we can simultaneously predict the next state and obtain uncertainty using the PENN dynamics within a fixed time budget, regardless of the number of ensembles. This strategy maximizes the benefits of the parallel sample evaluation property of the MPPI framework.
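A minimal sketch of the parallelized initialization (17) and forward pass (19) using torch.baddbmm; the sizes below are illustrative rather than those of our actual network.

```python
import torch

B, n_in, n_out, K = 5, 32, 64, 4096      # ensembles, layer sizes, MPPI samples
bound = (1.0 / n_in) ** 0.5
W = torch.empty(B, n_in, n_out).uniform_(-bound, bound)   # Eq. (17)
b = torch.empty(B, 1, n_out).uniform_(-bound, bound)

z = torch.randn(B, K, n_in)              # the same K rollout inputs per model
o = torch.baddbmm(b, z, W)               # (B, K, n_out): bias + batched matmul
```

Since `baddbmm` fuses the bias addition with the batched matrix multiplication, the wall-clock cost of a forward pass is essentially independent of \(B\) on a GPU.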
## IV Experiments
Our experimental evaluations address the following key questions: **(Q1)** Can our method explore the state-action space with sufficient sample efficiency (see Section IV-A)? **(Q2)** Can the PENN trained with active exploration be used for long-horizon planning, even if the task objective has been modified after training (see Section IV-B)? **(Q3)** Can our method be turned directly into an uncertainty-aware controller with minimal modification (see Section IV-C)? **(Q4)** Is our method scalable to other robot platforms (see Section IV-D)?
We evaluate **Q1-Q3** in a high-fidelity vehicle dynamics simulator - IPG CarMaker. We build the modeling and simulation system by integrating the algorithms using ROS 2 [50]. A Land Rover Defender 110 is used as the control vehicle. The low-level PI controller runs at 100 Hz, and the control parameters \(K_{\text{P}},K_{\text{I}}\) are set to {0.35, 1.0} for throttle and {0.18, 0.3} for brake. To simulate sensor noise, we add 10% of i.i.d. Gaussian noise to each dynamic state observation in \(\mathbf{x}\). MPPI is used in the active exploration experiments **(Q1)**, and SMPPI is used in the deployment experiments **(Q2-Q3)**. Table I lists the parameters of two controllers, while the remaining parameters not specified in this paper are taken from the SMPPI implementation [43].
We use a four-layer MLP with hidden layer sizes {40, 80, 120, 40} throughout all experiments. Rectified Linear Units (ReLU) [51] are used as activation functions. Similar to [27], we use five ensembles (\(B=5\)) for real-time implementation.
### _Model Predictive Active Exploration_
**Experimental Setup.** In this experiment, we evaluate the effectiveness of the active exploration strategy **(Q1)**. We let the robot seek training data on a large flat ground with a friction coefficient of 0.7. The task objective is to maintain the 50 km/h target speed (\(v_{\text{target}}\)) while staying inside the space boundary. Accordingly, the task-dependent running cost \(q(\mathbf{x}_{t})\) is defined as follows:
\[\begin{split} q(\mathbf{x}_{t})&=\alpha_{1}\text{ Track}(\mathbf{x}_{t})+\alpha_{2}\text{Speed}(\mathbf{x}_{t})\,,\\ \text{Track}(\mathbf{x}_{t})&=(0.9)^{t}\,10000\, \textbf{M}(p_{x},p_{y})\,,\\ \text{Speed}(\mathbf{x}_{t})&=\left(\sqrt{{v_{x}}^{2}+{ v_{y}}^{2}}-v_{\text{target}}\right)^{2}.\end{split} \tag{20}\]
\(\alpha_{(\cdot)}\) are empirically tuned weighting constants. The track cost imposes a hard penalty to prevent collisions. The given map **M** indicates whether the robot position \((p_{x},p_{y})\) is outside the space boundary.
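For illustration, the cost terms of (20) might be written as follows; `boundary_map` is an assumed callable returning 1 outside the space boundary and 0 inside.

```python
import torch

def track_cost(t, px, py, boundary_map):
    # (0.9)^t * 10000 * M(p_x, p_y): time-discounted hard collision penalty.
    return (0.9 ** t) * 10000.0 * boundary_map(px, py)

def speed_cost(vx, vy, v_target):
    # Squared deviation of the planar speed from the target speed.
    return (torch.sqrt(vx ** 2 + vy ** 2) - v_target) ** 2
```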
We assume that human monitors supervise the _exploration phase_. They disengage the robot's autonomy before the robot reaches the space boundary or enters dangerous situations; we empirically set this period to 30 s. While the human supervisors set up the robot for a new trial, the PENN is trained offline with the collected data. We refer to this process as one _iteration_ of exploration.
**Methods Evaluated.** We evaluate the following methods including ours:
#### IV-A1 Ours
a model predictive active exploration strategy, which optimizes the objective function (12). The epistemic uncertainty measured by the JRD (8) serves as the information gain. We set the information gain weighting constant \(w_{D}\) to 100.
#### IV-A2 Uncertainty Sampling (US)
an uncertainty-sampling counterpart of our method, which uses the uncertainty quantified through sampling over each ensemble posterior distribution as the information gain. Its running cost is obtained by substituting uncertainty sampling (5) for the JRD (8) in (12). We use the same weighting \(w_{D}=100\) as our method, since the quantified information gain in both methods is of similar magnitude in our implementation.
#### IV-A3 Jensen-Renyi Divergence Reactive Exploration (JDRX)
a reactive counterpart of our method [33]. We implement it by setting the planning horizon of our method to \(T=1\), so that it explores greedily based on the experience collected so far, without planning ahead.
#### IV-A4 Random Noise (RN)
a random exploration strategy that injects noise into the action input. Its objective function is (11), with no information gain. We sample random noise from a Gaussian distribution with the same variances as the sampling distribution used by MPPI.
All the compared methods are evaluated with the same control parameters and the same model-training settings with a fixed random seed.
| Parameters | MPPI [18] | SMPPI [43] |
| --- | --- | --- |
| \(\Delta t\) | 0.1 s | 0.1 s |
| \(\Delta i\) | 0.1 s | 0.1 s |
| \(K\) | 5,000 | 5,000 |
| \(T\) | 10 | 35 |
| Sampling Variance | \(\text{Diag}(4.0,3.0)\) | \(\text{Diag}(1.6,0.4)\) |
| \(\mathbf{\omega}\) | N/A | \(\text{Diag}(0.4,0.1)\) |

TABLE I: Control parameters of MPPI and SMPPI. Note that the action smoothing cost with \(\mathbf{\omega}\) only applies to SMPPI.
While training the model offline, we randomly split the collected data \(\mathcal{D}\) into training and test sets with a 7:3 ratio. The PENN is trained using the Adam optimizer with a learning rate of 0.001. We save the model that achieves the best test-set performance over 10 epochs and use this model as the robot dynamics \(\mathbf{F}\) in the next iteration.
**Experimental Results.**
We first evaluate how much of the state-action space each method has covered. We compute the convex hull [52] of the collected state and action data \(\{\mathbf{x}_{t},\mathbf{u}_{t}\}_{t=0}^{N-1}\) at each iteration and analyze the enlargement of the convex-hull volumes [53] (see Fig. 2). The results indicate that JDRX cannot explore the state-action space efficiently. Also, the volumes of ours and RN are of comparable magnitude while consistently exceeding those of US.
We further analyze the collected data points in detail. Similar to Spielberg et al. [54], we focus on the state envelope of sideslip \(\beta\) and yaw rate \(r\), since precise motion predictions at large sideslip angles and rotational speeds are crucial to the success of high-speed driving on sharp curves. The results are shown in Fig. 3. JDRX is not able to collect driving data with high yaw rates, because it reacts immediately to the trivial information gain near the current state and cannot plan for higher information gain in the future. In contrast, ours always covers the largest state space among the compared methods, because it can plan over a longer horizon to gather drifting data while ignoring the aleatoric uncertainty that arises from plain driving. Our method has collected a substantial amount of data with \(\beta\) around 0.2 rad (approximately 11.5 \({}^{\circ}\)), and even data with \(\beta\) around 0.3 rad (approximately 17.2 \({}^{\circ}\)). Lastly, we stress that the data collected by RN are not helpful for achieving the driving tasks, despite RN showing a similar enlargement of the convex hull to our method. We discuss this further in Section IV-B.
### _Direct Deployment_
**Experimental Setup.** We take the resulting PENNs from active exploration as the robot dynamics and directly evaluate their performance in long-horizon planning (**Q2**). We load the best model at each iteration and monitor the control-performance improvements of the PENNs of the compared methods. The performance is evaluated on a race track with three moderate curves and five sharp curves (see Fig. 4a). We set the friction coefficient of the track to 0.7.
In this experiment, the objective of the robot is to drive through the race track at a speed as close to \(v_{\text{target}}\) as possible, while staying inside of the track. We modify the task-dependent cost \(q(\mathbf{x}_{t})\) as follows:
\[q(\mathbf{x}_{t})=\alpha_{1}\text{Track}(\mathbf{x}_{t})+\alpha_{2}\text{Speed }(\mathbf{x}_{t})+\alpha_{3}\text{Stable}(\mathbf{x}_{t}). \tag{21}\]
The track cost and the speed cost are nearly identical to those in the exploration phase. The given map \(\mathbf{M}\) is modified according to the new race track. The target speed \(v_{\text{target}}\) is adjusted to 45 km/h. We add a stabilizing cost that imposes a hard penalty on large sideslip angles \(\beta\) to prevent the robot from losing stability:
\[\begin{split}\text{Stable}(\mathbf{x}_{t})&=10000\, I\left(\{|\beta|>0.3\}\right)\,,\\ \beta&=-\text{arctan}\left(\frac{v_{y}}{\|v_{x}\|} \right)\,.\end{split} \tag{22}\]
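A sketch of the stabilizing cost (22); we use `atan2` with \(|v_{x}|\), which matches the stated definition of \(\beta\) for nonzero \(v_{x}\) and is numerically safer.

```python
import torch

def stable_cost(vx, vy, beta_max=0.3):
    # Hard penalty on sideslip angles whose magnitude exceeds beta_max (Eq. 22).
    beta = -torch.atan2(vy, torch.abs(vx))
    return 10000.0 * (beta.abs() > beta_max).float()
```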
We set the planning horizon to \(T=35\), which is 3.5 s, to compare the performances in long horizon planning. Note that no information gain is provided to the objective function, so the controllers are uncertainty-neutral.
**Experimental Results.** The results are shown in Fig. 4. At the sharpest corners (\(\#3\), \(\#7\), and \(\#8\)), in order to drive through without losing significant speed, the vehicle should gently slow down before entering the corners and drift continuously. If the vehicle cannot perform drifting maneuvers, it will nearly stop in the middle of the corners or even collide with the track boundary. Moreover, if the predictions of the dynamics are inaccurate, the vehicle may enter a saturated drifting regime and become uncontrollable.
First, we count the number of times each method completes the whole track without driving outside of the given boundary (see Fig. 4b). JDRX is not able to complete a whole lap: reactive exploration limits the robot to acquiring only insignificant information gain in the vicinity of the current state. As a result, it fails to collect high-speed driving data for training and to produce accurate predictions at high speeds. RN shows improvement as it collects more data through repeated iterations; however, it relies only on coincidental actions that happen to collect novel data. Our method achieves the best results among the compared methods, and its performance steadily improves as more data are collected.
Interestingly, the performance of US improves in the early stages, but after 100 iterations it continuously decreases. This implies that collecting more data does not necessarily improve prediction performance. Since US cannot separate the sources of uncertainty, it may repeatedly collect the same data exhibiting naive motions. The model is therefore trained to make increasingly better predictions on these superfluous data while neglecting the data representing important driving characteristics.
We also analyze the control performance in terms of speed (see Fig. 4c) and stability (see Fig. 4d) during each trial. Our method regulates the speed cost effectively compared to the other methods. The results also show that RN violates the stabilizing constraints far more frequently than our method, demonstrating that the data collected by RN do not effectively contribute to achieving the given task, since they omit crucial training data such as high-speed drifting
Fig. 2: The volumes of the convex hulls along exploration iterations.
maneuvers. Only our method successfully drives through the race track at high speeds without losing stability.
### _Uncertainty-Aware Deployment_
**Experimental Setup.** We assumed that the _exploration phase_ was performed in a large open space under human supervision. In the _deployment phase_, however, there is no such monitoring to guarantee safety. In this experiment, we validate that our active exploration policy can be transferred to an uncertainty-aware controller with minimal modification.
To highlight the influence of the uncertainty-aware control strategy, we augment the dynamics to be terrain-aware by feeding the elevation map to the neural network. We assume that a 2.5D elevation map with a 0.1 m grid size is provided by one of the existing methods that can generate these maps from raw sensor input in real time [55, 56]. The map encoder network consists of an 8-channel convolutional layer and a 4-channel convolutional layer, both with a kernel size of 3 and a stride of 2 [57]. The output of the encoder is concatenated with the history of state-action pairs and fed into the original MLP model that had been used throughout the previous experiments. We use the data collected by our method, which shows the best performance in direct deployment experiments, to train the augmented dynamics. During training, the elevation map is randomly generated from the Gaussian distribution of \(\mathcal{N}(0,0.2)\).
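A sketch of the described encoder under stated assumptions (single-channel elevation patch, no padding; the patch size is not specified in the text):

```python
import torch
import torch.nn as nn

class MapEncoder(nn.Module):
    # Two conv layers (8 and 4 channels, kernel 3, stride 2); the flattened
    # features are concatenated with the state-action history.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(8, 4, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )

    def forward(self, elevation_patch, history):
        return torch.cat([self.net(elevation_patch), history], dim=-1)
```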
Following our prior work [58], we design a realistic off-road environment with various components including large bumps and randomly patterned rough terrains (see Fig. 5). We also add non-traversable elements such as trees, bushes, and wooden pillars, and they are included in the given map **M** for collision avoidance.
**Methods Evaluated.** We use the same task-dependent cost function (21) as in the direct deployment experiments. The target speed \(v_{\text{target}}\) is adjusted to 30 km/h. The key difference between the two deployment tasks is that here we include uncertainty in the control objective. We evaluate the following uncertainty-averse strategies:
#### IV-C1 Uncertainty-Neutral (UN)
an uncertainty-neutral strategy that does not take uncertainty into account while planning. It uses the same objective function as in the direct deployment experiments.
#### IV-C2 Naive Penalty (NP)
a simple uncertainty-aware strategy that adds a linear penalty on the quantified uncertainty. It uses (14) as the objective function with \(w_{1}=1000\).
#### IV-C3 Hybrid
an uncertainty-aware strategy with both soft and hard costs. It uses (15) as the objective function with \(w_{1}=1000\), \(w_{2}=10000\), and \(\xi=0.15\). Note that the hard cost penalizes states whose estimated uncertainty exceeds \(\xi\) with the same magnitude as a collision with the track boundary.
#### IV-C4 Conservative
a conservative strategy that controls the vehicle as stably as possible. It uses (15) as the objective function with \(w_{1}=0\), \(w_{2}=10000\), and \(\xi=0.02\).
**Experimental Results.** We visualize the trajectories taken by the compared methods in Fig. 5. To illustrate the vehicle's stability, we overlay on the trajectories the rotational impacts exerted on the vehicle during driving, using the squared sum of roll rate and pitch rate as a simple proxy for angular momentum. We normalize this quantity by its maximum value across all trials; hence, a trajectory with a darker color indicates a more unstable vehicle. The angular and vertical motions of the vehicle during the experiments are given in Table II for quantitative evaluation.
Although UN can control the vehicle at the reference speed while avoiding collisions with obstacles and staying on track, it fails to circumvent uncertain regions because it is blind to uncertainty. It drives over the bumps in Scenario #1 and Scenario #5, eventually overturning. NP opts for terrain with less uncertainty and, as a result, chooses moderate terrain rather than the high bumps (see Scenario #1 and Scenario #5). However, it fails to circumvent the largest bump in Scenario #3. On the other hand, Hybrid decides to slow down before the bump and takes a detour to satisfy the safety criteria, despite the speed penalty. Conservative is a reasonable option in practical applications because it minimizes damage to the robot. It takes the most secure trajectory, as can be seen in Scenario #4 and Scenario #5, due to its low uncertainty threshold. The trajectory taken by
Fig. 3: The scatter plots of collected data during active exploration. The data are from 10, 30, 100, and 300 iterations, respectively. During all iterations, ours using JRD information gain covers the largest state spaces of sideslip angle and yaw rate compared to other methods.
Conservative also shows that the vehicle rarely experienced terrain impacts, except at the inevitable bump in Scenario #2. Hybrid shows a good balance between safety and agility: it avoids regions with high degrees of uncertainty, which might irreversibly damage or destroy the vehicle, while navigating terrain with low levels of uncertainty at the desired speed. Table II demonstrates that the methods with a hard uncertainty penalty exhibit significantly fewer undesirable vertical and angular motions than the other methods.
### _Experiments on Wheeled Robots_
In this section, we conduct a sanity check to ensure that our framework scales to other types of robot platforms. We use a 1:5-scale wheeled robotics testbed as the new target system. We examine our framework in two robotics simulators: Gazebo [59] and Nvidia Isaac Sim [60]. In each simulator, we conduct the active exploration experiments **(Q1)** and the direct deployment experiments **(Q2)** with the best-performing exploration strategy. Note that we do not modify any settings in our framework except for adjusting the target speed according to the robot's specifications (\(v_{\text{target}}\) = 15 km/h) in both the exploration and deployment phases. Other experimental settings are identical to those described in previous sections, including the parameters of neural networks, controllers, and objective functions. Fig. 6 shows the simulated environments and the resulting deployment trajectories taken by
| # | **Method** | Vert. Vel. Mean [m/s] | Vert. Vel. Max [m/s] | Vert. Acc. Mean [m/s\({}^{2}\)] | Vert. Acc. Max [m/s\({}^{2}\)] | Roll Rate Mean [rad/s] | Roll Rate Max [rad/s] | Pitch Rate Mean [rad/s] | Pitch Rate Max [rad/s] | Roll Acc. Mean [rad/s\({}^{2}\)] | Roll Acc. Max [rad/s\({}^{2}\)] | Pitch Acc. Mean [rad/s\({}^{2}\)] | Pitch Acc. Max [rad/s\({}^{2}\)] |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Uncertainty-Neutral (UN) | 2.07 | 28.23 | 0.13 | 1.09 | 0.28 | 2.46 | 0.43 | **1.10** | 2.22 | 30.67 | 1.64 | 12.78 |
| 2 | Naive Penalty (NP) | 1.97 | 23.30 | 0.05 | 0.38 | 0.33 | 2.06 | 0.44 | 2.22 | 2.73 | 33.97 | 1.80 | 8.02 |
| 3 | Hybrid | 1.42 | **9.37** | **0.04** | **0.20** | 0.21 | **1.56** | **0.39** | 1.15 | 1.85 | **18.37** | **1.28** | **5.91** |
| 4 | Conservative | **1.40** | 10.60 | **0.04** | 0.22 | **0.14** | 1.60 | 0.42 | 1.11 | **1.16** | 22.23 | 1.46 | 6.93 |

TABLE II: The average and maximum motions of the vehicle across all trials. The vertical motions include vertical velocity and vertical acceleration; the angular motions include the velocities and accelerations of roll and pitch, respectively. Hybrid and Conservative experience lower terrain impacts than UN and NP by a large margin.
Fig. 4: (a) The race track designed in the IPG CarMaker simulator for direct deployment experiments. We visualize the vehicle trajectory taken by our method at 300 iterations for driving 10 laps in a counter-clockwise direction. (b) The number of times that each method completes the whole lap during every 50 iterations. (c) The average speed cost for each saved model along exploration iterations. The shaded areas denote a 95% confidence interval. (d) The number of trials of each method that violates the stabilizing constraints at least once, i.e., having larger sideslip angles than 0.3 rad, during every 50 iterations.
the robots. The results suggest that our framework has the potential to be applied to other robot platforms with continuous state and action space, actively exploring training data, and safely controlling the robot.
## V Conclusion
In this paper, we presented a unified model-based reinforcement learning framework with dynamics learning that bridges active exploration and uncertainty-aware deployment in the robotic control domain. The dynamics is learned using a fully parallelized probabilistic ensemble neural network that is sensitive to uncertainty. For active exploration, the epistemic uncertainty is quantified by measuring the ensemble disagreement via the Jensen-Renyi Divergence. The two opposing task objectives for exploration and deployment are optimized by state-of-the-art sampling-based MPC. Our extensive experiments demonstrate that the unified framework can be applied to both autonomous vehicles and wheeled robots. Our framework shows promising results for both exploration and deployment: the robot efficiently collects useful training samples that are essential to achieving the task and successfully avoids uncertain regions by imposing an uncertainty penalty. We hope that this work will serve as a stepping stone towards more general learning-based robotic applications under uncertainty.
## VI Appendix
### _Additional Details for Learned Dynamics_
To perform an ablation study on the neural network dynamics, we collect a human-controlled driving dataset. We build a race track in the IPG CarMaker simulator, modeled after a kart circuit located in Kirchlengern, Germany [43]. The length of the track is 1016 m, and it has two moderate curves and four sharp curves. The overall structure of the track is similar to that in Fig. 6c. The driving data collected on the track consist of 1) zig-zag driving at low speeds, 2) high-speed driving, and 3) sliding maneuvers, in both clockwise and counter-clockwise directions. We collected a total of 70 minutes of data at a rate of 10 Hz. We split the data into 70% for training and 30% for testing after shuffling to remove temporal correlations.
We evaluate the test performance of the compared models with respect to each state variable in \(\mathbf{x}=(v_{x},v_{y},r)^{\top}\), which are the longitudinal velocity \(v_{x}\), the lateral velocity \(v_{y}\), and the yaw rate \(r\). We use the Root Mean Square Error (RMSE) as the performance metric. We also display the average test RMSE for the convenience of comparison.
**Model Comparison.** First, we compare the pure MLP model and Recurrent Neural Network (RNN) models. In this experiment, we use a deterministic model without ensembling and the L2 loss function. The other parameters of the MLP model follow those mentioned in Section IV. In the RNN models, we replace the first layer of the MLP model with a GRU [47] and an LSTM [46], respectively, with the same hidden layer size as the MLP model. The best test performance of each model during 1000 epochs is shown in Table III. The RNN models show slightly better performance than the MLP model; however, we choose the MLP to benefit from the parallelizable structure of the ensemble MLP model.
**History Comparison.** In this experiment, we use the PENN model as described in Section III-D. We evaluate the
Fig. 5: Visualization of uncertainty-aware navigation results on the vehicle simulator. We display the rotational impacts exerted on the vehicle onto the trajectories. The maximum value of rotational impact during a three-second window is used for visual clarity.
performance of the PENNs with different lengths of the state-action history \(H\). The results are shown in Table IV. History lengths of 3, 4, and 5 produce the best results among the compared models; when the history length exceeds 5, the prediction performance generally decreases. In accordance with prior literature employing the same strategy [44, 45, 54, 61], we set the history length to 4.
### _Sim-to-Real Transfer_
We successfully transferred our algorithm to real-world settings for uncertainty-aware deployment tasks. We integrated our algorithm with global path planning and online traversability map generation using a LiDAR sensor. These experiments were conducted on our off-road testbeds. The experimental results can be found on our project page: [https://taekyung.me/rss2023-bridging](https://taekyung.me/rss2023-bridging).
## VII Acknowledgment
This work was supported by the Korean Government (2023).
|
2305.03365 | Repairing Deep Neural Networks Based on Behavior Imitation | The increasing use of deep neural networks (DNNs) in safety-critical systems
has raised concerns about their potential for exhibiting ill-behaviors. While
DNN verification and testing provide post hoc conclusions regarding unexpected
behaviors, they do not prevent the erroneous behaviors from occurring. To
address this issue, DNN repair/patch aims to eliminate unexpected predictions
generated by defective DNNs. Two typical DNN repair paradigms are retraining
and fine-tuning. However, existing methods focus on the high-level abstract
interpretation or inference of state spaces, ignoring the underlying neurons'
outputs. This renders patch processes computationally prohibitive and limited
to piecewise linear (PWL) activation functions to great extent. To address
these shortcomings, we propose a behavior-imitation based repair framework,
BIRDNN, which integrates the two repair paradigms for the first time. BIRDNN
corrects incorrect predictions of negative samples by imitating the closest
expected behaviors of positive samples during the retraining repair procedure.
For the fine-tuning repair process, BIRDNN analyzes the behavior differences of
neurons on positive and negative samples to identify the most responsible
neurons for the erroneous behaviors. To tackle more challenging domain-wise
repair problems (DRPs), we synthesize BIRDNN with a domain behavior
characterization technique to repair buggy DNNs in a probably approximated
correct style. We also implement a prototype tool based on BIRDNN and evaluate
it on ACAS Xu DNNs. Our experimental results show that BIRDNN can successfully
repair buggy DNNs with significantly higher efficiency than state-of-the-art
repair tools. Additionally, BIRDNN is highly compatible with different
activation functions. | Zhen Liang, Taoran Wu, Changyuan Zhao, Wanwei Liu, Bai Xue, Wenjing Yang, Ji Wang | 2023-05-05T08:33:28Z | http://arxiv.org/abs/2305.03365v1 | # Repairing Deep Neural Networks Based on Behavior Imitation
###### Abstract
The increasing use of deep neural networks (DNNs) in safety-critical systems has raised concerns about their potential for exhibiting ill-behaviors. While DNN verification and testing provide post hoc conclusions regarding unexpected behaviors, they do not prevent the erroneous behaviors from occurring. To address this issue, DNN repair/patch aims to eliminate unexpected predictions generated by defective DNNs. Two typical DNN repair paradigms are retraining and fine-tuning. However, existing methods focus on the high-level abstract interpretation or inference of state spaces, ignoring the underlying neurons' outputs. This renders patch processes computationally prohibitive and limited to piecewise linear (PWL) activation functions to a great extent. To address these shortcomings, we propose a behavior-imitation based repair framework, BIRDNN, which integrates the two repair paradigms for the first time. BIRDNN corrects incorrect predictions of negative samples by imitating the closest expected behaviors of positive samples during the retraining repair procedure. For the fine-tuning repair process, BIRDNN analyzes the behavior differences of neurons on positive and negative samples to identify the neurons most responsible for the erroneous behaviors. To tackle more challenging domain-wise repair problems (DRPs), we synthesize BIRDNN with a domain behavior characterization technique to repair buggy DNNs in a probably approximately correct style. We also implement a prototype tool based on BIRDNN and evaluate it on ACAS Xu DNNs. Our experimental results show that BIRDNN can successfully repair buggy DNNs with significantly higher efficiency than state-of-the-art repair tools. Additionally, BIRDNN is highly compatible with different activation functions.
DNN repair/patch, behavior imitation, retraining, fine-tuning, fault localization
## I Introduction
Deep neural networks (DNNs) have become a leading candidate computation model for deep learning in recent years and have achieved enormous success in various domains, including computer vision [1] and natural language processing [2, 3]. Despite their popularity and success, DNNs are not infallible and can produce erroneous outputs, such as incorrect predictions, classifications, or identifications. These unexpected errors can lead to system failures [4, 5, 6], wrongful arrests [7, 8], and even loss of life [9]. As a result, the unexpected behaviors of DNNs make their application in safety-critical domains, such as autonomous vehicles, a significant and pressing concern.
To earn users' trust and alleviate their concerns, it is critical to ensure that DNNs meet specific requirements and produce expected outputs before deployment. Recent advancements in _DNN verification_, _DNN testing_, and _DNN property-guided training_ have highlighted the importance of this issue. Diverse DNN verification techniques [10, 11] have been proposed to verify whether a given DNN adheres to specific properties such as robustness [12] or safety [13]. These techniques range from abstract interpretation [14, 15, 16, 17], SMT/SAT theory [18, 19] to linear programming [20, 21, 22]. Meanwhile, DNN testing [23, 24] evaluates the behaviors of DNNs through numerous test cases and has made significant progress in coverage criteria [25] and case generation [26, 27] recently. However, both verification and testing are post hoc analyses, and their qualitative or quantitative results do not provide comfort when properties are violated. Conversely, property-guided training methods [28, 29] focus on training DNNs that are correct-by-construction during the training phase. Moreover, much attention is being paid to training methods for developing robust DNNs.
Although pre-trained DNNs are highly effective in many applications, they may still exhibit erroneous behaviors. This brings us to the main topic of our paper: DNN repair, also sometimes called DNN patching. When a DNN is found to behave poorly (as detected by DNN verification or testing), DNN repair or patching aims to eliminate these unexpected behaviors while minimizing the impact on performance. There are currently two main repair paradigms.
On the one hand, DNN retraining is a powerful tool for correcting erroneous outputs of DNNs and is similar to property-guided training. Retraining-based methods [30, 31] remove mistakes by evolving existing buggy DNNs, while property-guided training avoids specification violations from the beginning. The key to successful DNN retraining is balancing the need to maintain original performance while eliminating unexpected behaviors, achieved by using appropriate loss functions. However, retraining can be computationally expensive,
and original training datasets are not always publicly available. Additionally, undifferentiated parameter updates during retraining can introduce new bad behaviors.
On the other hand, unlike the global and aimless parameter modification in DNN retraining, _DNN fine-tuning_[32] is devoted to repairing buggy DNNs locally, intentionally, and slightly, i.e., patching a specific subset of the DNN parameters to eliminate erroneous predictions. Initially, fine-tuning based methods [33, 34] typically convert the adjustment of the parameters of a single (arbitrary or well-selected) DNN layer into a constraint-solving problem over the parameters and then minimize the parameter modifications. Compared to adjusting the parameters of a single layer, some recent work takes fault localization into account [35, 36], determining a parameter subset with a more significant impact on the erroneous behaviors, so as to make fine-tuning more targeted. Fine-tuning methods generally outperform retraining ones due to their slight and local modifications.
For both the retraining based and the fine-tuning based DNN repair methods, current attention is primarily directed towards high-level abstract interpretation or inference over the state spaces of DNN layers, while neglecting the behaviors of the underlying neurons. This makes existing repair methods computationally prohibitive and restricts the range of repairable DNNs. For instance, repairing DNNs through linear mapping regions necessitates the expensive computation of sets of polytope abstract domains, and the methods are of limited effectiveness for non-piecewise-linear activation functions.
To overcome these dilemmas, this paper proposes a behavior-imitation based DNN repair framework, BIRDNN (**B**ehavior-**I**mitation Based **R**epair of **DNNs**). In essence, we investigate the (hidden) states of neurons within DNNs, analyzing the neurons' behavior differences between the expected behaviors on positive samples and the unexpected behaviors on negative samples. Resorting to behavior imitation, we present alternative retraining based and fine-tuning based repair methods to patch defective DNNs; BIRDNN is the first repair framework unifying DNN retraining and DNN fine-tuning. For the retraining based method, we assign new correct labels to negative sample inputs by imitating the outputs of nearby positive samples and then retrain the original DNNs to correct the erroneous behaviors. For the fine-tuning based method, fault localization identifies the neurons most responsible for unexpected predictions by analyzing behavior differences on positive and negative samples. Then we utilize particle swarm optimization (PSO) to modify those "guilty" neurons so as to maintain the original DNN performance while minimizing property violations. This paper aims to tackle the more difficult domain-wise repair problems (DRPs); to this end, we also integrate BIRDNN with a sampling based technique to characterize DNN behaviors over domains, repairing buggy DNNs in a probably approximately correct style.
**Contributions.** Main contributions of this paper are listed as follows.
* Based on an investigation of neuron behaviors and the insight of behavior imitation, we propose a novel DNN repair framework, BIRDNN, which, to the best of our knowledge, is the first to support alternative retraining-style and fine-tuning-style repair simultaneously.
* Within BIRDNN, DNNs imitate the expected behaviors of the closest positive samples when encountering negative inputs during retraining procedures. For the fine-tuning method, BIRDNN adopts a straightforward and efficient fault localization strategy based on behavior-difference analysis to identify the most responsible neurons.
* To tackle the more challenging DRPs, we integrate BIRDNN with a characterization technique for DNN domain behaviors, reducing DRPs to sample-based repair problems and repairing buggy DNNs in a probably approximately correct style.
* We have implemented a prototype tool based on BIRDNN, which is available from [https://github.com/ByteTao5/BIRDNN](https://github.com/ByteTao5/BIRDNN). The experimental results on ACAS Xu DNNs illustrate the effectiveness, efficiency and compatibility of our proposed DNN patch framework.
The remainder of this paper is organized as follows. Section II introduces the necessary background. Section III then formulates the domain-wise repair problems we address. Following this, Section IV presents our behavior-imitation based DNN repair framework, BIRDNN, in detail. Next, Section V implements a prototype tool and reports its effectiveness, efficiency, and compatibility on the widely used ACAS Xu DNNs versus state-of-the-art methods. Section VI supplements related work and comparisons. Finally, Section VII summarizes this paper and outlines possible future directions.
## II Preliminaries
When training or testing DNNs, complex data flows arise, and property specifications impose specific requirements on these data flows. In the following, we present the necessary preliminaries related to DNNs and property specifications.
### _Deep Neural Networks_
An \(L\)-layer DNN usually contains an input layer, an output layer, and \(L-2\) successive hidden layers. A DNN with parameter set \(\theta\) is denoted by \(\mathcal{N}_{\theta}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\), containing \(m\) and \(n\) neurons in its input and output layers, which are termed the input/output dimensions, respectively. Accordingly, the layer dimension \(d_{i}\) refers to the number of neurons in the \(i\)-th DNN layer. Beginning with an input (or, sample) \(\mathbf{x}\in\mathbb{R}^{m}\) on the input layer, the _forward propagation_[37] proceeds as follows: between two adjacent DNN layers, a non-linear activation function follows an affine transformation, whereas only an affine transformation exists between the last two layers. Common activation functions are ReLU, tanh and Sigmoid, where the first is piecewise linear (PWL) and the others are strictly non-linear. Generally, PWL functions are easier to handle due to their nice algebraic properties. The whole forward propagation process is the composition of the operations between each pair of adjacent layers, and it
generates the output (or, prediction) \(\mathbf{y}:=\mathcal{N}_{\theta}(\mathbf{x})\in\mathbb{R}^{n}\) with respect to the sample \(\mathbf{x}\).
A data flow arises in DNN \(\mathcal{N}_{\theta}\) when processing the input \(\mathbf{x}\). We call this data flow the _behavior_ of the DNN \(\mathcal{N}_{\theta}\) on the sample \(\mathbf{x}\); it consists of the input vector, the hidden states of each hidden layer, and the output vector. The DNN behavior on sample \(\mathbf{x}\) can be represented as \(\{\mathcal{N}_{\theta}^{0}(\mathbf{x})\) (i.e., \(\mathbf{x}\)), \(\mathcal{N}_{\theta}^{1}(\mathbf{x})\), \(\cdots\), \(\mathcal{N}_{\theta}^{L-1}(\mathbf{x})\) (i.e., \(\mathbf{y}\) or \(\mathcal{N}_{\theta}(\mathbf{x})\))\(\}\).
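As a hedged illustration for a purely sequential PyTorch model (the helper name is ours), the behavior can be recorded by accumulating per-layer outputs during forward propagation:

```python
import torch.nn as nn

def record_behavior(model: nn.Sequential, x):
    # Collects {N^0(x), N^1(x), ..., N^{L-1}(x)}: input, hidden states, output.
    behavior = [x]
    h = x
    for layer in model:
        h = layer(h)
        behavior.append(h)
    return behavior
```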
### _Property Specifications_
Before the deployment in practical applications, DNNs are required to satisfy certain properties, such as robustness [12, 38], safety [13], reachability [39, 30], fairness [40] and so on, especially in safety-critical domains. Consequently, property specifications emerge as the times demand and they are widely utilized in DNN property-guided training [41] and DNN verification [42] related fields.
A property specification \(\mathcal{P}\) w.r.t. a DNN \(\mathcal{N}_{\theta}\) generally consists of two components, a _pre-condition_\(\varphi\) and a _post-condition_\(\psi\), i.e., \(\mathcal{P}:=\{\varphi;\psi\}\). The pre-condition \(\varphi\) is a constraint on the input domain \(\mathbb{R}^{m}\), describing a _predetermined input set_ of \(\mathcal{N}_{\theta}\). Likewise, the post-condition \(\psi\) restricts the output domain \(\mathbb{R}^{n}\), representing a _desired output region_. Then, the specification \(\mathcal{P}\) imposes a behavior requirement on \(\mathcal{N}_{\theta}\): it should map the predetermined input set into the desired output region (i.e., expected behaviors); otherwise the DNN is buggy. Further, the relation \(\varphi\stackrel{{\mathcal{N}_{\theta}}}{{\rightarrow}}\psi\) denotes that DNN \(\mathcal{N}_{\theta}\) satisfies the property specification, while \(\varphi\stackrel{{\mathcal{N}_{\theta}}}{{\nrightarrow}}\psi\) means the property specification does not hold on the DNN.
For a specification set \(\{\mathcal{P}_{i}\}_{i=1}^{p}\) that the DNN \(\mathcal{N}_{\theta}\) should obey, likewise, \(\{\varphi_{i}\}_{i=1}^{p}\stackrel{{\mathcal{N}_{\theta}}}{{\rightarrow}}\{\psi_{i}\}_{i=1}^{p}\) and \(\{\varphi_{i}\}_{i=1}^{p}\stackrel{{\mathcal{N}_{\theta}}}{{\nrightarrow}}\{\psi_{i}\}_{i=1}^{p}\) respectively denote the satisfiability and unsatisfiability of the specification set on DNN \(\mathcal{N}_{\theta}\); these two relations are defined as
\[\{\varphi_{i}\}_{i=1}^{p}\stackrel{{\mathcal{N}_{\theta}}}{{ \rightarrow}}\{\psi_{i}\}_{i=1}^{p}\Leftrightarrow\varphi_{i}\stackrel{{ \mathcal{N}_{\theta}}}{{\rightarrow}}\psi_{i},\forall i\in\{1,2,\cdots,p\},\]
and
\[\{\varphi_{i}\}_{i=1}^{p}\stackrel{{\mathcal{N}_{\theta}}}{{\nrightarrow}}\{\psi_{i}\}_{i=1}^{p}\Leftrightarrow\varphi_{i}\stackrel{{\mathcal{N}_{\theta}}}{{\nrightarrow}}\psi_{i},\exists i\in\{1,2,\cdots,p\}.\]
That is, a specification set holds on a DNN if and only if each specification in the set holds on the DNN, and it does not hold if at least one specification in the set is violated. Behaviors meeting the relation \(\stackrel{{\mathcal{N}_{\theta}}}{{\rightarrow}}\) (resp. \(\stackrel{{\mathcal{N}_{\theta}}}{{\nrightarrow}}\)) are _expected_ (resp. _unexpected_) _behaviors_. Further, DNNs featuring unexpected behaviors are called _buggy DNNs_.
## III Problem Formulation
Based on aforementioned preliminaries, the _DNN repair problem_ comes readily and it is defined as follows.
**Problem 1**: _(DNN Repair Problem). Given a buggy DNN \(\mathcal{N}_{\theta}\) with respect to a violated specification set \(\{\mathcal{P}_{i}\}_{i=1}^{p}\), i.e., \(\{\varphi_{i}\}_{i=1}^{p}\stackrel{{\mathcal{N}_{\theta}}}{{\nrightarrow}}\{\psi_{i}\}_{i=1}^{p}\), the DNN repair problem refers to modifying the parameter set \(\theta\) to \(\theta^{\prime}\) such that the resulting DNN \(\mathcal{N}_{\theta^{\prime}}\) satisfies all the specifications, i.e., \(\{\varphi_{i}\}_{i=1}^{p}\stackrel{{\mathcal{N}_{\theta^{\prime}}}}{{\rightarrow}}\{\psi_{i}\}_{i=1}^{p}\)._
Additionally, various DNN repair problems arise depending on the set types described by the pre-conditions and post-conditions, such as point sets, domain sets, or others. Among these, domain sets prove more challenging due to the infinity and non-ergodicity of samples within domains, and only a few existing repair tools support patching DNNs over domains [34]. Consequently, this paper focuses on repairing DNNs with domain-based specifications and refines Problem 1 into the following DNN repair problem.
**Problem 2**: _(Domain-wise Repair Problem, DRP). Given a buggy DNN \(\mathcal{N}_{\theta}\) with \(\{\varphi_{i}\}_{i=1}^{p}\stackrel{{\mathcal{N}_{\theta}}}{{\nrightarrow}}\{\psi_{i}\}_{i=1}^{p}\), where for each specification \(\mathcal{P}_{i}\) the pre-condition \(\varphi_{i}\) and post-condition \(\psi_{i}\) describe domain sets, the domain-wise repair problem (DRP) refers to modifying the parameter set \(\theta\) to \(\theta^{\prime}\), forming a new neural network \(\mathcal{N}_{\theta^{\prime}}\) that satisfies all the domain-based specifications, i.e., \(\{\varphi_{i}\}_{i=1}^{p}\stackrel{{\mathcal{N}_{\theta^{\prime}}}}{{\rightarrow}}\{\psi_{i}\}_{i=1}^{p}\)._
For DRPs, input and output domains can be specified as interval boxes, polytopes, and so on. When the behaviors of a DNN violate the property specifications, the regions in the input set that cause the violation are considered _negative input domains_ (_negative domains_, for short), denoted by \(S_{nd}\). Therefore, domain-wise repair problems, i.e., patching a buggy DNN \(\mathcal{N}_{\theta}\) with respect to a given specification set \(\{\mathcal{P}_{i}\}_{i=1}^{p}\), are fundamentally equivalent to correcting the unexpected behaviors of \(\mathcal{N}_{\theta}\) over \(S_{nd}\).
In fact, a subtle modification to the DNN parameters may have a non-negligible impact on the original performance and may even introduce new unexpected behaviors. Consequently, in addition to correcting the defective DNNs on a given specification set, it is equally important for DNN repair approaches to preserve the parameter set or the original performance of the buggy DNNs as much as possible. This requirement is referred to as the _minimal patch requirement_.
**Definition 1**: _(Minimal Patch Requirement, MPR). For a DNN repair problem, the generated patch between the original DNN \(\mathcal{N}_{\theta}\) and the modified DNN \(\mathcal{N}_{\theta^{\prime}}\) should be minimal, so as to minimize the impact on the original performance._
To this end, some approaches minimize the deviation between \(\theta\) and \(\theta^{\prime}\), i.e., the \(\ell\)-distance \(||\theta^{\prime}-\theta||_{\ell}\)[30, 34], while others minimize the performance difference between \(\mathcal{N}_{\theta}\) and \(\mathcal{N}_{\theta^{\prime}}\)[31, 36, 43], such as the accuracy difference \(|acc(\mathcal{N}_{\theta^{\prime}})-acc(\mathcal{N}_{\theta})|\) for classification tasks. MPR is popularly adopted in DNN repair scenarios, and therefore, in what follows, we tackle Problem 2 while considering MPR simultaneously.
## IV BIRDNN Repair Framework
In this section, we give a detailed illustration of the proposed BIRDNN framework, including the domain behavior description and the behavior-imitation based repair for DRPs, in retraining and fine-tuning styles, respectively.
### _Overall Workflows_
The overall workflows of BIRDNN are demonstrated in Fig. 1. Beginning with a buggy DNN, its behaviors over negative domains are characterized from the perspective of positive and negative samples. Following this, BIRDNN corrects the unexpected predictions of the negative samples towards the nearby expected behaviors during the retraining repair process. Alternatively, for the fine-tuning based approach, BIRDNN localizes the most responsible neurons by investigating their behavior differences. Finally, a repaired DNN is returned by either the retraining based or the fine-tuning based patch.
### _Domain Behavior Description_
First of all, we start with sample behaviors. Recalling the definitions of expected and unexpected behaviors, the input samples w.r.t. the buggy DNN can be correspondingly categorized into _positive samples_ and _negative samples_. Similarly, we characterize the buggy DNN's behaviors within an input domain from the perspective of positive and negative samples via Monte Carlo sampling. That is, we collect sets of positive and negative samples from the negative domains to illustrate the expected and unexpected behaviors over those domains, as shown in Fig. 1b.
To cope with the infinity and non-ergodicity of samples within domains, we divide the samples within negative domains into the two categories and concentrate on the behaviors of the buggy DNN on these positive and negative samples when treating the DNN repair problems. The reasons are twofold. On the one hand, the positive and negative predictions generally delineate the expected output region and the specification-violated region, characterizing the decision boundary of the DNN to some extent. On the other hand, the behaviors of the buggy DNN on the positive samples provide significant guidance for correcting the erroneous behaviors of the original DNN, i.e., _imitating the expected behaviors of the positive samples_.
A rare case is when the negative domains entirely violate the property specification, i.e., we cannot theoretically obtain any positive samples from within the negative domains. To overcome this dilemma, we define a relaxed region, termed the _domain \(\delta\)-neighbourhood_.
**Definition 2**: _(Domain \(\delta\)-neighbourhood) For an input domain \(D:=[lb_{i},ub_{i}],i\in\{1,2,\cdots,m\}\) and a constant \(\delta>0\), where \(lb_{i}\) (\(ub_{i}\)) is the lower (upper) bound of the \(i\)-th input dimension, its domain \(\delta\)-neighbourhood refers to the region \(\delta_{D}:=[lb_{i}-\delta,ub_{i}+\delta],i\in\{1,2,\cdots,m\}\)._
The value of \(\delta\) is chosen such that positive samples exist in the associated domain \(\delta\)-neighbourhood. Moreover, provided that \(\delta\) is a sufficiently small constant, the behaviors of the DNN on the neighbourhood samples are similar to those on the original domain. This is why we collect positive samples in the domain \(\delta\)-neighbourhood, i.e., the region slightly relaxed from the negative domains specified by the pre-condition.
Back to the domain behavior characterization, the sample-based characterization provides a probably approximately correct guarantee on repairing buggy DNNs over negative domains. Assume a sampling process in which the probabilities of positive and negative samples are \(q\) and \(1-q\), and a repair process in which a proportion \(u\) of the negative samples are repaired while a proportion \(v\) of the positive samples come to disobey the properties; then, under a large total sample size, the repaired DNN satisfies the properties with probability \(q(1-v)+(1-q)u\).
The above-mentioned \(u\) and \(v\) are termed _property improvement_ and _performance drawdown_, catering to repairing buggy DNNs and to the minimal patch requirement, respectively. The Monte Carlo sampling based characterization technique is widely used in statistical model checking [44, 45], and there exist bounds on the total sample size [46, 47], such as the Chernoff bound. The sample-based behavior characterization reduces DRPs to sample-based DNN repair problems. Consequently, the main idea of repairing a DNN \(\mathcal{N}_{\theta}\) with respect to a negative input domain set \(S_{nd}\) (Problem 2) is to correct the unexpected behaviors of the negative samples on the negative domains. In what follows, we present our repair approaches to DRPs, in retraining and fine-tuning styles.
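For concreteness, the guarantee above can be estimated directly from the Monte Carlo sample counts. The following is a minimal Python sketch (ours, for intuition, not part of BIRDNN's released code); the count arguments are hypothetical:

```python
# Illustrative sketch: estimating the probability that a repaired DNN
# satisfies a property from Monte Carlo sample counts.
def satisfaction_probability(n_pos, n_neg, n_neg_repaired, n_pos_broken):
    """q(1-v) + (1-q)u, with q, u, v estimated from sample counts."""
    q = n_pos / (n_pos + n_neg)       # fraction of positive samples
    u = n_neg_repaired / n_neg        # property improvement
    v = n_pos_broken / n_pos          # performance drawdown
    return q * (1 - v) + (1 - q) * u

# e.g., 9000 positive / 1000 negative samples, 990 negatives repaired,
# 45 positives broken  ->  0.9*(1-0.005) + 0.1*0.99 = 0.9945
print(satisfaction_probability(9000, 1000, 990, 45))
```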
Fig. 1: Workflows of BIRDNN framework. Retraining workflows: (a)-(b)-(c)-(e); Fine-tuning workflows: (a)-(b)-(d)-(e).
### _Retraining Approach to DRPs_
Overall, the retraining style patch for DRPs is composed of three procedures, namely sampleCollect, negativeCorrect and retrainDNN, as shown in Alg. 1. The positive and negative samples within the negative domains, together with their output behaviors, are collected in sampleCollect. Then negativeCorrect (i.e., Alg. 2) corrects the unexpected behaviors and prepares the repairing dataset by imitating the expected behaviors of the positive samples. Finally, the procedure retrainDNN retrains the original DNN, aiming to eliminate the unexpected behaviors while keeping the performance drawdown minimal.
The procedure sampleCollect is shown in Lines 1-7 of Alg. 1. We collect the sample sets \(S_{p}\) and \(S_{n}\), composed of positive and negative samples from the negative domains extracted from the violated properties. Note that a domain \(\delta\)-neighbourhood may be needed, in which case the \(\delta\) value should be as small as possible while still allowing the required number of positive samples to be sampled. Meanwhile, we also record the output behaviors \(Y_{p}\) and \(Y_{n}\) of the buggy DNN for the subsequent negative behavior correction.
Alg. 2 demonstrates the procedure negativeCorrect in detail (Line 8 of Alg. 1). Due to a characteristic of the DNN training process, namely that loss functions are defined on the output layer, here we only focus on the final part of the DNN behaviors, the predictions (i.e., \(\mathcal{N}_{\theta}(\cdot)\) or \(\mathcal{N}_{\theta}^{L-1}(\cdot)\)), and eliminate the negative behaviors by imitating the surrounding positive behaviors. For each negative prediction \(\mathbf{y}_{n}\in Y_{n}\), the top \(k\) closest positive predictions are selected, and their mean value is assigned as the new label for the corresponding negative sample if the mean value satisfies the post-condition that \(\mathbf{y}_{n}\) violates. Otherwise, the closest positive prediction is chosen as the candidate label for the following DNN retraining. Obviously, this label assignment ensures the correctness of the newly assigned label for each negative input sample. With the calibrated labels, the repairing dataset \(\mathcal{D}_{re}\) is constructed in Line 9 of Alg. 1. This step is visualized in Fig. 1c.
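To make the label-correction step concrete, below is a minimal Python sketch of the idea behind negativeCorrect; the predicate `satisfies_post` is a hypothetical stand-in for checking the post-condition of the violated property, and is not part of the paper's notation:

```python
import numpy as np

# A sketch of the label correction in negativeCorrect (Alg. 2).
# Y_n, Y_p: arrays of negative/positive predictions, one row per sample.
def correct_labels(Y_n, Y_p, satisfies_post, k=5):
    corrected = []
    for y_n in Y_n:                                # each negative prediction
        d = np.linalg.norm(Y_p - y_n, axis=1)      # distances to positives
        top_k = Y_p[np.argsort(d)[:k]]             # k closest positive outputs
        mean = top_k.mean(axis=0)
        # use the mean of the k closest positives if it is safe, otherwise
        # fall back to the single closest positive prediction
        corrected.append(mean if satisfies_post(mean) else top_k[0])
    return np.stack(corrected)
```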
When it comes to the final procedure retrainDNN with training dataset \(\mathcal{D}\) and repairing dataset \(\mathcal{D}_{re}\) (Lines 11-13 of Alg. 1), it is of vital importance to clarify the formulation of the _loss function_ utilized for the retraining process.
#### IV-C1 Loss formulation for DRPs
The repairing dataset \(\mathcal{D}_{re}:=\{(\mathbf{x}_{re}^{i},\mathbf{l}_{re}^{i})\}_{i=1}^{\#\mathcal{D}_{re}}\) is constructed to eliminate the unexpected behaviors on the negative domain set \(S_{nd}\). The retraining process aims to minimize the discrepancy between \(\mathcal{N}_{\theta^{\prime}}\)'s predictions and the corrected labels, which is formulated in Eqn. (1)
\[\mathcal{L}_{\text{DRP}}(\theta^{\prime}):=\sum_{i=1}^{\#\mathcal{D}_{re}}|| \mathcal{N}_{\theta^{\prime}}(\mathbf{x}_{re}^{i})-\mathbf{l}_{re}^{i}||_{\ell}, \tag{1}\]
where \(||\cdot||_{\ell}\) is the \(\ell\)-norm distance. The unexpected behaviors on the negative samples are thoroughly eliminated when the loss function defined in Eqn. (1) is minimized to 0 during the retraining process, which is called provable DNN repair [34].
#### IV-C2 Loss formulation for MPR
Different from traditional DNN training, MPR requires the modified DNN \(\mathcal{N}_{\theta^{\prime}}\) to preserve the original performance of \(\mathcal{N}_{\theta}\). Consequently, we also take the original input-output dataset \(\mathcal{D}:=\{(\mathbf{x}^{i},\mathbf{l}^{i})\}_{i=1}^{\#\mathcal{D}}\) into account and pursue minimal performance drawdown via the loss function \(\mathcal{L}_{\text{MPR}}(\theta^{\prime})\) shown in Eqn. (2).
\[\mathcal{L}_{\text{MPR}}(\theta^{\prime}):=\sum_{i=1}^{\#\mathcal{D}}|| \mathcal{N}_{\theta^{\prime}}(\mathbf{x}^{i})-\mathbf{l}^{i}||_{\ell} \tag{2}\]
Therefore, for the procedure retrainDNN, dealing with DRPs under MPR is a multi-objective optimization, and the final loss function can be formulated as
\[\underset{\theta^{\prime}}{\text{minimize}}(\alpha\cdot\mathcal{L}_{\text{ DRP}}(\theta^{\prime})+\beta\cdot\mathcal{L}_{\text{MPR}}(\theta^{\prime})) \tag{3}\]
where \(\alpha,\ \beta\in[0,1]\), \(\alpha+\beta=1\), and \(\theta^{\prime}\) is initialized as \(\theta\). With the configuration \(\alpha=1,\ \beta=0\), the retraining process patches the original DNN \(\mathcal{N}_{\theta}\) without considering MPR, whereas with \(\alpha=0,\ \beta=1\) it focuses only on preserving the original performance. The procedure retrainDNN terminates when the negative samples meet the specifications or the maximum iteration number is reached, and a repaired DNN \(\mathcal{N}_{\theta^{\prime}}\) is returned in Line 13 of Alg. 1, as shown in Fig. 1e.
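As an illustration, the combined objective of Eqn. (3) can be sketched in a few lines of PyTorch, assuming the \(\ell\)-norm is instantiated as the \(L^{2}\) norm; `model`, the repairing pairs `x_re`/`l_re` and the original pairs `x`/`l` are placeholders, not BIRDNN's actual API:

```python
import torch

# A sketch of the retraining objective in Eqn. (3), with the l-norm
# taken as L2; names are illustrative placeholders.
def repair_loss(model, x_re, l_re, x, l, alpha=0.5, beta=0.5):
    loss_drp = torch.linalg.vector_norm(model(x_re) - l_re, ord=2, dim=1).sum()  # Eqn. (1)
    loss_mpr = torch.linalg.vector_norm(model(x) - l, ord=2, dim=1).sum()        # Eqn. (2)
    return alpha * loss_drp + beta * loss_mpr   # Eqn. (3), minimized over theta'
```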
### _Fine-tuning Approach to DRPs._
In addition to the retraining based repair for DRPs, BIRDNN also provides a fine-tuning style patch for DRPs. The algorithm is demonstrated in Alg. 3. Likewise, the algorithm can be divided into three procedures, i.e., sampleCollect, behaveAnalyze and fineTuneDNN. After collecting the sets of positive and negative samples in procedure sampleCollect (Lines 1-4), procedure behaveAnalyze (Line 6) performs fault localization by analyzing the discrepancies between the behaviors on the negative samples (unexpected behaviors) and those on the positive samples (expected behaviors). Then, fineTuneDNN (Lines 9-13) tunes the weights of the neurons most responsible for the unexpected behaviors with optimization methods.
Compared with Section IV-C, which focuses only on the final predictions, BIRDNN makes a more insightful investigation of neuron behaviors in procedure behaveAnalyze, which is illustrated in Alg. 4. Recalling the definition of behaviors, the behaviors of DNN \(\mathcal{N}_{\theta}\) on a negative input sample \(\mathbf{x}_{n}\) and on the positive sample set \(S_{p}\) are respectively denoted by
\[\{\mathbf{x}_{n},\mathcal{N}_{\theta}^{1}(\mathbf{x}_{n}),\mathcal{N}_{\theta}^{2}( \mathbf{x}_{n}),\cdots,\mathcal{N}_{\theta}^{L-1}(\mathbf{x}_{n})\}\]
and
\[\{\mathbf{x}_{p}^{j},\mathcal{N}_{\theta}^{1}(\mathbf{x}_{p}^{j}),\mathcal{N}_{\theta }^{2}(\mathbf{x}_{p}^{j}),\cdots,\mathcal{N}_{\theta}^{L-1}(\mathbf{x}_{p}^{j})\}_{j=1 }^{\#S_{p}}.\]
Subsequently, the _layer responsibility_\(\mathbf{r}^{i}\) of the neurons on the \(i\)-th layer for the unexpected behaviors is defined as follows,
\[\mathbf{r}^{i}:=\sum_{j=1}^{\#S_{p}}|\mathcal{N}_{\theta}^{i}(\mathbf{x}_{n})-\mathcal{ N}_{\theta}^{i}(\mathbf{x}_{p}^{j})|,\ i\in\{1,2,\cdots,L-1\}. \tag{4}\]
Further, the _neuron responsibility_\(\mathbf{r}^{ij}\ (0\leq j<d_{i})\) indicates to what extent the \(j\)-th neuron on the \(i\)-th layer is responsible for the unexpected behaviors, and we name \(\mathbf{R}:=\{\mathbf{r}^{i}\}_{i=1}^{L-1}\) the _responsibility matrix_. Essentially, the main idea of fault localization is also behavior imitation: the responsibility matrix reveals the behavior difference of each neuron over negative and positive samples, and the larger a neuron's behavior difference, the more necessary it is for that neuron to imitate the expected behaviors to minimize the discrepancy. Since there is no need to change the input layer when repairing a buggy DNN, it is unnecessary to define the responsibility metric for the input neurons1. Likewise, these steps are visualized in Fig. 1d.
Footnote 1: Actually, the defined responsibility metric also works for the input neurons.
In addition, we also modify Eqn. (4) into Eqn. (5) to break the two inner loops in Alg. 4 (Lines 3-5), further improving the efficiency of the fault localization.

\[\mathbf{r}^{i}:=|\sum_{t=1}^{\#S_{n}}\mathcal{N}_{\theta}^{i}(\mathbf{x}_{n}^{t})-\sum_{j=1}^{\#S_{p}}\mathcal{N}_{\theta}^{i}(\mathbf{x}_{p}^{j})|,\ i\in\{1,\cdots,L-1\}. \tag{5}\]
After computing the responsibility matrix, procedure fineTuneDNN (Lines 9-13 in Alg. 3) sorts the responsibility matrix and selects the \(r\) most responsible (largest) neurons, whose weights are optimized subsequently. As for the optimization method, we choose Particle Swarm Optimization (PSO) [48] for its convergence speed and accuracy in continuous optimization; other algorithms are also viable alternatives, such as Differential Evolution (DE) [49], Markov Chain Monte Carlo (MCMC) [50] and the Pelican Optimization Algorithm [51].
```
Input: a buggy DNN \(\mathcal{N}_{\theta}\), positive sample set \(S_{p}\), negative sample set \(S_{n}\), initialized responsibility matrix \(\mathbf{R}\).
Output: computed responsibility matrix \(\mathbf{R}\).
procedure behaveAnalyze(\(\mathcal{N}_{\theta},S_{p},S_{n},\mathbf{R}\))
    for \(i\gets 1\) to \(L-1\) do
        for each \(\mathbf{x}_{n}\) in \(S_{n}\) do
            for each \(\mathbf{x}_{p}\) in \(S_{p}\) do
                \(\mathbf{R}^{i}\leftarrow\mathbf{R}^{i}+|\mathcal{N}_{\theta}^{i}(\mathbf{x}_{n})-\mathcal{N}_{\theta}^{i}(\mathbf{x}_{p})|\)
    return \(\mathbf{R}\)
```
**Algorithm 4** Fault Localization of Responsible Neurons
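A compact Python rendering of behaveAnalyze is sketched below, using the vectorized form of Eqn. (5) to avoid the double loop of Alg. 4; the helper `layer_outputs` is hypothetical and assumed to return the hidden-layer activations \(\mathcal{N}_{\theta}^{1}(\mathbf{x}),\ldots,\mathcal{N}_{\theta}^{L-1}(\mathbf{x})\) for an input \(\mathbf{x}\):

```python
import numpy as np

# A sketch of behaveAnalyze via the vectorized Eqn. (5); `layer_outputs(x)`
# is a hypothetical helper returning [N^1(x), ..., N^{L-1}(x)] as arrays.
def responsibility_matrix(layer_outputs, S_p, S_n):
    # accumulate per-layer activation sums over all negative/positive samples
    neg = [sum(layer) for layer in zip(*(layer_outputs(x) for x in S_n))]
    pos = [sum(layer) for layer in zip(*(layer_outputs(x) for x in S_p))]
    # |sum of negative activations - sum of positive activations| per layer
    return [np.abs(n - p) for n, p in zip(neg, pos)]
```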
Considering multiple particles in the search space, \(\vec{x_{i}}\) and \(\vec{v_{i}}\) stand for the location and velocity of each particle. During each search iteration, PSO updates \(\vec{x_{i}}\) and \(\vec{v_{i}}\) according to a fitness function. The evolution rule updates \(\vec{v_{i}}\) based on the current velocity \(\vec{v_{i}}\), the best local location found previously \(\vec{p_{i}}\), and the best global location found previously \(\vec{p_{g}}\); it then updates \(\vec{x_{i}}\) with the current velocity \(\vec{v_{i}}\) and location \(\vec{x_{i}}\). The PSO update equations are formulated as follows [52].

\[\begin{split}\vec{v_{i}}&\leftarrow\omega\vec{v_{i}}+R(0,c_{1})(\vec{p_{i}}-\vec{x_{i}})+R(0,c_{2})(\vec{p_{g}}-\vec{x_{i}}),\\ \vec{x_{i}}&\leftarrow\vec{x_{i}}+\vec{v_{i}},\end{split} \tag{6}\]
where \(\omega,\ c_{1},\ c_{2}\) respectively represent the inertia weight, cognitive parameter and social parameter. \(R(0,c)\) refers to a random value sampled from \([0,c]\). More importantly, the fitness function in PSO determines the best location.
Coming back to Line 12 of procedure fineTuneDNN, the weights of the \(r\) most responsible neurons are the "particles" for PSO; their locations are initialized to the original values in \(\theta\) and their velocities are set to zero. The fitness function \(\mathcal{F}\) for parameter optimization is defined as follows.
\[\mathcal{F}:=\alpha\cdot\text{unexpBeh}+\beta\cdot\text{drawDown} \tag{7}\]
where \(\alpha,\ \beta\in[0,1]\) and \(\alpha+\beta=1\). _unexpBeh_ (unexpected behaviors) indicates the percentage of negative samples violating the specifications, and _drawDown_ is the DNN performance loss, such as the drop in classification accuracy. The construction of Eqn. (7) means that PSO attempts to minimize the unexpected behaviors and the performance drawdown simultaneously. Moreover, the algorithm terminates if an intolerable performance degradation arises between the original and modified DNNs (corresponding to MPR) or the maximum search number is reached, and the repaired DNN \(\mathcal{N}_{\theta^{\prime}}\) is returned in Line 13 of Alg. 3, as shown in Fig. 1e.
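For reference, one PSO search iteration following Eqn. (6) can be sketched as follows; the fitness evaluation of Eqn. (7) is assumed to be supplied by the caller, the uniform sampling realizes \(R(0,c)\), and the default parameters are those used in Section V-C:

```python
import numpy as np

rng = np.random.default_rng(0)

# A sketch of one PSO update following Eqn. (6); x, v, p_best, g_best
# hold a particle's location/velocity and the best local/global locations.
def pso_step(x, v, p_best, g_best, w=0.8, c1=0.41, c2=0.41):
    r1 = rng.uniform(0, c1, size=x.shape)   # R(0, c1)
    r2 = rng.uniform(0, c2, size=x.shape)   # R(0, c2)
    v = w * v + r1 * (p_best - x) + r2 * (g_best - x)
    x = x + v
    return x, v
```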
## V Experiments
### _Experiment Setup_
In this section, the proposed BIRDNN framework is evaluated and compared with state-of-the-art DNN repair works on the ACAS Xu DNN repair problem, in both retraining and fine-tuning styles, which are respectively termed _BIRDNN-RT_ and _BIRDNN-FT_ hereafter.
#### V-A1 ACAS Xu DNN repair problem
The ACAS Xu DNN repair problem, i.e., patching the ACAS Xu DNNs [53] with respect to certain safety properties, is a widely utilized and evaluated DNN domain-wise repair problem. ACAS Xu contains an array of 45 DNNs (organized as a \(5\times 9\) array) that produce horizontal maneuver advisories for the unmanned version of Airborne Collision Avoidance System X, a highly safety-critical system developed by the Federal Aviation Administration. More concretely, these DNNs take a five-dimensional input representing the scenario around the aircraft and output a five-dimensional prediction indicating five possible advisories. The ACAS Xu DNNs are associated with 10 safety properties, \(\phi_{1}\)-\(\phi_{10}\), whose pre-/post-conditions are described as polytope sets, satisfying the definition of DRPs. These safety properties require that the output corresponding to an input within a specific input region must fall in the given safe polytope(s). Verification works [54, 18] discover that, among these 45 DNNs, 35 violate at least one safety property, e.g., \(N_{2,9}\) violates the safety property \(\phi_{8}\).
We follow the ACAS Xu DNN evaluation cases adopted in the previous works Veritex [55], CARE [36] and PRDNN [34]. Herein, as classified in Veritex, we select one simple test case, \(N_{3,3}\), and two hard test cases, \(N_{1,9}\) and \(N_{2,9}\), to evaluate the BIRDNN framework. The detailed information of these DNNs is displayed in Table I (_DNN\({}_{1}\)_, _DNN\({}_{2}\)_ and _DNN\({}_{3}\)_).
#### V-A2 Evaluation platform
All the experiments are conducted on a machine with 12th Gen Intel(R) Core(TM) i9-12900H 2.50 GHz and 16 GB system memory. The codes to reproduce our experimental results are available at [https://github.com/ByteTao5/BIRDNN](https://github.com/ByteTao5/BIRDNN).
#### V-A3 Research questions
We report experimental results to answer the following research questions, demonstrating BIRDNN's effectiveness, efficiency and compatibility:
* _RQ1: Can BIRDNN-RT repair the buggy DNNs successfully? How about its efficiency compared to other retraining based methods?_
* _RQ2: Can BIRDNN-FT repair the buggy DNNs successfully? How about its efficiency compared to other fine-tuning based methods?_
* _RQ3: What about the compatibility of BIRDNN on DNNs with different activation functions?_
### _Evaluations on BIRDNN-RT_
The retraining performance of BIRDNN-RT on the ACAS Xu DNNs is compared with Veritex, a provable retraining repair method. Notably, the original training data of the ACAS Xu DNNs are not publicly available. For fair comparison, we follow the experiment settings adopted in Veritex and uniformly sample a set of 10K training data and 5K test data from the input space. Veritex does not explicitly distinguish the positive and negative samples in the training data in advance; the violated safety properties are improved by minimizing the distance between the unsafe output region (exact polytope regions obtained by reachability analysis) and the safety region. In contrast, BIRDNN reassigns the negative samples of the training data with the predictions of the nearest positive samples, and we set the proportions of negative and positive samples to 10% and 90%, respectively, in what follows. Additionally, the weights balancing the original performance and safety improvement are set as \(\alpha=\beta=0.5\). We also introduce an early-stopping mechanism whereby the retraining process terminates when the safety improvement remains stable over 10 successive epochs. The early stopping also helps satisfy the minimal patch requirement to some extent.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
 & \multicolumn{2}{c}{Veritex} & \multicolumn{2}{c}{BIRDNN-RT} \\ \cline{2-5}
DNN & Accuracy & Time & Accuracy & Time \\ \hline
_DNN\({}_{1}\)_ & 100\% & 4.96 & 100\% & **1.3** \\
_DNN\({}_{2}\)_ & 100\% & Timeout* (11250.7) & 100\% & **174.70** \\
_DNN\({}_{3}\)_ & 100\% & 2167.16 & 100\% & **87.45** \\ \hline
\multicolumn{5}{l}{_Literature reported results\({}^{\dagger}\):_} \\ \hline
Method & Accuracy & Time (_DNN\({}_{3}\)_) & Time (_DNN\({}_{4}\)_) & \\ \hline
ART & 89.08\% \(\sim\) 98.06\% & 67.5 & 72.6 & \\
ART-refinement & 88.82\% \(\sim\) 95.85\% & 82.5 & 88.4 & \\ \hline \hline
\end{tabular}
* The running time limit is set to 3 hours. The time in parentheses is the running time reported in Veritex [55].
\(\dagger\) The listed results are as recorded in Veritex [55].
\end{table} TABLE II: Performance Comparisons of BIRDNN-RT and Veritex.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
DNN & Description & Architecture & \#Param \\ \hline
_DNN\({}_{1}\)_ & ACAS Xu, \(N_{3,3}\) & 7-layer ReLU FFNN & 13,350 \\
_DNN\({}_{2}\)_ & ACAS Xu, \(N_{1,9}\) & 7-layer ReLU FFNN & 13,350 \\
_DNN\({}_{3}\)_ & ACAS Xu, \(N_{2,9}\) & 7-layer ReLU FFNN & 13,350 \\
_DNN\({}_{4}\)_ & Modified ACAS Xu, \(N_{1,9}\) & 7-layer Tanh FFNN & 13,350 \\
_DNN\({}_{5}\)_ & Modified ACAS Xu, \(N_{1,9}\) & 7-layer LeakyReLU FFNN & 13,350 \\
_DNN\({}_{6}\)_ & Modified ACAS Xu, \(N_{1,9}\) & 7-layer ELU FFNN & 13,350 \\ \hline \hline
\end{tabular}
\end{table} TABLE I: DNN models used in Experiments.
_Repair performance._ The performance of BIRDNN-RT on the ACAS Xu DNNs is displayed in Table II, versus the state-of-the-art work Veritex, in terms of _Accuracy_ on the test data and _Running Time_. Because the test set contains both positive and negative samples, the _Accuracy_ indicator combines safety improvement and performance drawdown; an accuracy of 100% means complete repair with no performance loss. Moreover, we also list the performance of ART [56], a DNN property-guided training method, for a more comprehensive reference. It can be observed that both Veritex and BIRDNN-RT repair these buggy DNNs successfully, i.e., with an accuracy of 100% on the test data. As for the running time, Veritex takes a large amount of time, dominated by the computation of exact output regions, while BIRDNN-RT significantly reduces the time consumed by Veritex, by 73.8% (\(DNN_{1}\)), 95.96% (\(DNN_{3}\)) and 98.45% (\(DNN_{2}\)). Meanwhile, compared with the ART (or ART-refinement) method, which trains safe ACAS Xu DNNs from scratch, BIRDNN-RT takes a little more time while reaching higher accuracy.
_Answer RQ1: BIRDNN-RT can successfully repair the buggy DNNs with almost no performance loss, and it provides much more efficient solutions to DRPs compared with state-of-the-art works._
### _Evaluations on BIRDNN-FT_
As for patching the ACAS Xu DNNs with the fine-tuning based method, in this subsection we mainly evaluate the performance of BIRDNN-FT from the aspects of _cross-layer repair_ and _layer-wise repair_. The former allows us to localize responsible neurons across multiple layers, while the latter restricts the neurons to be tuned to a single layer. We compare BIRDNN-FT with the state-of-the-art tools CARE and PRDNN, corresponding to cross-layer repair and layer-wise repair, respectively.
#### V-C1 Cross-layer Repair Performance
Likewise, we adopt the same setups as CARE herein. For each DNN, a set of 10K negative samples is randomly drawn to improve the safety property, and another set of 10K positive samples is randomly collected to evaluate the drawdown of the original performance. Another two sets with the same settings are collected for testing. We set the number of neurons to be repaired to 10. For the PSO algorithm, as generally recommended in [57], the parameters are set as \(\omega=0.8,c_{1}=c_{2}=0.41\), and the particle number and the maximum iteration number are 20 and 100, respectively. Moreover, to reduce the search time, the search terminates if it fails to find a better location in 10 consecutive iterations. The weight configuration balancing performance drawdown and safety improvement is \(\alpha=0.6,\beta=0.4\).
_Repair Performance._ The performances of BIRDNN-FT and CARE on the test data are compared in Table III, in terms of _Improvement_, _Drawdown_, _Localization Time_ (_Loc Time_), and _Total Time_. Improvement and Drawdown indicate the repair performance in safety correction and accuracy maintenance, respectively. Localization Time and Total Time are the running times consumed by the fault localization and by the whole patching procedure.
From Table III, it can be seen that BIRDNN-FT repairs all the buggy DNNs successfully without performance drawdown, which is slightly better than CARE. CARE provides almost complete repair on these DNNs, and its patch also loses no original accuracy. As for the running time, BIRDNN-FT greatly reduces the time consumption, in both localization time and total time. The behavior-imitation based fault localization of BIRDNN-FT is much more efficient than the causality-based one of CARE, reducing the computation time by about 99.8%. Compared against the final repair performance of CARE, however, the proposed lightweight fault localization does not sacrifice safety improvement or original performance. Since both methods adopt the same optimization algorithm (PSO), the comparison suggests that BIRDNN-FT finds a more appropriate set of responsible neurons via behavior analysis, which further reduces the subsequent search time of the PSO algorithm; the total time of BIRDNN-FT is only about 4%\(\sim\)8% of that taken by CARE.
#### V-C2 Layer-wise Repair Performance
When we consider the responsibility matrix row by row, it indicates which neurons located on a certain layer need repairing, i.e., we select responsible neurons only in a given layer for repair, which we call _layer-wise DNN repair_. PRDNN is one of the most representative layer-wise repair works. It is a provable fine-tuning repair method based on DNN verification techniques with the polytope domain, and it repairs a specific layer of the buggy DNN without fault localization. The layer-wise repair performances of BIRDNN-FT and PRDNN on \(DNN_{3}\) are displayed in Table IV. It can be observed that PRDNN only succeeds in repairing the buggy DNN on the output layer and fails on the other layers. This results from program hangs caused by the curse of dimensionality encountered in the computation of exact linear regions. By contrast, BIRDNN-FT works successfully for every layer and reaches almost complete repair (except \(L_{4}\)) without performance drawdown. Likewise, BIRDNN-FT is also more efficient than PRDNN in running time.
Moreover, we list the average improvement (_Ave_Improve_) and average drawdown (_Ave_Drawdown_) of BIRDNN-FT, CARE and PRDNN on the three DNNs in Table V.
\begin{table}
\begin{tabular}{c c c c c} \hline
Method & Item & \(DNN_{1}\) & \(DNN_{2}\) & \(DNN_{3}\) \\ \hline
\multirow{4}{*}{CARE} & Improvement & 1.0 & 0.9999 & 0.9935 \\
 & Drawdown & 0.0 & 0.0 & 0.0 \\
 & Loc Time & 35.96 & 41.33 & 36.24 \\
 & Total Time & 201.32 & 370.40 & 197.55 \\ \hline
\multirow{4}{*}{BIRDNN-FT} & Improvement & 1.0 & **1.0** & **1.0** \\
 & Drawdown & 0.0 & 0.0 & 0.0 \\
 & Loc Time & **0.067** & **0.071** & **0.08** \\
 & Total Time & **16.80** & **16.74** & **15.95** \\ \hline
\end{tabular}
\end{table} TABLE III: Performance Comparisons of BIRDNN-FT and CARE.
In particular, PRDNN fails to repair \(\textit{DNN}_{2}\) due to program hangs, so its average results are evaluated on \(\textit{DNN}_{1}\) and \(\textit{DNN}_{3}\) only. As demonstrated in Table V, BIRDNN-FT outperforms the previous methods, CARE and PRDNN, repairing the buggy DNNs completely and with no performance drawdown.
#### V-C3 Hyperparameter Comparisons
Additionally, we compare the performance under different hyperparameter selections. We vary the balancing weight \(\alpha\) (with \(\beta=1-\alpha\)) to evaluate the cross-layer repair performance of BIRDNN-FT on \(\textit{DNN}_{2}\), and we set different numbers of repaired neurons for the layer-wise repair on \(\textit{DNN}_{3}\).
The results are shown in Fig. 2. According to Fig. 2a, the weight \(\alpha\) balancing the performance drawdown and safety improvement has only a slight effect on the final results, which coincides with the conclusions reported in CARE. As for the comparison among different numbers of repaired neurons in Fig. 2b, five neurons are enough for every layer, while the cases of one and three neurons fail to repair the buggy DNN on some layers, especially the fourth and the final layers.
### _Activation Compatibility of BIRDNN_
In this subsection, we modify the original ACAS Xu DNN \(\textit{DNN}_{2}\) to illustrate the compatibility of BIRDNN. That is, we replace the original PWL activation function ReLU with different activation functions, Tanh, LeakyReLU and ELU, whose definitions and adopted coefficient values are displayed in Table VI. Among these, Tanh and ELU are non-PWL functions, which means that Veritex and PRDNN cannot handle them. We term the modified DNNs \(\textit{DNN}_{4}\), \(\textit{DNN}_{5}\) and \(\textit{DNN}_{6}\), as shown in Table I.
The experiment settings follow those in Section V-C, and the repair performance of BIRDNN-FT on these DNNs is illustrated in Fig. 3, in terms of safety improvement, performance drawdown and running time. Overall, BIRDNN-FT repairs all the buggy DNNs almost perfectly with little performance drawdown, and the running time is relatively stable. The experimental results demonstrate that BIRDNN possesses excellent compatibility with different activation functions, which is not available to Veritex and PRDNN.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
 & CARE & PRDNN & BIRDNN-FT \\ \hline
Ave\_Improve & 99.6\% & 97.4\% & **100\%** \\
Ave\_Drawdown & 0.15\% & 0.01\% & **0\%** \\ \hline \hline
\end{tabular}
\end{table} TABLE V: Average Performance of BIRDNN-FT, CARE and PRDNN.
\begin{table}
\begin{tabular}{c c c} \hline \hline
 & Definition & Coefficient \\ \hline
ReLU & \(\max(x,0)\) & \(-\) \\
Tanh & \((e^{x}-e^{-x})/(e^{x}+e^{-x})\) & \(-\) \\
LeakyReLU & \(x\) if \(x>0\); \(\alpha x\) if \(x\leq 0\) & \\
ELU & \(x\) if \(x>0\); \(\alpha(e^{x}-1)\) if \(x\leq 0\) & \(\alpha=0.5\) \\ \hline \hline
\end{tabular}
\end{table} TABLE VI: Definitions of Different Activation Functions.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
 & \multicolumn{2}{c}{PRDNN} & \multicolumn{2}{c}{BIRDNN-FT} \\ \hline
 & Improve/Acc & Time & Improve/Acc & Time \\ \hline
\(L_{1}\) & \(-^{*}\) & \(-\) & **100\%/100\%** & **17.83** \\
\(L_{2}\) & \(-\) & \(-\) & **100\%/100\%** & **17.30** \\
\(L_{3}\) & \(-\) & \(-\) & **100\%/100\%** & **17.19** \\
\(L_{4}\) & \(-\) & \(-\) & **99.96\%/100\%** & **17.08** \\
\(L_{5}\) & \(-\) & \(-\) & **100\%/100\%** & **17.80** \\
\(L_{6}\) & 100\%/100\% & 20.50 & 100\%/100\% & **19.57** \\ \hline \hline
\end{tabular}
* PRDNN fails to repair all layers except the final (output) layer.
\end{table} TABLE IV: Layer-wise Repair Performance Comparisons of BIRDNN-FT and PRDNN.
Fig. 2: Performance on different hyperparameters.
### _Threats to Validity_
The primary threat is that BIRDNN is essentially a probably approximately correct repair framework. It provides a probabilistic guarantee for DRPs, and the confidence of this guarantee is directly related to the sample size.
Another possible threat is the risk of overfitting resulting from sample-based repair, which is common to previous methods. Therefore, following their setups, distinct and large training/repairing and test datasets are adopted to avoid overfitting and to evaluate the generalization of the repaired DNNs.
## VI Related Work
DNN repair has received significant attention in recent years, and dozens of works on DNN repair have witnessed remarkable advances. In what follows, we introduce the related work in three categories: retraining, fine-tuning without fault localization, and fine-tuning with fault localization.
**Retraining.** Veritex [55] is one of the most relevant works to our research and it presents an approach to repairing unsafe ReLU DNNs via reachability analysis. To patch DNNs against the violated safety properties, the retraining process of Veritex utilizes a loss function to constrain the distance between the exact unsafe output region and the safe domain. Additionally, an extra loss term is combined to minimize the DNN patch and avoid introducing new unexpected behaviors synergistically. In contrast, we assign negative samples with expected labels via behavior imitation, which is more lightweight. Before that, [58] addresses DNN repair problems by introducing a gradient-based and model-agnostic editable training strategy, named editable neural networks (ENNs).
**Fine-tuning without fault localization.** Resorting to DNN verification techniques, [33] utilizes recent advances to patch a faulty DNN to satisfy specific requirements in a provably minimal manner. Additionally, [59] presents a method to repair feed-forward deep neural networks by reducing the predicate satisfaction formulated from the patch problem to a mixed integer quadratic program (MIQP) that calibrates the weights of a single layer. Furthermore, PRDNN [34], another work related to ours, introduces a novel and equivalent representation of buggy ReLU DNNs, Decoupled DNNs (DDNNs), reducing the DNN repair problem to a linear programming (LP) problem. However, the repair process works without guidance and modifies the parameters of an arbitrary DNN layer, without fault localization preprocessing. Also, due to the expensive complexity of computing the linear polytope regions of DNNs, the dimension of the repaired polytopes is limited; generally, only one or two dimensions are feasible. Besides, converting the resultant DDNNs back into standard, equivalent DNNs remains largely an open problem.
**Fine-tuning with fault localization.** Closely resembling the localization of erroneous code in software engineering, fault localization identifies the suspicious (responsible) neuron set accountable for the unexpected DNN behaviors. DeepFault [61] is the first work on fault localization of DNNs; it analyzes pre-trained DNNs against a test set to establish the hit spectrum of each neuron, and the suspicious neurons are then identified by a designed suspiciousness measure. However, this fault localization technique is utilized to synthesize adversarial inputs rather than to repair DNNs. Subsequently, NNrepair [35] explores activation patterns [62] to identify potentially faulty neurons, and the repair constraints w.r.t. the DNN parameters are solved via concolic execution [63]. Further, Arachne [60] localizes the neural weights that have more impact on negative samples and less impact on positive samples via bidirectional localization, inspired by fault localization in software engineering; the selected neuron weights are then modified with the differential evolution algorithm. Finally, another work closely related to this paper is CARE [36], which follows the same patch paradigm as Arachne but performs causality-based fault localization to identify the responsible neurons, and whose parameter optimization strategy adopts the particle swarm optimization (PSO)
Fig. 3: Repair performance of BIRDNN on buggy DNNs with different activation functions.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
 & ENN [58] & Veritex [55] & MMDNN [33] & LRNN [59] & PRDNN [34] & NNrepair [35] & Arachne [60] & CARE [36] & **BIRDNN (Ours)** \\ \hline
Retraining & ✓ & ✓ & & & & & & & ✓ \\ \hline
Fine-tuning & & & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline
Fault Localization & & & & & & ✓ & ✓ & ✓ & ✓ \\ \hline
Provable Repair\({}^{\star}\) & & \(\triangle\) & & & ✓ & & & & \(\triangle\) \\ \hline
Activation Compatibility & ✓ & & & & & & ✓ & ✓ & ✓ \\ \hline
DRPs\({}^{\dagger}\) & & ✓ & & ✓ & ✓ & & & ✓ & ✓ \\
\end{tabular}
\(\star\) Veritex and BIRDNN (retraining) are provable repairs on condition that the loss function value reaches 0, and we mark them with \(\triangle\) instead of ✓.
\(\dagger\) The repair scenarios tackled by each work are identified from the reported evaluations; some may handle the untested cases with modifications.
\end{table} TABLE VII: Comparisons among related work and our proposed repair framework.
algorithm. Nevertheless, its fault localization strategy is less efficient due to the computation of the causality operators.
In summary, comprehensive comparisons among these related works and BIRDNN are presented in Table VII.
## VII Conclusion
This paper offers a unique insight into the behavior differences of DNN neurons and proposes a DNN repair framework, BIRDNN, based on behavior imitation, which for the first time supports both retraining based and fine-tuning based DNN patching. Besides, BIRDNN tackles domain-wise repair problems in a probably approximately correct style by characterizing DNN behaviors over domains. Experiments on the ACAS Xu DNNs illustrate BIRDNN's effectiveness, efficiency and compatibility. Despite the outstanding performance, further improving BIRDNN into a provable DNN repair framework is an appealing and challenging direction for future work. A deeper investigation into the behavior differences of DNNs may provide specific directions for modifying the neurons (adding or subtracting certain values). Additionally, utilizing the proposed fault localization strategy to synthesize adversarial samples is another possible avenue.
|
2304.13539 | Tensor Decomposition for Model Reduction in Neural Networks: A Review | Modern neural networks have revolutionized the fields of computer vision (CV)
and Natural Language Processing (NLP). They are widely used for solving complex
CV tasks and NLP tasks such as image classification, image generation, and
machine translation. Most state-of-the-art neural networks are
over-parameterized and require a high computational cost. One straightforward
solution is to replace the layers of the networks with their low-rank tensor
approximations using different tensor decomposition methods. This paper reviews
six tensor decomposition methods and illustrates their ability to compress
model parameters of convolutional neural networks (CNNs), recurrent neural
networks (RNNs) and Transformers. The accuracy of some compressed models can be
higher than the original versions. Evaluations indicate that tensor
decompositions can achieve significant reductions in model size, run-time and
energy consumption, and are well suited for implementing neural networks on
edge devices. | Xingyi Liu, Keshab K. Parhi | 2023-04-26T13:12:00Z | http://arxiv.org/abs/2304.13539v1 | # Tensor Decomposition for Model Reduction in Neural Networks: A Review
###### Abstract
Modern neural networks have revolutionized the fields of computer vision (CV) and Natural Language Processing (NLP). They are widely used for solving complex CV tasks and NLP tasks such as image classification, image generation, and machine translation. Most state-of-the-art neural networks are over-parameterized and require a high computational cost. One straightforward solution is to replace the layers of the networks with their low-rank tensor approximations using different tensor decomposition methods. This paper reviews six tensor decomposition methods and illustrates their ability to compress model parameters of convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Transformers. The accuracy of some compressed models can be higher than the original versions. Evaluations indicate that tensor decompositions can achieve significant reductions in model size, run-time and energy consumption, and are well suited for implementing neural networks on edge devices.
Tensor decomposition, convolution neural network acceleration, recurrent neural network acceleration, Transformer acceleration, Canonical Polyadic decomposition, Tucker decomposition, Tensor Train decomposition, Tensor Ring decomposition, Block-Term decomposition, Hierarchical Tucker decomposition, model compression.
## I Introduction
Tensors are multidimensional arrays indexed by three or more indices. An \(N^{th}\)-order tensor is the tensor product of \(N\) vector spaces. Third-order tensors have three indices as shown in Fig. 1. In special cases, first-order tensors represent vectors, and second-order tensors represent matrices.
Convolutional neural networks (CNNs) have outperformed traditional techniques for image recognition tasks. In 2012, AlexNet [1] achieved about \(80\%\) top-5 accuracy on the ImageNet dataset [2]. Subsequently, VGG [3] and GoogleNet [4] achieved about \(90\%\) top-5 accuracy on the same dataset, and ResNet [5], with a depth of up to \(152\) layers, achieved a \(3.57\%\) top-5 error. Executing CNNs for computer vision tasks on mobile devices is gaining more and more attention. Common methods to reduce the size of CNNs include: sparsification [6, 7, 8, 9], quantization [1, 10], structural pruning [11, 12, 13, 14], and low-rank approximation [15, 16, 17, 18, 19, 20, 21, 22].
The use of low-rank approximations is inspired by [23], which showed that neural network parameters are highly redundant. The authors of that paper could predict more than \(95\%\) of the weights of a network, which indicates that the parameters of CNNs are highly over-parametrized. Various low-rank tensor/matrix decompositions can be directly applied to the weights of convolutional and fully connected layers. In this paper, we review Canonical Polyadic decomposition (CPD) [15, 16, 17], Tucker decomposition [18, 19] and Tensor Train decomposition [20, 21] approaches to reduce the model parameters of CNNs. The decomposed layers are replaced by a sequence of new layers with significantly fewer parameters.
Recurrent Neural Networks (RNNs) have shown promising success in sequence modeling tasks due to their ability in capturing temporal dependencies from input sequences [24, 25]. Their advanced variant, the Long Short-Term Memory (LSTM), introduces a number of gates and passes information with element-wise operations to address the gradient vanishing issue in vanilla RNNs [26]. These neural network architectures have shown promising performances in Natural Language Processing (NLP) [27], speech recognition [28] and computer vision (CV) [29]. The reader is referred to [30] for a review of various brain-inspired computing models.
Despite the success, RNNs and LSTMs suffer from a huge number of parameters which make the models notoriously difficult to train and susceptible to overfitting. In order to circumvent this problem, current approaches often involve exploring the low-rank structures of the weight matrices. Inspired by the implementation of tensor decomposition methods in CNNs [20], various tensor decomposition methods have been applied in RNNs and LSTMs, including Tensor Train Decomposition [31], Tensor Ring Decomposition [32], Block-Term Decomposition [33] and Hierarchical Tucker Decomposition [34]. These tensor-decomposed models can maintain high performance with orders-of-magnitude fewer parameters compared to the large-size vanilla RNNs/LSTMs.
Transformer is a deep learning model that is based on
the mechanism of self-attention, weighting the significance of each part of the input data differentially. It has led to breakthroughs in the fields of NLP and CV. Like RNNs, Transformers are designed to process sequential input. However, Transformers process the entire input all at once; for example, a Transformer can process a whole natural-language sentence at a time, while an RNN has to process it word by word. This training parallelization allows Transformers to be trained on larger datasets, which has led to the success of pre-trained systems such as BERT [35], GPT [36] and T5 [37]. However, the large model size of Transformer based models may cause problems in training and inference under resource-limited environments. Tensorized embedding (TE) utilizes the Tensor Train decomposition to compress the embedding layers of Transformers [38]. A novel self-attention model with Block-Term Decomposition was proposed to compress the attention layers of Transformers [39]. This method can not only largely compress the model size but also achieve performance improvements.
## II Tensor Decomposition Methods
A tensor decomposition is any scheme for compressing a tensor into a sequence of other, often simpler tensors. In this section, we review some tensor decomposition methods that are commonly used to compress deep learning models.
### _Truncated Singular Value Decomposition_
Given a matrix \(\mathbf{W}\in\mathbb{R}^{M\times N}\), the singular value decomposition (SVD) of the matrix is defined as:
\[\mathbf{W}=\mathbf{USV^{\intercal}},\]
where \(\mathbf{U}\in\mathbb{R}^{M\times M}\), \(\mathbf{S}\in\mathbb{R}^{M\times N}\) and \(\mathbf{V}\in\mathbb{R}^{N\times N}\). \(\mathbf{S}\) is the diagonal matrix with all the singular values on the diagonal. \(\mathbf{U}\) and \(\mathbf{V}\) are the corresponding orthogonal matrices. If the singular values of \(\mathbf{W}\) decay fast, then the weight matrix (\(\mathbf{W}\)) can be approximated by keeping only the \(K\) largest entries of \(\mathbf{S}\):
\[\widetilde{\mathbf{W}}=\widetilde{\mathbf{U}}\widetilde{\mathbf{S}}\widetilde{\mathbf{V}}^{ \intercal},\]
where \(\widetilde{\mathbf{U}}\in\mathbb{R}^{M\times K}\), \(\widetilde{\mathbf{S}}\in\mathbb{R}^{K\times K}\) and \(\widetilde{\mathbf{V}}\in\mathbb{R}^{K\times N}\). Then for any \(\mathbf{I}\in\mathbb{R}^{T\times M}\), the SVD approximation error satisfies:
\[||\mathbf{I}\widetilde{\mathbf{W}}-\mathbf{I}\mathbf{W}||_{F}\leq s_{K+1}||\mathbf{I}||_{F},\]
where \(||\cdot||_{F}\) denotes its Frobenius norm. Notice that the approximation error \(||\mathbf{I}\widetilde{\mathbf{W}}-\mathbf{I}\mathbf{W}||_{F}\) is controlled by \(s_{K+1}\), the \((K+1)^{th}\) largest singular value, or put another way, the decay along the diagonal of \(\mathbf{S}\). Considering the computation cost, \(\mathbf{I}\widetilde{\mathbf{W}}\) can be computed in \(\mathcal{O}(TMK+TK^{2}+TKN)\) which is much smaller than \(\mathcal{O}(TMN)\) for a sufficiently small \(K\).
### _Tensor Train Decomposition_
As defined in [40], the Tensor Train (TT) Decomposition of a tensor \(\mathcal{A}\in\mathbb{R}^{n_{1}\times n_{2}\times\cdots\times n_{d}}\) can be replaced by a set of matrices \(\mathbf{G}_{k}[j_{k}]\in\mathbb{R}^{r_{k-1}\times r_{k}}\) where \(j_{k}=1,2,\ldots,n_{k}\), \(k=1,2,\ldots,d\) and \(r_{0}=r_{d}=1\). Then each of the tensor elements can be computed as:
\[\mathcal{A}(j_{1},j_{2},\ldots,j_{d})=\mathbf{G}_{1}[j_{1}]\mathbf{G}_{2}[j_{2}]\ldots \mathbf{G}_{d}[j_{d}].\]
The sequence \(\{r_{k}\}_{k=1}^{d}\) is called TT-rank of the TT-representation of \(\mathcal{A}\). The collections of the matrices \(\{\{\mathbf{G}_{k}[j_{k}]\}_{j_{k}=1}^{n_{k}}\}_{k=1}^{d}\) are defined as TT-cores [40]. Notice that in TT-format, only \(\sum_{k=1}^{d}n_{k}r_{k-1}r_{k}\) parameters are required to represent a tensor \(\mathcal{A}\in\mathbb{R}^{n_{1}\times n_{2}\times\cdots\times n_{d}}\) which originally has \(\prod_{k=1}^{d}n_{k}\) elements. The trade-off between the model compression ratio and the reconstruction accuracy is controlled by the TT-ranks (\(\{r_{k}\}_{k=1}^{d}\)). The smaller the TT-ranks, the higher the model compression ratio TT-format can achieve. Another advantage of the TT-decomposition is that basic linear algebra operations can be applied to the TT-format tensors efficiently [40]. Fig. 2 from [41] shows that a third tensor \(\mathcal{A}\in\mathbb{R}^{3\times 4\times 5}\) can be represented by three TT-cores (\(\{\mathbf{G}_{d}\}_{d=1}^{3}\)) with \(32\) parameters in TT-format. Thus, the number of parameters needed to represent the original tensor is reduced from \(60\) to \(32\).
Tensor Train Decomposition utilizes two key ideas: recursively applying low-rank SVD and reshaping if the matrix is too thin. As illustrated in Fig. 3, a matrix of size \(8\times 10\) can be decomposed into five matrices \(G_{1}\) through \(G_{5}\) by recursively applying reshaping and low-rank SVD. Then these five matrices can be folded into five TT-cores of sizes \(1\times 2\times r_{1}\), \(r_{1}\times 2\times r_{2}\), \(r_{2}\times 2\times r_{3}\), \(r_{3}\times 2\times r_{4}\) and \(r_{4}\times 5\times 1\), respectively.
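The recursive reshape-and-SVD procedure can be sketched in NumPy as follows; this is an illustrative TT-SVD with user-supplied TT-ranks (\(r_{0}=r_{d}=1\)), not a reference implementation:

```python
import numpy as np

# An illustrative TT-SVD: at step k, reshape the remainder, truncate its
# SVD to rank r_{k+1}, fold the left factor into core G_k, and carry
# S @ V^T forward to the next step.
def tt_svd(A, ranks):
    dims, cores = A.shape, []
    C = A.reshape(ranks[0] * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = ranks[k + 1]
        cores.append(U[:, :r].reshape(ranks[k], dims[k], r))        # G_k
        C = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
    cores.append(C.reshape(ranks[-2], dims[-1], ranks[-1]))         # G_d
    return cores

cores = tt_svd(np.random.randn(3, 4, 5), ranks=[1, 2, 2, 1])
print([g.shape for g in cores])   # [(1, 3, 2), (2, 4, 2), (2, 5, 1)]
```

The printed core shapes match the example of Fig. 2.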
### _Canonical Polyadic Decomposition_
In 1927, the idea of expressing a tensor as the sum of a finite number of rank-one tensors was proposed by Hitchock [42, 43]. In 1970, the concept was generalized as CANDECOMP (canonical decomposition) by Carroll and Chang [44] and as PARAFAC (parallel factors) by Harshman [45].
The Canonical Polyadic (CP) decomposition factorizes a tensor into a sum of component rank-one tensors. For example, a third-order tensor \(\mathcal{X}\in\mathbb{R}^{I\times J\times K}\) can be approximated as:
\[\mathcal{X}\approx\sum_{r=1}^{R}\mathbf{a}_{r}\circ\mathbf{b}_{r}\circ\mathbf{c}_{r},\]
where \(\circ\) denotes the outer product of vectors. Parameter \(R\) is a positive integer and \(\mathbf{a}_{r}\in\mathbb{R}^{I}\), \(\mathbf{b}_{r}\in\mathbb{R}^{J}\) and \(\mathbf{c}_{r}\in\mathbb{R}^{K}\) for \(r=1,\ldots,R\). As shown in Fig. 4, each element of tensor \(\mathcal{X}\) can be computed as:
\[\mathbf{x}_{i,j,k}\approx\sum_{r=1}^{R}a_{ir}b_{jr}c_{kr},\]
where \(i=1,\ldots,I\), \(j=1,\ldots,J\), \(k=1,\ldots,K\). The factor matrices are defined as the combination of the vectors from the rank-one components: \(\mathbf{A}=[\mathbf{a}_{1}\quad\mathbf{a}_{2}\quad\ldots\quad\mathbf{a}_{R}]\), \(\mathbf{B}=[\mathbf{b}_{1}\quad\mathbf{b}_{2}\quad\ldots\quad\mathbf{b}_{R}]\) and \(\mathbf{C}=[\mathbf{c}_{1}\quad\mathbf{c}_{2}\quad\ldots\quad\mathbf{c}_{R}]\).
The rank of a tensor \(\mathcal{X}\) is defined as the smallest number of rank-one tensors that can generate the original tensor as their sum. Determining the rank of a specifically given tensor is NP-hard [46]. There is no straightforward algorithm to solve this problem. For a general third-order tensor \(\mathcal{X}\in\mathbb{R}^{I\times J\times K}\), the weak upper bound on its largest attainable rank is given by:
\[rank(\mathcal{X})\leq min\{IJ,IK,JK\}.\]
Any rank that occurs with positive probability is called a typical rank. Table I from [47] shows known typical ranks of specific third-order tensors over \(\mathbb{R}\).
Given the rank \(R\), there are many algorithms to compute the CP decomposition. Denton _et al._ computed the CP decomposition by the alternating least squares (ALS) method [16]. The
Fig. 4: CP decomposition of a third-order tensor.
Fig. 3: Tensor Train Decomposition of a matrix of size \(8\times 10\) by recursively applying reshaping and low-rank SVD. After decomposition and reshaping, the dimensions of the five TT-cores are \(1\times 2\times r_{1}\), \(r_{1}\times 2\times r_{2}\), \(r_{2}\times 2\times r_{3}\), \(r_{3}\times 2\times r_{4}\) and \(r_{4}\times 5\times 1\), respectively.
Fig. 2: Tensor Train Decomposition of a third-order tensor [41]. The original tensor has the dimensions \(3\times 4\times 5\). After decomposition, the dimensions of the three TT-cores are \(1\times 3\times 2\), \(2\times 4\times 2\) and \(2\times 5\times 1\), respectively. The number of parameters needed to represent the original tensor is reduced from \(60\) to \(32\).
ALS method was proposed in the original papers by Carroll and Chang [44] and Harshman [45].
```
procedure CP-ALS(\(\mathcal{X},R\))
    initialize \(\mathbf{A}^{(n)}\in\mathbb{R}^{I_{n}\times R}\) for \(n=1,\ldots,N\)
    repeat
        for \(n=1,\ldots,N\) do
            \(\mathbf{V}\longleftarrow\mathbf{A}^{(1)\intercal}\mathbf{A}^{(1)}*\cdots*\mathbf{A}^{(n-1)\intercal}\mathbf{A}^{(n-1)}*\mathbf{A}^{(n+1)\intercal}\mathbf{A}^{(n+1)}*\cdots*\mathbf{A}^{(N)\intercal}\mathbf{A}^{(N)}\)
            \(\mathbf{A}^{(n)}\longleftarrow\mathbf{X}^{(n)}(\mathbf{A}^{(N)}\odot\cdots\odot\mathbf{A}^{(n+1)}\odot\mathbf{A}^{(n-1)}\odot\cdots\odot\mathbf{A}^{(1)})\mathbf{V}^{\dagger}\)
            normalize columns of \(\mathbf{A}^{(n)}\) (storing norms as \(\lambda\))
    until fit ceases to improve or maximum iterations exhausted
    return \(\lambda,\mathbf{A}^{(1)},\mathbf{A}^{(2)},\ldots,\mathbf{A}^{(N)}\)
```
**Algorithm 1** ALS Algorithm [44, 45].
Given the rank \(R\), the ALS procedure for an \(N\)-th order tensor is shown in Algorithm 1, where \(*\) and \(\odot\) stand for the Hadamard and Khatri-Rao products of matrices, respectively. The factor matrices can be initialized randomly. The pseudo-inverse of an \(R\times R\) matrix \(\mathbf{V}\) must be computed at each iteration. The iterations stop when some stopping condition is satisfied, e.g., reaching the maximum number of iterations, or little or no change in the factor matrices. A drawback of this approach is that subtracting the best rank-one tensor may increase the tensor rank [54]. In [17], the authors used the non-linear least squares (NLS) method: given the rank \(R\), NLS minimizes the \(L^{2}\)-norm of the approximation error using Gauss-Newton optimization. NLS decomposition has significantly higher accuracy than ALS with or without fine-tuning [17]. The Krylov-Levenberg-Marquardt algorithm was used for CP decomposition with a bounded sensitivity constraint in [55]. Furthermore, the authors in [56] propose a variant of the Error Preserving Correction (EPC) method that minimizes the sensitivity of the decomposition:
\[\min_{(\mathbf{A},\mathbf{B},\mathbf{C})}ss([[\mathbf{A},\mathbf{B},\mathbf{C}]])\quad\text{s.t.}\quad||\mathcal{K}-[[\mathbf{A},\mathbf{B},\mathbf{C}]]||_{F}^{2}\leq\delta^{2},\]
where \(ss([[\mathbf{A},\mathbf{B},\mathbf{C}]])\) represents the sensitivity of the CP decomposition and can be computed by [56]:
\[ss([[\mathbf{A},\mathbf{B},\mathbf{C}]])=K\cdot\mathrm{tr}\left\{(\mathbf{A}^{ \intercal}\mathbf{A})\vartriangle(\mathbf{B}^{\intercal}\mathbf{B})\right\}+\] \[I\cdot\mathrm{tr}\left\{(\mathbf{B}^{\intercal}\mathbf{B})\vartriangle( \mathbf{C}^{\intercal}\mathbf{C})\right\}+J\cdot\mathrm{tr}\left\{(\mathbf{A}^{\intercal} \mathbf{A})\vartriangle(\mathbf{C}^{\intercal}\mathbf{C})\right\},\]
where \(\vartriangle\) denotes the Hadamard element-wise product. The boundary \(\delta^{2}\) can be treated as the approximation error of the CP decomposition. See [56] for further details.
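For illustration, a bare-bones CP-ALS for a third-order tensor can be sketched in NumPy as below; it follows Algorithm 1 with random initialization and a fixed iteration budget, and omits the column normalization and stopping test for brevity:

```python
import numpy as np

def khatri_rao(B, C):
    # column-wise Kronecker product: row (j*K + k) holds b_{jr} * c_{kr}
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

# A bare-bones CP-ALS sketch for a third-order tensor X of size I x J x K.
def cp_als(X, R, iters=50):
    I, J, K = X.shape
    A, B, C = (np.random.randn(d, R) for d in (I, J, K))
    for _ in range(iters):
        # each update solves a linear least-squares problem in closed form
        A = X.reshape(I, -1) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.moveaxis(X, 1, 0).reshape(J, -1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.moveaxis(X, 2, 0).reshape(K, -1) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C
```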
### _Tucker Decomposition_
Tucker decomposition was first proposed by Tucker [57, 58]. The Tucker decomposition can be treated as a form of higher-order principal component analysis (PCA). A third-order tensor \(\mathcal{X}\in\mathbb{R}^{I\times J\times K}\) can be decomposed into a core tensor multiplied by a matrix along each mode:
\[\mathcal{X} \approx\mathcal{G}\bullet_{1}\mathbf{A}\bullet_{2}\mathbf{B}\bullet_{3 }\mathbf{C}\] \[=\sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R}g_{pqr}\mathbf{a}_{p}\circ \mathbf{b}_{q}\circ\mathbf{c}_{r}\] \[=[[\mathcal{G};\mathbf{A},\mathbf{B},\mathbf{C}]],\]
where \(\mathbf{A}\in\mathbb{R}^{I\times P}\), \(\mathbf{B}\in\mathbb{R}^{J\times Q}\) and \(\mathbf{C}\in\mathbb{R}^{K\times R}\) are the factor matrices and can be treated as the principal components in each mode. The tensor \(\mathcal{G}\in\mathbb{R}^{P\times Q\times R}\) is the core tensor. In the Tucker decomposition, each element of the original tensor \(\mathcal{X}\) can be represented by:
\[x_{i,j,k}\approx\sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R}g_{pqr}a_{ip}b_{jq}c _{kr}, \tag{1}\]
for \(i=1,\ldots,I\), \(j=1,\ldots,J\) and \(k=1,\ldots,K\). The parameters \(P\), \(Q\), and \(R\) represent the number of components in the factor matrices \(\mathbf{A}\), \(\mathbf{B}\) and \(\mathbf{C}\), respectively. Fig. 5 shows the Tucker decomposition of a third-order tensor. In fact, CP decomposition can be viewed as a special case of the Tucker decomposition where the core tensor is superdiagonal and \(P\), \(Q\) and \(R\) are identical.
### _Tensor Ring Decomposition_
The constraint on the boundary ranks (\(r_{0}=r_{d}=1\)) hampers the flexibility and representation capability of Tensor Train based models. When multiplying TT-cores, a strict order must be followed, and the alignment of the tensor dimensions is critical in obtaining the optimized TT-cores. However, locating the best alignment is a hard problem [59].
Fig. 5: Tucker decomposition of a third-order tensor \(\mathcal{X}\).
The main modification of the tensor ring decomposition (TRD) is connecting the first and the last core tensors circularly to build a ring-like structure [32]. It can alleviate the limitation of the tensor train models as mentioned above. Formally, a \(d\)-dimensional target tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times\cdots\times I_{d}}\) can be decomposed as [32]:
\[\mathcal{X}\stackrel{{\rm TRD}}{{=}}\sum_{r_{0}=r_{d},r_{1},\cdots,r_{d-1}}\mathcal{G}^{(1)}_{r_{0},I_{1},r_{1}}\mathcal{G}^{(2)}_{r_{1},I_{2},r_{2}}\cdots\mathcal{G}^{(d)}_{r_{d-1},I_{d},r_{d}}\]
The first and the last ranks are set to \(R\) (\(R>1\)). For any index \(k\) (\(k\in\{1,2,\cdots,R\}\)), the first-order slice of the first core tensor \(\mathcal{G}^{(1)}_{r_{0}=k}\) and the last-order slice of the last core tensor \(\mathcal{G}^{(d)}_{r_{d}=k}\) are matrices. The tensor ring structure can be regarded as a summation of \(R\) tensor trains, one along each of the \(R\) slices of the core tensors: the product \(\mathcal{G}^{(1)}_{k,I_{1}}\mathcal{G}^{(2)}_{I_{2}}\cdots\mathcal{G}^{(d)}_{I_{d},k}\), obtained by fixing \(r_{0}=r_{d}=k\), has the form of a tensor train decomposition. Therefore, a tensor ring model is a linear combination of \(R\) different tensor train models. The tensor ring structure is illustrated in Fig. 6.
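To make the ring structure concrete, reconstructing a tensor from TR-cores \(\mathcal{G}^{(k)}\in\mathbb{R}^{r_{k-1}\times I_{k}\times r_{k}}\) (with \(r_{0}=r_{d}=R\)) amounts to chaining the cores and tracing out the closing rank index; an illustrative NumPy sketch:

```python
import numpy as np

# A sketch of tensor reconstruction from tensor-ring cores: multiply the
# cores along the chain, then take the trace over the two remaining rank
# dimensions (both of size R) to close the ring.
def tr_to_tensor(cores):
    full = cores[0]                                   # (R, I_1, r_1)
    for G in cores[1:]:
        full = np.tensordot(full, G, axes=([-1], [0]))
    return np.trace(full, axis1=0, axis2=full.ndim - 1)

cores = [np.random.randn(2, 3, 4),
         np.random.randn(4, 5, 4),
         np.random.randn(4, 6, 2)]                    # r_0 = r_3 = R = 2
print(tr_to_tensor(cores).shape)                      # (3, 5, 6)
```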
### _Block Term Decomposition_
Block Term decomposition combines CP decomposition and Tucker decomposition [33]. Given a \(d\)-dimensional target tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times\cdots\times I_{d}}\), it can be decomposed into \(N\) block terms as follows [33]:
\[\mathcal{X}\stackrel{{\rm BTD}}{{=}}\sum_{n=1}^{N}\mathcal{G}_{n }\bullet_{1}\mathcal{A}^{(1)}_{n}\bullet_{2}\mathcal{A}^{(2)}_{n}\bullet_{3} \cdots\bullet_{d}\mathcal{A}^{(d)}_{n}\]
where each term computes the tensor-tensor product on the \(k^{th}\) order (\(\bullet_{k}\)) between a core tensor \(\mathcal{G}_{n}\in\mathbb{R}^{R_{1}\times\cdots\times R_{d}}\) and the \(d\) factor matrices \(\mathcal{A}^{(k)}_{n}\in\mathbb{R}^{I_{k}\times R_{k}}\) of \(\mathcal{G}_{n}\)'s \(k^{th}\) dimension, where \(n\in[1,N]\) and \(k\in[1,d]\)[60]. \(N\) is the CP-rank, \(R_{1},R_{2},\cdots,R_{d}\) are the Tucker-rank, and \(d\) is the Core-order. Fig. 7 shows the block term decomposition for a \(3^{rd}\)-order tensor.
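Since each block term is a Tucker product, reconstructing a third-order BTD model reduces to summing \(N\) Tucker reconstructions; an illustrative NumPy sketch:

```python
import numpy as np

# A sketch of BTD reconstruction for a 3rd-order tensor: each block is a
# Tucker product (core G with factors A, B, C), and the N blocks are summed.
def btd_to_tensor(blocks):
    return sum(np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)
               for G, A, B, C in blocks)

blocks = [(np.random.randn(2, 2, 2), np.random.randn(5, 2),
           np.random.randn(6, 2), np.random.randn(7, 2))
          for _ in range(4)]                 # N = 4 block terms
print(btd_to_tensor(blocks).shape)           # (5, 6, 7)
```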
### _Hierarchical Tucker Decomposition_
Hierarchical Tucker decomposition has multiple hierarchical levels based on the order of the tensor. From top to bottom in a binary tree, a hierarchically Tucker-decomposed tensor can be recursively decomposed into intermediate tensors, referred to as _frames_, as shown in Fig. 8. Each frame corresponds to a unique _node_ of the tree, and each node is associated with a _dimension set_. Given a \(d\)-dimensional target tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}\times I_{2}\times\cdots\times I_{d}}\), a binary tree with a root node associated with \(D=\{1,2,\cdots,d\}\) can be built, where \(\mathcal{X}=\mathcal{U}_{D}\) is the root frame. For each non-leaf frame \(\mathcal{U}_{s}\in\mathbb{R}^{r_{s}\times I_{\mu_{s}}\times\cdots\times I_{ \nu_{s}}}\), the set \(s\subsetneq D\) is associated with the node corresponding to \(\mathcal{U}_{s}\), and \(s_{1},s_{2}\subsetneq s\) are associated with the left and right child nodes of the \(s\)-associated node, where \(\mu_{s}=\min(s)\) and \(\nu_{s}=\max(s)\). A non-leaf frame \(\mathcal{U}_{s}\) can be recursively decomposed into a left child frame (\(\mathcal{U}_{s_{1}}\)), a right child frame (\(\mathcal{U}_{s_{2}}\)) and a transfer tensor \(\mathcal{G}_{s}\in\mathbb{R}^{r_{s}\times r_{s_{1}}\times r_{s_{2}}}\) as follows [34]:
\[\mathcal{U}_{s}=\mathcal{G}_{s}\times_{1}^{2}\mathcal{U}_{s_{1}}\times_{1}^{2} \mathcal{U}_{s_{2}},\]
where \(\times_{1}^{2}\) denotes the tensor contraction that can be executed between two tensors with at least one matched dimension. For example, given two tensors \(\mathcal{A}\in\mathbb{R}^{n_{1}\times n_{2}\times l}\) and \(\mathcal{B}\in\mathbb{R}^{l\times m_{1}\times m_{2}}\), where the third dimension of \(\mathcal{A}\) matches the first dimension of \(\mathcal{B}\), a tensor of size \(n_{1}\times n_{2}\times m_{1}\times m_{2}\) can be computed using the tensor contraction operation as \((\mathcal{A}\times_{1}^{3}\mathcal{B})_{i_{1},i_{2},j_{1},j_{2}}=\sum_{ \alpha=1}^{l}\mathcal{A}_{i_{1},i_{2},\alpha}\mathcal{B}_{\alpha,j_{1},j_{2}}\). The original \(d\)-order tensor \(\mathcal{X}=\mathcal{U}_{D}\) of size \(I_{1}\times\cdots\times I_{d}\) can be recursively decomposed into a combination of the \(2^{nd}\)-order leaf frames and the \(3^{rd}\)-order transfer tensors by performing the Hierarchical Tucker decomposition from top to bottom of the binary tree. The parameter \(r_{s}\) is defined as the _hierarchical rank_.
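The contraction in the example above maps directly onto `np.tensordot`. A minimal sketch, with assumed small dimensions:

```python
import numpy as np

# Contract the third dimension of A with the first dimension of B
# (both of length l), as in the example above.
n1, n2, l, m1, m2 = 2, 3, 4, 5, 6
A = np.random.randn(n1, n2, l)
B = np.random.randn(l, m1, m2)

# (A x_1^3 B)_{i1,i2,j1,j2} = sum_alpha A_{i1,i2,alpha} B_{alpha,j1,j2}
C = np.tensordot(A, B, axes=([2], [0]))
print(C.shape)  # (2, 3, 5, 6)
```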
## III Tensorizing Convolutional Neural Networks
CNNs have two main parts: convolutional layers and fully connected layers. In general, convolutional layers in CNNs map a third-order input tensor \(\mathcal{X}\) of size \(S\times W\times H\) into a
Fig. 8: Hierarchical Tucker decomposition for a \(4^{th}\)-order tensor. It is a binary tree with root \(D=\{1,2,3,4\}\) where the dashed boxes represent the nodes. Node \(\{1\}\) is a leaf node whose parent and sibling are node \(\{1,2\}\) and node \(\{2\}\), respectively [34].
Fig. 6: Tensor Ring Decomposition in a ring form: the core tensors are multiplied one by one and form a ring [32].
Fig. 7: Block Term decomposition for a \(3^{rd}\)-order tensor. The tensor can be approximated by \(N\) Tucker decompositions where \(N\) is the CP-rank; \(R_{1},R_{2},R_{3}\) are the Tucker-rank; \(d\) is the Core-order [33].
third-order output tensor \(\mathcal{Y}\) of size \(T\times W^{\prime}\times H^{\prime}\) with a \(4^{th}\)-order kernel tensor \(\mathcal{K}\) of size \(T\times S\times D\times D\), where \(D\times D\) represents the filter size, and \(S\) and \(T\) represent the number of input and output channels, respectively. The typical convolutional filter sizes are small, e.g., \(3\times 3\) or \(7\times 7\), compared to the numbers of input (\(S\)) and output (\(T\)) channels. Each convolutional layer may have hundreds or thousands of filters, which makes it well suited to tensor decomposition methods. As shown in Fig. 9, the output tensor \(\mathcal{Y}\) can be computed as:
\[\mathcal{Y}_{t,w^{\prime},h^{\prime}}=\sum_{s=1}^{S}\sum_{j=1}^{D}\sum_{i=1}^ {D}\mathcal{K}_{t,s,j,i}\mathcal{X}_{s,w_{j},h_{i}},\]
where \(w_{j}=(w^{\prime}-1)q+j-p\) and \(h_{i}=(h^{\prime}-1)q+i-p\). The parameters \(p\) and \(q\) represent zero-padding size and stride, respectively.
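The summation above can be implemented directly. The following NumPy sketch is a minimal illustration with assumed sizes, stride \(q=1\) and no zero-padding (\(p=0\)):

```python
import numpy as np

# Direct convolution written from the summation formula above.
S, T, D, W, H = 3, 8, 3, 10, 10        # illustrative, assumed sizes
Wp, Hp = W - D + 1, H - D + 1          # output spatial sizes
X = np.random.randn(S, W, H)           # input tensor
K = np.random.randn(T, S, D, D)        # kernel tensor

Y = np.zeros((T, Wp, Hp))
for t in range(T):
    for wp in range(Wp):
        for hp in range(Hp):
            # Y[t, w', h'] = sum_s sum_j sum_i K[t,s,j,i] X[s, w'+j, h'+i]
            Y[t, wp, hp] = np.sum(K[t] * X[:, wp:wp + D, hp:hp + D])
print(Y.shape)  # (8, 8, 8)
```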
Fully connected layers apply a linear transformation to an \(N\)-dimensional input vector \(\mathbf{x}\) and compute a \(M\)-dimensional output vector \(\mathbf{y}\):
\[\mathbf{y}=\mathbf{W}\mathbf{x}+\mathbf{b}. \tag{2}\]
where \(\mathbf{W}\) and \(\mathbf{b}\) represent the weight matrix and the bias vector, respectively. In this section, we show how these different tensor decomposition methods can be applied to CNNs.
### _Tensor Train Decomposition_
Tensor Train Decomposition can be applied to both the convolutional layers and the fully connected layers of CNNs. Let's first describe the implementation of TT-decomposition on the fully connected layers.
Now consider the TT-representation of the weight matrix \(\mathbf{W}\in\mathbb{R}^{M\times N}\) where \(M=\prod_{k=1}^{d}m_{k}\) and \(N=\prod_{k=1}^{d}n_{k}\). Define two bijective mappings \(\mu(\ell)=(\mu_{1}(\ell),\mu_{2}(\ell),\ldots,\mu_{d}(\ell))\) and \(\nu(t)=(\nu_{1}(t),\nu_{2}(t),\ldots,\nu_{d}(t))\) that map the matrix \(\mathbf{W}\) of size \(M\times N\) to a higher-order tensor \(\mathcal{W}\) of size \(n_{1}m_{1}\times n_{2}m_{2}\times\ldots\times n_{d}m_{d}\). The mapping \(\mu(\cdot)\) maps the row index \(\ell=1,2,\ldots,M\) of the matrix \(\mathbf{W}\) into a \(d\)-dimensional vector-index whose \(k\)-th dimension is of length \(m_{k}\). The mapping \(\nu(\cdot)\) maps the column index \(t=1,2,\ldots,N\) of the matrix \(\mathbf{W}\) into a \(d\)-dimensional vector-index whose \(k\)-th dimension is of length \(n_{k}\). Thus the \(k\)-th dimension of the reshaped \(d\)-dimensional tensor \(\mathcal{W}\) is of length \(n_{k}m_{k}\) and is indexed by the tuple \((\mu_{k}(\cdot),\nu_{k}(\cdot))\). Then the tensor \(\mathcal{W}\) can be converted using TT-decomposition:
\[\mathbf{W}(\ell,t) =\mathcal{W}((\mu_{1}(\ell),\nu_{1}(t)),\ldots,(\mu_{d}(\ell), \nu_{d}(t)))\] \[=\mathbf{G}_{1}[(\mu_{1}(\ell),\nu_{1}(t))]\ldots\mathbf{G}_{d}[(\mu_{d}( \ell),\nu_{d}(t))].\]
The TT-format of the weight matrix transforms a \(d\)-dimensional tensor \(\mathcal{X}\) (formed from the input vector \(\mathbf{x}\)) to the \(d\)-dimensional tensor \(\mathcal{Y}\) (which can be used to compute output vector \(\mathbf{y}\)). As illustrated in Fig. 10 from [41], the linear transformation of a fully connected layer can be computed in the TT-format:
\[\mathcal{Y}(i_{1},\ldots,i_{d}) =\sum_{j_{1},\ldots,j_{d}}\mathbf{G}_{1}[i_{1},j_{1}]\ldots\mathbf{G}_{d}[ i_{d},j_{d}]\mathcal{X}(j_{1},\ldots,j_{d})\] \[\quad+\mathcal{B}(i_{1},\ldots,i_{d}).\]
where \(\mathcal{B}\) corresponds to the bias vector \(\mathbf{b}\) in Eq. (2). The ranks of the TT-format for the weight matrix depend on the choice of the bijective mappings \(\mu(\cdot)\) and \(\nu(\cdot)\). The computational complexity of the forward pass is \(\mathcal{O}(dr^{2}m\max\{M,N\})\).
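To make the TT matrix-by-vector product concrete, the following NumPy sketch implements the forward contraction for \(d=3\) with small, assumed mode sizes and a uniform TT-rank; real layers factorize much larger dimensions:

```python
import numpy as np

# A minimal sketch of a TT-format fully connected layer (bias omitted).
m, n, r, d = (2, 3, 2), (4, 2, 3), 2, 3   # assumed output/input modes, rank
ranks = [1, r, r, 1]                       # boundary TT-ranks are 1
cores = [np.random.randn(ranks[k], m[k], n[k], ranks[k + 1])
         for k in range(d)]

x = np.random.randn(int(np.prod(n)))       # input vector, length N = prod(n)
out = x.reshape(1, *n)                     # prepend a dummy rank-1 index
for k in range(d):
    # out axes: (rank, remaining input modes ..., produced output modes ...)
    out = np.einsum('amnb,an...->b...m', cores[k], out)
y = out.reshape(int(np.prod(m)))           # output vector, length M = prod(m)
print(y.shape)  # (12,)
```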
In machine learning, backpropagation is widely used to train feed-forward neural networks based on the stochastic gradient descent algorithm [24]. Backpropagation computes the gradient of the loss-function \(L\) with respect to all the parameters. Given the gradients with respect to the layer's output \(\frac{\partial L}{\partial\mathbf{y}}\), the backpropagation applied to the fully connected layers computes the gradients with respect to the input \(\mathbf{x}\), the weight matrix \(\mathbf{W}\) and bias vector \(\mathbf{b}\):
\[\frac{\partial L}{\partial\mathbf{x}}=\mathbf{W}^{\intercal}\frac{\partial L}{\partial \mathbf{y}},\frac{\partial L}{\partial\mathbf{W}}=\frac{\partial L}{\partial\mathbf{y}}\mathbf{ x}^{\intercal},\frac{\partial L}{\partial\mathbf{b}}=\frac{\partial L}{\partial\mathbf{y}}.\]
Notice that the gradient of the loss function with respect to the bias vector is the same as that with respect to the output. The gradient of the loss function with respect to the input vector \(\mathbf{x}\) can be computed using the same matrix-by-vector product as shown in Fig. 10, with a complexity of \(\mathcal{O}(dr^{2}m\max\{M,N\})\). Instead of directly computing \(\frac{\partial L}{\partial\mathbf{W}}\), which requires \(\mathcal{O}(MN)\) memory, it is more efficient to compute the gradient of the loss function \(L\) with respect to each TT-core. The overall computational complexity of backpropagation in the fully connected layer is \(\mathcal{O}(d^{2}r^{4}m\max\{M,N\})\)[20]. See [20] for a detailed description of the entire learning process for fully connected layers in TT-format.
We now review the implementation of TT-decomposition on convolutional layers. One straightforward way to represent the convolutional kernel \(\mathcal{K}\) in the TT-format is to apply the TT-decomposition directly to the tensor \(\mathcal{K}\). However, a disadvantage of this method was pointed out in [21]: the kernel of a \(1\times 1\) convolution is a 2-dimensional array, and its TT-format then coincides with the matrix low-rank format, whereas the matrix TT-format is more efficient than the matrix low-rank format [20].
The second approach of applying TT-decomposition to the convolutional layers is inspired by the theory that a convolutional layer can be formulated as a matrix-by-matrix multiplication [21, 61, 62]. As illustrated in Fig. 11, the two-dimensional convolution between a three-dimensional input tensor and a four-dimensional model is equivalent to the matrix-by-matrix multiplication [21].
Fig. 9: Original convolution layer with filter size of \(T\times S\times D\times D\).
In Fig. 11, \(H^{\prime}=H-D+1\) and \(W^{\prime}=W-D+1\), and the output tensor \(\mathcal{Y}\in\mathbb{R}^{T\times W^{\prime}\times H^{\prime}}\) is reshaped into a matrix \(\mathbf{Y}\in\mathbb{R}^{T\times W^{\prime}H^{\prime}}\) as follows:
\[\mathcal{Y}(t,y,x)=\mathbf{Y}(t,y+W^{\prime}(x-1)).\]
The matrix \(\mathbf{X}\) of size \(W^{\prime}H^{\prime}\times D^{2}S\) whose \(k\)-th row represents the \(S\times D\times D\) patch of the input tensor is generated by:
\[\mathcal{X}(s,y+j-1,x+i-1)=\mathbf{X}(x+W^{\prime}(y-1),j+D(i-1)+D^{2}(s-1)),\]
where \(y=1,\ldots,H^{\prime}\), \(x=1,\ldots,W^{\prime}\), \(i,j=1,\ldots,D\). Finally, the kernel tensor \(\mathcal{K}\) can be reshaped into a matrix \(\mathbf{K}\) of size \(D^{2}S\times T\):
\[\mathcal{K}(t,s,j,i)=\mathbf{K}(j+D(i-1)+D^{2}(s-1),\,t).\]
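The following NumPy sketch (with assumed small sizes, stride 1 and no padding) carries out these reshapings explicitly and verifies that the resulting matrix product reproduces the direct convolution:

```python
import numpy as np

# Reduce convolution to matrix multiplication ("im2col" construction).
S, T, D, W, H = 3, 8, 3, 10, 10        # illustrative, assumed sizes
Wp, Hp = W - D + 1, H - D + 1
X = np.random.randn(S, W, H)
K = np.random.randn(T, S, D, D)

# Patch matrix of size (W'H') x (D^2 S): each row holds one input patch.
patches = np.stack([X[:, x:x + D, y:y + D].ravel()
                    for x in range(Wp) for y in range(Hp)])
Kmat = K.reshape(T, S * D * D).T        # kernel matrix of size (D^2 S) x T
Y = (patches @ Kmat).T.reshape(T, Wp, Hp)

# Cross-check against the direct convolution sum.
Y_ref = np.zeros((T, Wp, Hp))
for x in range(Wp):
    for y in range(Hp):
        Y_ref[:, x, y] = np.tensordot(K, X[:, x:x + D, y:y + D], axes=3)
np.testing.assert_allclose(Y, Y_ref, atol=1e-10)
```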
After all these reshapings, the convolutional layer can be written as a matrix-by-matrix multiplication, as shown in Fig. 11. Then the matrix TT-format can be directly applied to the matrix \(\mathbf{K}\): reshape it into a tensor \(\mathcal{K}\) and then convert it into the TT-format. Assume that the dimensions of the matrix \(\mathbf{K}\) can be factorized as \(S=\prod_{i=1}^{d}S_{i}\) and \(T=\prod_{i=1}^{d}T_{i}\). This assumption can always be satisfied, since we can add some dummy channels to increase the values of \(S\) and \(T\). Then the (\(d+1\))-dimensional convolutional kernel whose \(k\)-th dimension has length \(D^{2}\) for \(k=0\) and \(S_{k}T_{k}\) for \(k=1,\ldots,d\) can be defined as:
\[\mathbf{K}(x+D(y-1)+D^{2}(s^{\prime}-1),\,t^{\prime})\] \[=\mathcal{K}((1,x+D(y-1)),(t_{1},s_{1}),\ldots,(t_{d},s_{d}))\] \[=\mathbf{G}_{0}[1,x+D(y-1)]\mathbf{G}_{1}[t_{1},s_{1}]\ldots\mathbf{G}_{d}[t_ {d},s_{d}],\]
where \(s^{\prime}=s_{1}+\sum_{i=2}^{d}(s_{i}-1)\prod_{j=1}^{i-1}S_{j}\) and \(t^{\prime}=t_{1}+\sum_{i=2}^{d}(t_{i}-1)\prod_{j=1}^{i-1}T_{j}\).
First, the TT-convolution layer reshapes the input tensor into a (\(d+2\))-dimensional tensor \(\mathcal{X}\in\mathbb{R}^{S_{1}\times\cdots\times S_{d}\times W\times H}\) and then transforms this tensor into the output tensor \(\mathcal{Y}\in\mathbb{R}^{T_{1}\times\cdots\times T_{d}\times(W-D+1)\times(H- D+1)}\) using the following equation:
\[\mathcal{Y}(t_{1},\ldots,t_{d},y,x)\] \[= \sum_{i=1}^{D}\sum_{j=1}^{D}\sum_{s_{1},\ldots,s_{d}}\mathcal{X} (s_{1},\ldots,s_{d},j+y-1,i+x-1)\times\] \[\mathbf{G}_{0}[1,j+D(i-1)]\mathbf{G}_{1}[t_{1},s_{1}]\ldots\mathbf{G}_{d}[t_ {d},s_{d}].\]
While training the network, the stochastic gradient is applied to each element of the TT-cores with momentum [21].
### _Canonical Polyadic Decomposition_
In CNNs, convolution layers map an input tensor of size \(S\ \times W\times H\) into an output tensor of size \(T\times W^{\prime}\times H^{\prime}\) using a kernel tensor of size \(T\times S\times D\times D\) where \(S\) and \(T\) represent different input and output channels, respectively. Now consider how to approximate the kernel tensor \(\mathcal{K}\) with
Fig. 11: Reducing convolution to a matrix-by-matrix multiplication [21].
Fig. 10: TT-format fully connected layer [41].
rank-\(R\) CP-decomposition. The number of parameters needed to represent the kernel tensor of size \(T\times S\times D\times D\) after the decomposition is \(R(D^{2}+T+S)\), since the \(4\)-dimensional kernels are reshaped to \(3\)-dimensional kernels of size \(T\times S\times D^{2}\), the filter size \(D\) being relatively small (e.g., \(3\times 3\) or \(5\times 5\)). The rank-\(R\) CP format of the reshaped kernel tensor can be represented as [15]:
\[\mathcal{K}_{t,s,j,i}=\sum_{r=1}^{R}\mathbf{U}_{r,s}^{(1)}\mathcal{U}_{r,j,i}^{(2) }\mathbf{U}_{t,r}^{(3)},\]
where \(\mathbf{U}_{r,s}^{(1)}\), \(\mathcal{U}_{r,j,i}^{(2)}\), \(\mathbf{U}_{t,r}^{(3)}\) are the three tensors of sizes \(R\times S\), \(R\times D\times D\) and \(T\times R\), respectively. Then the approximate transformation of the convolution from the input tensor \(\mathcal{X}\) to the output tensor \(\mathcal{Y}\) is given by:
\[\mathcal{Y}_{t,w^{\prime},h^{\prime}}=\sum_{r=1}^{R}\mathbf{U}_{t,r}^{(3)}(\sum_{ j=1}^{D}\sum_{i=1}^{D}\mathcal{U}_{r,j,i}^{(2)}(\sum_{s=1}^{S}\mathbf{U}_{r,s}^{(1)} \mathcal{X}_{s,w_{j},h_{i}})).\]
As shown in Fig. 12 from [15], this transformation is equivalent to a sequence of three separate small convolutional kernels in a row:
\[\mathcal{Z}_{r,w,h} =\sum_{s=1}^{S}\mathbf{U}_{r,s}^{(1)}\mathcal{X}_{s,w,h},\] \[\mathcal{Z^{\prime}}_{r,w^{\prime},h^{\prime}} =\sum_{j=1}^{D}\sum_{i=1}^{D}\mathcal{U}_{r,j,i}^{(2)}\mathcal{Z}_ {r,w_{j},h_{i}},\] \[\mathcal{Y}_{t,w^{\prime},h^{\prime}} =\sum_{r=1}^{R}\mathbf{U}_{t,r}^{(3)}\mathcal{Z^{\prime}}_{r,w^{\prime },h^{\prime}}.\]
where \(\mathcal{Z}_{r,w,h}\) and \(\mathcal{Z^{\prime}}_{r,w^{\prime},h^{\prime}}\) are intermediate feature tensors of sizes \(R\times W\times H\) and \(R\times W^{\prime}\times H^{\prime}\), respectively.
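A minimal NumPy sketch of this three-stage CP convolution (a \(1\times 1\) channel projection, a per-rank \(D\times D\) spatial convolution, and a \(1\times 1\) channel expansion), with assumed sizes, stride 1 and no padding:

```python
import numpy as np

# CP-decomposed convolution as three sequential stages.
S, T, D, R, W, H = 3, 8, 3, 4, 10, 10   # illustrative, assumed sizes
Wp, Hp = W - D + 1, H - D + 1
U1 = np.random.randn(R, S)              # input-channel factor,  R x S
U2 = np.random.randn(R, D, D)           # spatial factor,        R x D x D
U3 = np.random.randn(T, R)              # output-channel factor, T x R
X = np.random.randn(S, W, H)

Z = np.einsum('rs,swh->rwh', U1, X)     # 1x1 conv: S -> R channels
Zp = np.zeros((R, Wp, Hp))
for w in range(Wp):
    for h in range(Hp):                  # per-rank (depthwise) D x D conv
        Zp[:, w, h] = np.sum(U2 * Z[:, w:w + D, h:h + D], axis=(1, 2))
Y = np.einsum('tr,rwh->twh', U3, Zp)    # 1x1 conv: R -> T channels
print(Y.shape)  # (8, 8, 8)
```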
After this replacement, the entire network can be trained using a standard backpropagation process. Ranks play an important role in CP decomposition. Unfortunately, no polynomial time algorithm for determining the rank of a tensor exists [46, 48]. Therefore, most algorithms approximate the tensor with different ranks until a "good" rank is found. In [15], the authors applied rank-\(5\) CP decomposition first and then fine-tuned the whole network to balance the accuracy loss and the number of ranks. In [56], authors applied a heuristic binary search to find the smallest rank such that the accuracy drop after every single layer is acceptable.
### _Tucker Decomposition_
The \(4^{th}\)-order kernel tensor \(\mathcal{K}\) can be decomposed by the rank-(\(R_{1}\), \(R_{2}\), \(R_{3}\), \(R_{4}\)) Tucker decomposition as:
\[\mathcal{K}_{t,s,j,i}=\sum_{r_{1}=1}^{R_{1}}\sum_{r_{2}=1}^{R_{2}}\sum_{r_{3 }=1}^{R_{3}}\sum_{r_{4}=1}^{R_{4}}\mathcal{G^{\prime}}_{r_{4},r_{3},r_{2},r_{1 }}\mathbf{U}_{r_{1},j}^{(1)}\mathbf{U}_{r_{2},i}^{(2)}\mathbf{U}_{r_{3},s}^{(3)}\mathbf{U}_{r_{4 },t}^{(4)},\]
where \(\mathcal{G^{\prime}}\) is the core tensor of size \(R_{4}\times R_{3}\times R_{2}\times R_{1}\) and \(\mathbf{U}^{(1)}\), \(\mathbf{U}^{(2)}\), \(\mathbf{U}^{(3)}\) and \(\mathbf{U}^{(4)}\) are the factor matrices of sizes \(R_{1}\times D\), \(R_{2}\times D\), \(R_{3}\times S\) and \(R_{4}\times T\), respectively.
As mentioned before, the filter size \(D\) is relatively small compared to the number of input and output channels. Mode-1 and mode-2 which are associated with the filter sizes don't need to be decomposed. Under this condition, the kernel tensor can be decomposed by the Tucker-2 decomposition as follows:
\[\mathcal{K}_{t,s,j,i}=\sum_{r_{3}=1}^{R_{3}}\sum_{r_{4}=1}^{R_{4}}\mathcal{C}_ {r_{4},r_{3},j,i}\mathbf{U}_{r_{3},s}^{(3)}\mathbf{U}_{r_{4},t}^{(4)},\]
where \(\mathcal{C}\) represents the core tensor of size \(R_{4}\times R_{3}\times D\times D\). As shown in Fig. 13 from [19], this transformation is equivalent to a sequence of three separate small convolutional kernels:
\[\mathcal{Z}_{r_{3},w,h} =\sum_{s=1}^{S}\mathbf{U}_{r_{3},s}^{(3)}\mathcal{X}_{s,w,h},\] \[\mathcal{Z^{\prime}}_{r_{4},w^{\prime},h^{\prime}} =\sum_{r_{3}=1}^{R_{3}}\sum_{j=1}^{D}\sum_{i=1}^{D}\mathcal{C}_{r_{4},r_{3},j,i}\mathcal{Z}_{r_{3},w_{j},h_{i}},\] \[\mathcal{Y}_{t,w^{\prime},h^{\prime}} =\sum_{r_{4}=1}^{R_{4}}\mathbf{U}_{r_{4},t}^{(4)}\mathcal{Z^{\prime}}_{r_{4},w^{\prime},h^{\prime}},\]
where \(\mathcal{Z}\) and \(\mathcal{Z^{\prime}}\) are the intermediate tensors of sizes \(R_{3}\times W\times H\) and \(R_{4}\times W^{\prime}\times H^{\prime}\), respectively.
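A minimal NumPy sketch of the Tucker-2 pipeline (input projection, small core convolution, output projection), again with assumed sizes, stride 1 and no padding:

```python
import numpy as np

# Tucker-2 decomposed convolution as three sequential stages.
S, T, D, R3, R4, W, H = 3, 8, 3, 2, 4, 10, 10   # assumed sizes
Wp, Hp = W - D + 1, H - D + 1
U3 = np.random.randn(R3, S)                     # input factor,  R3 x S
C = np.random.randn(R4, R3, D, D)               # core tensor
U4 = np.random.randn(R4, T)                     # output factor, R4 x T
X = np.random.randn(S, W, H)

Z = np.einsum('rs,swh->rwh', U3, X)             # project S -> R3 channels
Zp = np.zeros((R4, Wp, Hp))
for w in range(Wp):
    for h in range(Hp):                          # small R4 x R3 x D x D conv
        Zp[:, w, h] = np.tensordot(C, Z[:, w:w + D, h:h + D], axes=3)
Y = np.einsum('rt,rwh->twh', U4, Zp)            # project R4 -> T channels
print(Y.shape)  # (8, 8, 8)
```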
The rank-(\(R_{3},R_{4}\)) parameters are very important in the Tucker decomposition. They control the balance between the model compression ratio and accuracy loss. In [19], the authors applied global analytic solutions for variational Bayesian matrix factorization (VBMF) from [63] to determine the ranks of the Tucker decomposition. VBMF can be used to find noise variance, ranks and theoretical conditions for perfect rank recovery [63].
## IV Tensorizing Recurrent Neural Networks
In this section, implementations of applying different tensor decomposition methods to RNN models are described. This section demonstrates the core steps of four tensor-decomposed
Fig. 12: CP decomposed convolution layer in [15].
Fig. 13: Tucker-2 decomposed convolution layer in [19].
RNN models: 1) transform \(W\) and \(x\) into tensor representations \(\mathcal{W}\) and \(\mathcal{X}\); 2) decompose \(\mathcal{W}\) into several low-rank tensors using different tensor decomposition methods; 3) approximate the original product \(W\cdot x\) by the tensor computation between the decomposed weight tensor \(\mathcal{W}\) and the input tensor \(\mathcal{X}\).
### _Tensorizing \(W\), \(x\) and \(y\)_
Considering that \(W\) is a 2-D matrix, it should be reshaped, since tensor decomposition methods are mainly applied to high-order tensors. The input vector \(x\in\mathbb{R}^{N}\), output vector \(y\in\mathbb{R}^{M}\) and the weight matrix \(W\in\mathbb{R}^{M\times N}\) are tensorized into tensors \(\mathcal{X}\in\mathbb{R}^{n_{1}\times n_{2}\times\cdots\times n_{d}}\), \(\mathcal{Y}\in\mathbb{R}^{m_{1}\times m_{2}\times\cdots\times m_{d}}\) and \(\mathcal{W}\in\mathbb{R}^{m_{1}\times m_{2}\times\cdots\times m_{d}\times n_{1}\times n_{2}\times\cdots\times n_{d}}\), respectively, where \(M=\prod_{i=1}^{d}m_{i}\) and \(N=\prod_{j=1}^{d}n_{j}\).
### _Decomposing \(W\)_
In general, the key idea of building tensor-decomposed RNN is to transform the input-to-hidden weight matrices in different tensor decomposition forms.
**Tensor Train Decomposition** Following the Tensor Train Decomposition method, \(\mathcal{W}\) can be decomposed into [31]:
\[TTD(\mathcal{W})=\mathcal{G}_{1}\mathcal{G}_{2}\cdots\mathcal{G}_{d}.\]
Now, instead of storing the full tensor \(\mathcal{W}\) with \(N\cdot M\) entries, only the set of low-rank core tensors \(\{\mathcal{G}_{k}\}_{k=1}^{d}\), with a total of \(\sum_{k=1}^{d}n_{k}\cdot m_{k}\cdot r_{k-1}\cdot r_{k}\) entries, is stored.
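The resulting savings are easy to quantify. A minimal sketch with illustrative, assumed mode sizes and ranks:

```python
import numpy as np

# Compare the parameter count of a dense weight matrix with its TT format.
n = [4, 8, 8, 4]          # input modes,  N = 1024 (assumed)
m = [4, 8, 8, 4]          # output modes, M = 1024 (assumed)
r = [1, 4, 4, 4, 1]       # TT-ranks (boundary ranks fixed to 1)

dense = int(np.prod(n)) * int(np.prod(m))
tt = sum(n[k] * m[k] * r[k] * r[k + 1] for k in range(len(n)))
print(dense, tt, dense / tt)   # 1048576 2176 ~482x compression
```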
**Tensor Ring Decomposition** Following the Tensor Ring Decomposition method, \(\mathcal{W}\) can be decomposed into [32]:
\[TRD(\mathcal{W})=\sum_{r_{0},\cdots,r_{2d-1}} \mathcal{G}_{r_{0},n_{1},r_{1}}^{(1)}\cdots\mathcal{G}_{r_{d-1},n _{d},r_{d}}^{(d)}\] \[\mathcal{G}_{r_{d},m_{1},r_{d+1}}^{(d+1)}\cdots\mathcal{G}_{r_{2 d-1},m_{d},r_{0}}^{(2d)}\]
For a \(d\)-order input and \(d\)-order output, the weight tensor is decomposed into \(2d\) core tensors multiplied one by one where each corresponding to an input or an output dimension.
**Block Term Decomposition** Following the Block Term Decomposition method, \(\mathcal{W}\) can be decomposed into [33]:
\[BTD(\mathcal{W})=\sum_{n=1}^{N}\mathcal{G}_{n}\bullet_{1}\mathcal{A}_{n}^{(1)} \bullet_{2}\cdots\bullet_{d}\mathcal{A}_{n}^{(d)},\]
where \(\mathcal{G}_{n}\in\mathbb{R}^{R_{1}\times R_{2}\times\cdots\times R_{d}}\) is the core tensor, \(\mathcal{A}_{n}^{(k)}\in\mathbb{R}^{n_{k}\times m_{k}\times R_{k}}\) is the \(k\)-th factor tensor, \(N\) denotes the CP-rank and \(d\) denotes the Core-order.
**Hierarchical Tucker Decomposition** Following the Hierarchical Tucker Decomposition method, \(\mathcal{W}\) can be decomposed into [34]:
\[HTD(\mathcal{W})= \sum_{k=1}^{r_{D}}\sum_{p=1}^{r_{D_{1}}}\sum_{q=1}^{r_{D_{2}}}( \mathcal{G}_{D})_{(k,p,q)}\] \[\cdot(\mathcal{U}_{D_{1}})_{(p,\phi_{D_{1}}(i,j))}(\mathcal{U}_{ D_{2}})_{(q,\phi_{D_{2}}(i,j))},\]
where the indices \(i=(i_{1},i_{2},\cdots,i_{d})\) and \(j=(j_{1},j_{2},\cdots,j_{d})\) are produced by the mapping function \(\phi_{s}(i,j)\) for frame \(\mathcal{U}_{s}\) with the given \(s\) and \(d\). Furthermore, \(\mathcal{U}_{D_{1}}\) and \(\mathcal{U}_{D_{2}}\) can be recursively calculated by:
\[(\mathcal{U}_{s})_{(k,\phi_{s}(i,j))}= \sum_{p=1}^{r_{s_{1}}}\sum_{q=1}^{r_{s_{2}}}(\mathcal{G}_{s})_{(k,p,q)}\] \[(\mathcal{U}_{s_{1}})_{(p,\phi_{s_{1}}(i,j))}(\mathcal{U}_{s_{2}}) _{(q,\phi_{s_{2}}(i,j))},\]
where \(D=\{1,2,\cdots,d\}\) is associated with the root node, and \(D_{1}=\{1,\cdots,\lfloor d/2\rfloor\}\) and \(D_{2}=\{\lfloor d/2\rfloor+1,\cdots,d\}\) are associated with its left and right child nodes.
### _Tensor Decomposition Layer_
After reshaping the weight matrix \(W\) and the input vector \(x\) into higher-order tensors and decomposing the weight tensor into tensor decomposed representations using the four different Tensor Decomposition methods as described above, the output tensor \(\mathcal{Y}\) can be computed by manipulating \(\mathcal{W}\) and \(\mathcal{X}\). The final output vector \(y\) can be obtained by reshaping the output tensor. The whole calculation from the input vector to the output vector can be denoted as the tensor decomposition layer (TDL):
\[y=TDL(W,x),\]
where TDL stands for one of Tensor Train Layer [31], Tensor Ring Layer [32], Block-Term Layer [33] and HT Layer [34] depending on the tensor decomposition method used.
Tensor-decomposed RNN models can be obtained by replacing the multiplication between the weight matrix \(W_{h}\) and input vector \(x\) with TDL. The hidden state \(h_{t}\) at time \(t\) can be computed as:
\[h_{t}=\sigma(TDL(W_{h},x_{t})+U_{h}h_{t-1}+b),\]
where \(\sigma(\cdot)\), \(W_{h}\) and \(U_{h}\) denote the activation function, the input-to-hidden layer weight matrix and the hidden-to-hidden layer matrix, respectively.
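To illustrate how a TDL slots into a recurrent cell, the following sketch reuses the TT contraction as the TDL instance and performs a single vanilla RNN step. All mode sizes and ranks are illustrative assumptions:

```python
import numpy as np

def tt_layer(cores, x, n):
    """A minimal TDL sketch in the TT format: computes y ~ W x."""
    out = x.reshape(1, *n)
    for core in cores:
        out = np.einsum('amnb,an...->b...m', core, out)
    return out.reshape(-1)

# One RNN step: h_t = tanh(TDL(W_h, x_t) + U_h h_{t-1} + b).
n, m, r = (4, 2, 3), (2, 2, 2), 2            # assumed modes and rank
ranks = [1, r, r, 1]
cores = [np.random.randn(ranks[k], m[k], n[k], ranks[k + 1])
         for k in range(3)]
hidden = int(np.prod(m))
U_h = np.random.randn(hidden, hidden)
b = np.zeros(hidden)

x_t = np.random.randn(int(np.prod(n)))
h_prev = np.zeros(hidden)
h_t = np.tanh(tt_layer(cores, x_t, n) + U_h @ h_prev + b)
print(h_t.shape)  # (8,)
```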
Replacing the multiplications between weight matrices and the input vector with TDL in the standard LSTM leads to:
\[f_{t} =\sigma(TDL(W_{f},x_{t})+U_{f}h_{t-1}+b_{f})\] \[i_{t} =\sigma(TDL(W_{i},x_{t})+U_{i}h_{t-1}+b_{i})\] \[o_{t} =\sigma(TDL(W_{o},x_{t})+U_{o}h_{t-1}+b_{o})\] \[g_{t} =tanh(TDL(W_{g},x_{t})+U_{g}h_{t-1}+b_{g})\] \[c_{t} =f_{t}\circ c_{t-1}+i_{t}\circ g_{t}\] \[h_{t} =o_{t}\circ tanh(c_{t}),\]
where \(\circ\), \(\sigma(\cdot)\) and \(tanh(\cdot)\) denote the Hadamard product (element-wise product), sigmoid function and the hyperbolic tangent function, respectively. The parameters \(f_{t}\), \(i_{t}\), \(o_{t}\), \(g_{t}\), \(c_{t}\) and \(h_{t}\) denote the forget gate's activation vector, input gate's activation vector, output gate's activation vector, cell input activation vector, cell state vector and hidden state vector, respectively. Note that \(W_{*}\), \(U_{*}\) and \(b_{*}\) (where \(*\) can be \(f\), \(i\), \(o\) or \(g\)) are weight matrices and bias vector parameters which need to be learned during training. Back Propagation Through
Time (BPTT) is used to compute the gradients of an RNN [64]. Following the regular RNN backpropagation procedure, the gradient \(\frac{\partial L}{\partial y}\) is computed by the original BPTT algorithm, where \(y=Wx_{t}\). Applying to \(\frac{\partial L}{\partial y}\) the same tensorization operation as applied to \(y\), we obtain the tensorized gradient \(\frac{\partial L}{\partial\mathcal{Y}}\). All the tensorized RNN/LSTM models can be trained end-to-end directly. More details can be found in [31, 32, 33, 34].
## V Tensorizing Transformers
In 2017, a team at Google Brain introduced the Transformer [65], which has gradually become the preferred model for solving NLP problems, replacing RNN models such as LSTM [66]. In this section, we review two tensor decomposition methods applied to Transformers: tensorized embedding layers [38] and multi-linear attention [39], which compress the embedding layers and the multi-head attention of the Transformer, respectively.
### _Multi-Linear Attention_
As shown in Fig. 14, the original Transformer model utilizes an encoder-decoder architecture. The encoder is made up of encoding layers that process the input successively, while the decoder consists of decoding layers that operate on the encoder's output. Each encoder layer's function is to create encodings that contain information about which parts of the input are mutually relevant. Those encodings are passed to the next encoder layer as inputs. Each decoder layer does the reverse: it takes all the encodings and generates an output sequence using the contextual information they contain. Both encoder and decoder layers make use of an attention mechanism.
The attention function is called "Scaled Dot-Product Attention". It computes an output as a weighted sum of the values, with each value's weight determined by the compatibility function of the query with the corresponding key. In practice, the attention function is computed on a set of queries simultaneously by packing queries, keys and values into matrices \(Q\), \(K\) and \(V\), respectively. Then the matrix of outputs can be computed as [65]:
\[Attention(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d}})V,\]
where \(d\) is the number of columns of \(Q\) and \(K\). Instead of performing a single attention function, it is beneficial to linearly project the queries, keys and values \(h\) times using distinct linear projections. The attention function is then applied concurrently to each of these projected versions of queries, keys, and values. The final values of the attention function with different inputs are then concatenated and projected to generate the final values. Multi-head attention allows the model to learn from different representations at different positions. It can be expressed as follows [65]:
\[MultiHead(Q,K,V)=Concat(head_{1},\cdots,head_{h})W^{O}\] \[\text{where }head_{i}=Attention(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}),\]
where matrices \(W_{i}^{Q}\), \(W_{i}^{K}\) and \(W_{i}^{V}\) have the same size of \(d_{model}\times d\) and \(W^{O}\in\mathbb{R}^{hd\times d_{model}}\).
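For reference, a minimal NumPy sketch of single-head Scaled Dot-Product Attention; the sequence length \(N\) and head dimension \(d\) below are assumed:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

N, d = 5, 16                                  # assumed sizes
Q = np.random.randn(N, d)
K = np.random.randn(N, d)
V = np.random.randn(N, d)

# Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
attn = softmax(Q @ K.T / np.sqrt(d)) @ V
print(attn.shape)  # (5, 16)
```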
Fig. 15(left) shows the schematic diagram of Single-block attention based on the Tucker decomposition. The query, key and value can be mapped into three factor matrices (\(Q\), \(K\) and \(V\)), where each factor matrix is composed of three groups of orthogonal basis vectors. Then a new attention can be constructed by initializing a 3rd-order diagonal tensor \(\mathcal{G}\), as shown in Fig. 15(left), where \(R\), \(N\) and \(d\) represent the rank of the tensor, the length of the input sequence and the dimension of the matrix, respectively. The Tucker-decomposition inspired Single-block attention can be computed as [39]:
\[Atten_{TD}(\mathcal{G};Q,K,V)=\sum_{i=1}^{I}\sum_{j=1}^{J}\sum_{m=1}^{M} \mathcal{G}_{ijm}Q_{i}\circ K_{j}\circ V_{m},\]
where \(\mathcal{G}\) is the trainable core tensor and \(\circ\) represents the outer product. \(Q_{i}\), \(K_{j}\) and \(V_{m}\) represent the column vectors of matrices \(Q\), \(K\) and \(V\), respectively. Parameters \(i\), \(j\) and \(m\) are the indexes of the core tensor. In practice, the core tensor \(\mathcal{G}\) can be defined as [39]:
\[\mathcal{G}_{ijm}=\begin{cases}rand(0,1)&i=j=m\\ 0&otherwise\end{cases}\]
where \(rand(0,1)\) represents a random value drawn from \((0,1)\). Then the Scaled Dot-Product attention can be computed by summing the output of the Single-block attention function over the second index as follows [39]:
\[Attention(Q,K,V)_{i,m}=\sum_{j=1}^{N}Atten_{TD}(\mathcal{G};Q,K,V)_{i,j,m},\]
where \(i\), \(j\) and \(m\) represent the indices of the output of the single-block attention.
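A minimal NumPy sketch of this single-block attention with a diagonal core, assuming illustrative sizes \(N\) and \(d\) and rank \(R=d\):

```python
import numpy as np

# Tucker-inspired single-block attention with a diagonal core tensor.
N, d = 5, 8                                   # assumed sizes
Q, K, V = (np.random.randn(N, d) for _ in range(3))
g = np.random.rand(d)                         # diagonal entries of the core

# Atten_TD(G;Q,K,V)_{a,b,c} = sum_p g_p Q_{a,p} K_{b,p} V_{c,p}
atten_td = np.einsum('p,ap,bp,cp->abc', g, Q, K, V)   # N x N x N tensor

# Attention output: sum the block over the second (key) index.
out = atten_td.sum(axis=1)                    # N x N matrix
print(out.shape)  # (5, 5)
```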
Fig. 14: Model architecture of the Transformer [65].
In order to compress the multi-head function, multi-linear attention was proposed utilizing the idea of parameter sharing. As shown in Fig. 15(right), a set of linear projections map queries, keys and values to three matrices that can be composed of basis vectors. Then the multi-linear attention can be computed as follows [39]:
\[MultiLinear(\mathcal{G};Q,K,V) =SplitConcat(\frac{1}{h}*(T_{1}+\cdots+T_{h}))W^{O}\] \[\text{where }T_{j} =Atten_{TD}(\mathcal{G}_{j};QW^{q},KW^{k},VW^{v}),\]
where the core tensor \(\mathcal{G}_{j}\) is a diagonal tensor where \(j\in\{1,\cdots,h\}\). \(SplitConcat(\cdot)\) performs the concatenation following splitting. Matrices \(W^{q}\), \(W^{k}\) and \(W^{v}\) are the parameter matrices that are shared.
### _Tensorized Embedding Layers_
As shown in Fig. 14, the embedding layers convert input words into vectors in NLP tasks. However, large vocabulary results in enormous weight matrices, which prevents their use in situations with constrained resources. A parameter-efficient embedding layer, TT-embedding, was introduced [38]. The TT-embedding utilizes the Tensor Train decomposition by decomposing the huge embedding matrix into a sequence of much smaller 2-dimensional and 3-dimensional tensors. The method is the same as applying the Tensor Train decomposition to the fully connected layers of CNNs as described before.
Let \(X\in\mathbb{R}^{I\times J}\) be the embedding matrix for a vocabulary of size \(I\) and embedding dimension \(J\). Given factorizations of its dimensions \(I=\prod_{k=1}^{N}I_{k}\) and \(J=\prod_{k=1}^{N}J_{k}\), the embedding matrix can be reshaped into a higher-order tensor \(\mathcal{X}\in\mathbb{R}^{I_{1}J_{1}\times I_{2}J_{2}\times\cdots\times I_{N}J _{N}}\). Then the high-order tensor can be decomposed into a sequence of smaller tensors \(\{\mathcal{G}_{k}\in\mathbb{R}^{R_{k-1}\times I_{k}\times J_{k}\times R_{k}} \}_{k=1}^{N}\). The sequence \(\{R_{k}\}_{k=1}^{N-1}\) represents the TT-ranks, which directly affect the compression ratio.
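A minimal sketch of a TT-embedding lookup follows: one row of the (never materialized) \(I\times J\) embedding matrix is assembled from small TT-cores. The factorizations, ranks and mixed-radix index mapping below are illustrative assumptions; the exact index mapping in [38] may differ.

```python
import numpy as np

I = [25, 30, 40]                     # vocabulary factors, I = 30000 (assumed)
J = [4, 4, 8]                        # embedding factors,  J = 128   (assumed)
R = [1, 16, 16, 1]                   # TT-ranks (assumed)
cores = [np.random.randn(R[k], I[k], J[k], R[k + 1]) for k in range(3)]

token = 12345                        # flat vocabulary index
i3 = token % I[2]; rest = token // I[2]
i2 = rest % I[1]; i1 = rest // I[1]  # mixed-radix index (i1, i2, i3)

s1 = cores[0][0, i1]                 # (J1, r1)
s2 = cores[1][:, i2]                 # (r1, J2, r2)
s3 = cores[2][:, i3, :, 0]           # (r2, J3)
row = np.einsum('jb,bkc,cl->jkl', s1, s2, s3).ravel()
print(row.shape)  # (128,) -- one embedding row, from ~37k stored parameters
```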
## VI Results
### _Convolutional Neural networks_
In this section, the implementations and results of applying these tensor decomposition methods to some classical CNN models are described. They are categorized based on the tensor decomposition methods.
#### Vi-A1 Tensor Train Decomposition
In [21], Tensor Train decomposition was applied to the convolutional layers and fully connected layers as described before. The proposed method was tested on CIFAR-10 dataset from [67]. Two networks were built: one is dominated by the convolutional layers which occupy \(99.54\%\) parameters of the network and the other is dominated by the fully connected layers which occupy \(95.98\%\) parameters of the network. The first network consists of: conv with \(64\) output channels; batch normalization; ReLU; conv with \(64\) output channels; batch normalization; ReLU; max-pooling of size \(3\times 3\) with stride \(2\); conv with \(128\) output channels; batch normalization; ReLU; max-pooling of size \(3\times 3\) with stride \(2\); conv with \(128\) output channels; batch normalization; ReLU; conv with \(128\) output channels; avg-pooling of size \(4\times 4\); FC of size (\(128\times 10\)), where 'conv' and 'FC' stand for convolutional layer and fully connected layer, respectively. All convolutional filters were of size \(3\times 3\). Each convolutional layer excluding the first one was replaced by the TT-conv layer as mentioned before. The authors also compared the proposed Tensor Train decomposition method for the convolutional layer with the naive approach which directly applies the Tensor Train decomposition to the \(4^{th}\)-order convolutional kernel. As illustrated in TABLE II, the proposed approach can achieve similar accuracies as the naive baseline on the \(2\times\) compression level. The second network was adapted from the first one by replacing the average pooling with two fully connected layers of size \(8192\times 1536\) and \(1536\times 512\). Initially, the fully-connected part was the memory bottleneck. As shown in TABLE III, a \(21.01\times\) network compression with \(0.7\%\) accuracy drop can be achieved by only replacing the fully connected layers with TT-FC layers as described before.
Fig. 15: (left) Single-block attention using Tucker decomposition. (right) Multi-linear attention based on Block-Term tensor decomposition [39].
Then the bottleneck moves to the convolutional part. At this point, by additionally replacing the convolutional layers with the TT-conv layers, an \(82.87\times\) network compression with \(1.1\%\) accuracy drop can be achieved as shown in TABLE III. More details can be found in [21].
#### V-A2 CP Decomposition
In [17], CP decomposition was implemented in two steps: applying CP decomposition to the convolutional layer using the NLS algorithm and then fine-tuning the entire network using backpropagation. Two network architectures were tested: a small character-classification CNN from [68] and AlexNet [1]. The character-classification CNN, called CharNet, has four convolutional layers. This CNN was used to classify images of size \(24\times 24\) into one of 36 classes (10 digits + 26 characters). Only the second and the third convolutional layers were compressed, since they account for more than \(90\%\) of the processing time. The second layer has \(48\) input channels and \(128\) output channels with filters of size \(9\times 9\). The third layer has \(48\) input channels and \(128\) output channels with filters of size \(8\times 8\). First, the second layer was compressed using CP decomposition with rank \(64\). Then all layers but the new ones were fine-tuned to reduce the accuracy drop. Finally, the third layer was approximated using CP decomposition with rank \(64\). Since the last approximation does not lead to a large accuracy drop, there is no need to fine-tune the network after that. The compressed network is \(8.5\) times faster than the original one, while the classification accuracy only drops by \(1\%\), to \(90.2\%\). AlexNet is one of the common object recognition networks; it has eight layers, consisting of five convolutional layers and three fully connected layers. The second convolutional layer of AlexNet was compressed in [17] using CP decomposition. The running time of the second layer can be accelerated by \(3.6\times\) using a rank of \(200\) at the expense of \(0.5\%\) accuracy degradation, or by \(4.5\times\) with a rank of \(140\) at the expense of \(\approx 1\%\) accuracy degradation. Another fact mentioned in [17] is that greedy CP decompositions like ALS work worse than NLS for CNN model compression.
Tensor Power Method (TPM) [69] was used to apply CP decomposition to the convolutional kernels in [15]. Compared to ALS, TPM can achieve the same variance with a smaller rank, since the rank-\(1\) tensors found in the early steps of TPM represent most of the variance in the original tensor. TPM compressed the convolutional kernels by adding rank-\(1\) tensors until a predefined number of rank-\(1\) tensors is found. First, TPM finds a rank-\(1\) tensor \(\mathcal{K}_{1}\) by minimizing \(||\mathcal{K}-\mathcal{K}_{1}||_{2}\). The next iteration approximates the residual tensor \(\mathcal{K}_{residual}=\mathcal{K}-\mathcal{K}_{1}\) by minimizing \(||\mathcal{K}_{residual}-\mathcal{K}_{2}||_{2}\). This continues until \(\mathcal{K}_{R}\) is found. More details can be found in [69]. In [15], CP-based decomposition was applied to all of the convolutional layers for the first time. AlexNet was used here. The authors overcame the instability of CP decomposition by fine-tuning after each layer's decomposition. The fully connected layers were decomposed using SVD as described before. Fig. 16 from [15] shows that decomposition and fine-tuning are performed iteratively from Conv1 to FC8. Black solid arrows represent the connection between each layer. Red dotted lines represent the decomposition processes. Black dotted lines show that the weights do not change from the previous iteration. Purple block arrows represent fine-tuning by backpropagation to the whole network. The rank of each layer was set to be proportional to its sensitivity, defined as _loss/total_loss_. This method achieved \(6.98\times\) parameter reduction and \(3.53\times\) running time reduction at the expense of \(1.42\%\) accuracy loss.
#### V-A3 Tucker Decomposition
In [19], one-shot Tucker Decomposition on the whole network consists of three steps: rank selection using VBMF, Tucker decomposition on each layer's kernel tensor and one-shot fine-tuning the whole network with standard back-propagation. Fig. 17 from [19] shows the whole scheme. The accuracy significantly dropped after step two but recovered quickly in one epoch. Four representative CNNs, AlexNet, VGG-S, GoogLeNet and VGG-16 were compressed using Tucker decomposition in [19]. For GoogLeNet, only the 3 \(\times\) 3 convolution kernel was compressed in the case of inception module. For VGG-16, only the convolutional layers were compressed. This method achieved \(5.46\times\) / \(2.67\times\) (AlexNet), \(7.40\times\) / \(4.80\times\) (VGG-S), \(1.28\times\) / \(2.06\times\) (GoogLeNet) and \(1.09\times\) / \(4.93\times\) (VGG-16) reductions in total weights and FLOPs, respectively.
### _Recurrent Neural Networks_
In this section, the results of applying different tensor decomposition methods to RNN models are described.
To better understand the impact of different tensor decomposition methods on RNNs, the time complexity and space complexity of different RNN models are summarized in Table IV. Notice that HT-RNN has the lowest space complexity and TT-RNN has the lowest time complexity.
The UCF11 YouTube Action dataset [70] contains \(1600\) video clips of a resolution \(320\times 240\), falling into \(11\) action
categories (e.g., basketball, biking, diving, etc.). Each category contains \(25\) video groups, each of which contains more than \(4\) clips. Table V summarizes the performance of different tensor-decomposed LSTM models on the UCF11 dataset. Notice that HT-LSTM requires at least \(1.38\times\) fewer parameters while achieving at least a \(0.3\%\) increase in accuracy compared to the other tensor-decomposed LSTM models.
### _Transformers_
In this section, the results of applying different tensor decomposition methods to Transformers are described.
#### Iv-C1 Multi-Linear Attention
The proposed attention method was tested on three language modeling tasks (PTB, WikiText-103 and One-billion) and a neural machine translation task (WMT-2016 English-German) [39].
Language modeling is the task of predicting the next word or character in a document. It can be used to generate text or further fine-tuned to solve different NLP tasks [36]. Three datasets were chosen: one of small size (PTB), one of medium size (WikiText-103) and one of large size (One-billion). PTB contains 929,900 training tokens, 73,900 validation words, and 82,900 test words [71]. WikiText-103 has 267,735 distinct
Fig. 16: CP-TPM method in AlexNet [15]. The black solid arrow represents the connection between layers. The red dotted line stands for the decomposition process and the black dotted line means that the weights are taken from the previous iteration. The purple block arrow stands for fine-tuning by backpropagation to all the layers. First, Conv is decomposed into \(2\) layers while the others remain the same. Then, fine-tuning is performed on the whole network. Afterward, Conv2 is decomposed into \(3\) layers and then fine-tuning is performed on the whole network followed. The process repeats until all the layers are decomposed and fine-tuned.
Fig. 17: One-shot whole network compression based on Tucker decomposition [19]. Tucker-2 decomposition is applied from the second convolutional layer to the first fully connected layers while Tucker-1 decomposition is applied to the other layers.
tokens. The dataset is a long-term dependency word-level language modeling benchmark. It contains 103 million training tokens from 28 thousand articles, with an average length of 3.6 thousand tokens per article. The One-Billion Word benchmark is a large dataset with 829,250,940 tokens over a vocabulary of 793,471 words. Models were evaluated using perplexity (PPL), which is derived from the average per-word log-probability. The lower the PPL, the more accurate the model. The standard multi-head attention layers in the Transformer were replaced with Multi-linear attention. A comparison of different model configurations on different datasets is shown in Table VI and Table VII. Notice that the tensorized transformer with Multi-linear attention achieves lower PPL with much fewer parameters than other models on all three datasets.
Neural Machine Translation involves translating text or speech from one language to another. The baseline is a vanilla Transformer trained on WMT 2016 English-German dataset [83]. For comparison, each of the attention layers was replaced with Multi-linear attention in the Encoder of the Transformer. The results are summarized in Table VIII. Notice that tensorized transformers with Multi-linear attention achieve better performance with fewer parameters than the vanilla Transformer.
#### Vi-B2 Tensorized Embedding Layers
The proposed TT-embedding layer was tested on two language modeling tasks (PTB and WikiText-103) and a machine translation task (WMT 2014 English-German). As shown in Table VII, Transformer-XL+TT stands for the transformer with TT-embedding layers. Compared to the Transformer with Multi-linear attention, Transformer-XL+TT cannot achieve as high a compression ratio. For the machine translation task, the baseline is a Transformer-big model on the WMT 2014 English-German dataset [65]. This dataset has around 4.5 million sentence pairs. The results are summarized in Table IX. Notice that the embedding layers can be compressed significantly at the cost of a small drop in the BLEU scores.
## VII Conclusion
This paper has summarized different tensor decomposition methods that are used to compress the model parameters of CNNs, RNNs and Transformers. In particular, three decomposition methods for CNNs, four decomposition methods for RNNs and two decomposition methods for Transformers have been reviewed. Finding the best rank for the low-rank approximation remains a challenge and is typically determined by trial and error. The model compression ratio is dependent on the rank approximation and the performance degradation that can be tolerated. For RNNs, the model compression ratio is much higher than that of CNNs. Additionally, the accuracy of the compressed RNN model can be better than that of the original model. All these models are trained end-to-end using backpropagation, instead of being obtained by applying tensor decomposition methods to pre-trained standard models. It is thus possible for them to achieve higher accuracy than the standard models with fewer parameters.
Future work needs to be directed toward comparing the performance of various decomposition methods using common datasets. Hardware-aware Tucker Decomposition was proposed to efficiently generate highly accurate and compact CNN models on GPUs [85]. Algorithm and Hardware Co-Design of high-performance energy-efficient LSTM networks was introduced based on Hierarchical Tucker Tensor Decomposition [86]. Future research also needs to be directed toward the evaluation of the hardware performance of these decomposition methods.
|
2307.10560 | Post-variational quantum neural networks | Hybrid quantum-classical computing in the noisy intermediate-scale quantum
(NISQ) era with variational algorithms can exhibit barren plateau issues,
causing difficult convergence of gradient-based optimization techniques. In
this paper, we discuss "post-variational strategies", which shift tunable
parameters from the quantum computer to the classical computer, opting for
ensemble strategies when optimizing quantum models. We discuss various
strategies and design principles for constructing individual quantum circuits,
where the resulting ensembles can be optimized with convex programming.
Further, we discuss architectural designs of post-variational quantum neural
networks and analyze the propagation of estimation errors throughout such
neural networks. Finally, we show that empirically, post-variational quantum
neural networks using our architectural designs can potentially provide better
results than variational algorithms and performance comparable to that of
two-layer neural networks. | Po-Wei Huang, Patrick Rebentrost | 2023-07-20T03:55:53Z | http://arxiv.org/abs/2307.10560v2 | # Post-variational quantum neural networks
###### Abstract
Quantum computing has the potential to provide substantial computational advantages over current state-of-the-art classical supercomputers. However, current hardware is not advanced enough to execute fault-tolerant quantum algorithms. An alternative of using hybrid quantum-classical computing with variational algorithms can exhibit barren plateau issues, causing slow convergence of gradient-based optimization techniques. In this paper, we discuss _"post-variational strategies"_, which shift tunable parameters from the quantum computer to the classical computer, opting for ensemble strategies when optimizing quantum models. We discuss various strategies and design principles for constructing individual quantum circuits, where the resulting ensembles can be optimized with convex programming. Further, we discuss architectural designs of post-variational quantum neural networks and analyze the propagation of estimation errors throughout such neural networks. Lastly, we show that our algorithm can be applied to real-world applications such as image classification on handwritten digits, producing a 96% classification accuracy.
## I Introduction
Variational quantum methods [1] are proposed to solve optimization problems in chemistry [2], combinatorial optimization [3] and machine learning [4] with a potential quantum advantage in the NISQ regime [5]. These methods often use hardware-efficient Ansatze that are problem-agnostic [6]. However, many Ansatze face the well-studied barren plateau problem [7]. Several methods are suggested to alleviate this problem, including parameter initialization that form block identities [8], layerwise training [9], pretraining with classical neural networks [10], and the usage of specific architectures such as quantum convolutional neural networks [11; 12].
Partially inspired by the barren plateau problem, Huang _et al._[13] considered the solving of linear systems with near-term quantum computers with the use of _classical combinations of quantum states_ (CQS). Here, based on the input matrix, a so-called Ansatz tree is constructed which allows for a solution with provable guarantees and also for the systematic application of heuristic methods. The iterative quantum eigensolver [14] builds quantum models by assuming the model to be a linear combination of unitaries and iteratively adding terms to the model until a close enough estimate is produced. Further, these concepts have also found use in quantum algorithms for semidefinite programming [15]. It is interesting to consider these concepts for other settings and problems such as neural networks for machine learning.
In our work, we discuss strategies that rely on the variational method of finding approximations in quantum mechanics as the theoretical basis for optimization but avoid the usage for parameterized quantum states. These strategies are part of what can be called _"post-variational methods"_. In our post-variational methods, we take the classical combination of multiple fixed quantum circuits and find the optimal combination though solving a classical convex optimization problem. We show that our post-variational methods benefit from the much shallower and simpler construction of their quantum circuits compared to both variational and fault-tolerant algorithms. As no variational parameters are used, the barren plateau issue of vanishing gradients can be avoided. Furthermore, we show a possible reduction in quantum measurements when compared to variational algorithms as our protocol does not require the iterative update of variational algorithms and can be further reduced to one-shot random measurements when the classical shadows protocol is applied [16]. In particular, we discuss the use of classical combinations of quantum circuits regarding quantum machine learning and neural networks. While our methods can consume more quantum resources than variational algorithms, they could provide a viable alternative in the NISQ regime (Figure 1).
## II Preliminaries
Notations.Given a field \(\mathbb{F}\) of either real or complex numbers, for vector \(v\in\mathbb{F}^{N}\), we denote \(\|v\|_{p}\) as its \(\ell_{p}\) norm. Given a collection/set of scalars \(S\) over field \(\mathbb{F}\), we can use the operation \(\operatorname{vec}(S)\) to vectorize the collection. We denote \([h]\) to be the set \(\{1,2,\cdots,h\}\).
For a matrix \(A\in\mathbb{F}^{M\times N}\), let \(A_{ij}\) be the \((i,j)\)-element of \(A\). We denote the spectral norm to be \(\|A\|_{2}=\max_{i}\sigma_{i}(A)\) and the \(\max\) norm to be \(\|A\|_{\max}=\max_{ij}|A_{ij}|\), where \(\sigma_{i}(A)\) are singular values of \(A\). Furthermore, we denote \(\sigma_{\min}(A)\) to be the smallest non-zero singular value of \(A\). We also denote \(A^{+}\) to be the Moore-Penrose inverse of \(A\)[17]. We note that \(\|A^{+}\|_{2}=\frac{1}{\sigma_{\min}(A)}\).
Classical shadows.The classical shadows method [16] introduces a randomized protocol to estimate the value of \(\operatorname{tr}(O_{i}\rho)\) over \(M\) observables up to additive error \(\epsilon\) with
\(O(\log(M))\) measurements. The quantum state is first evolved by a random unitary \(U\) selected from a tomographically complete set of unitaries, i.e. \(\rho\to U\rho U^{\dagger}\). Measuring the transformed quantum state in the computational basis, a string of outcomes \(b\in\{0,1\}^{n}\) can be produced. One can then construct and store \(U^{\dagger}|b\rangle\langle b|U\) classically using the stabilizer formalism [18], the expectation of which can be viewed as a quantum channel \(\mathcal{M}\), i.e., \(\mathbb{E}[U^{\dagger}|b\rangle\langle b|U]=\mathcal{M}(\rho).\) By inverting the quantum channel and repeating the above process multiple times, we can obtain multiple "_classical shadows_" \(\hat{\rho}\) of the density matrix \(\rho\) such that \(\hat{\rho}=\mathcal{M}^{-1}(U^{\dagger}|b\rangle\langle b|U).\) These classical shadows can be used to approximate the value of the quantum state \(\mathrm{tr}(O_{i}\rho)\) against a series of observables \(O_{1},O_{2},\cdots,O_{M}\) via the median-of-means estimator [19]. The number of classical shadows required to estimate \(M\) observables within an additive error of \(\epsilon\) is \(O(\log M\max_{i}\|O_{i}\|_{\mathrm{shadow}}^{2}/\epsilon^{2})\), where \(\|O_{i}\|_{\mathrm{shadow}}^{2}\), or the shadow norm, is dependent on the unitary ensemble used. When using tensor products of single-qubit Clifford gates (which is equivalent to measurement on a Pauli basis), \(\|O_{i}\|_{\mathrm{shadow}}^{2}\leq 4^{\mathcal{K}}\|O_{i}\|_{2}^{2}\) for observable \(O_{i}\) that acts non-trivially on \(\mathcal{K}\) qubits. For the rest of this paper, we call this particular property of an observable to be its _locality_, and denote the shadow norm as \(\|\cdot\|_{S}\).
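To make the protocol concrete, the following single-qubit NumPy sketch simulates random Pauli-basis measurements; each snapshot inverts the measurement channel via \(\mathcal{M}^{-1}(X)=3X-\operatorname{tr}(X)\mathbb{I}\), so \(3U^{\dagger}|b\rangle\langle b|U-\mathbb{I}\) averages to \(\rho\). The test state and sample count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S_dag = np.array([[1, 0], [0, -1j]])
bases = [H, H @ S_dag, I2]                      # rotate to X-, Y-, Z-basis

rho = np.array([[0.7, 0.2 + 0.1j],              # an arbitrary valid state
                [0.2 - 0.1j, 0.3]])

def snapshot():
    U = bases[rng.integers(3)]
    p0 = np.real((U @ rho @ U.conj().T)[0, 0])  # Born probability of b = 0
    b = 0 if rng.random() < p0 else 1
    ket = U.conj().T[:, [b]]                    # U^dag |b> as a column
    return 3 * (ket @ ket.conj().T) - I2        # inverted-channel snapshot

est = np.mean([snapshot() for _ in range(20000)], axis=0)
print(np.round(est, 2))                          # close to rho
```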
Loss functions.In this paper, we also refer to a variety of different loss functions that are used in different machine learning tasks. Given \(d\) data points with ground truth \(\{y_{i}\}_{i=1}^{d}\) and predicted values \(\{\hat{y}_{i}\}_{i=1}^{d}\), where \(\forall i,y_{i},\hat{y}_{i}\in\mathbb{R}\), the root-mean-square error (RMSE) loss is defined as follows: \(\mathcal{L}_{\mathrm{RMSE}}=\sqrt{\frac{1}{d}\sum_{i=1}^{d}(y_{i}-\hat{y}_{i})^ {2}}\), while the mean absolute error (MAE) loss is defined by \(\mathcal{L}_{\mathrm{MAE}}=\frac{1}{d}\sum_{i=1}^{d}|y_{i}-\hat{y}_{i}|\). In the case where the ground truth is binary, i.e. \(\forall i,y_{i}\in\{0,1\}\) and predictions \(\hat{y}_{i}\in[0,1]\), the binary cross-entropy (BCE) loss is defined as \(\mathcal{L}_{\mathrm{BCE}}=\frac{1}{d}\sum_{i=1}^{d}-y_{i}\log(\hat{y}_{i})-( 1-y_{i})\log(1-\hat{y}_{i})\).
Furthermore, given \(A\in\mathbb{C}^{N\times N},b\in\mathbb{C}^{N}\), we have a task to find a solution vector \(x\in\mathbb{C}^{N}\) for linear system \(Ax=b\) with quantum computation. Constructing estimator \(\hat{x}\) of \(x\), one can define the Hamiltonian loss as \(\langle\hat{x}|A^{\dagger}(\mathbb{I}-|b\rangle\langle b|)A|\hat{x}\rangle.\)[13; 20]
## III The post-variational method
Problem setting.We are given a dataset, \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{d}\), where \(d\) is the number of data, the feature vectors are \(x_{i}\in\mathbb{R}^{\ell}\), with \(\ell\) being the number of features, and the labels \(y_{i}\in\mathbb{R}\). Our task is to learn an estimator \(\mathcal{E}_{\vartheta}(x_{i})\) via parameterized neural networks with parameters \(\vartheta\) that make use of quantum circuits such that \(\hat{y}_{i}:=\mathcal{E}_{\vartheta}(x_{i})\). We aim to minimize the difference between the estimator \(\hat{y}_{i}\) and ground truth \(y_{i}\) over all data with a given loss function \(\mathcal{L}\).
Variational methods.Variational quantum algorithms have been regarded as the analogue to neural networks in quantum systems, and are also referred to as quantum neural networks (QNNs) when applied to machine learning tasks. Referring to a class of circuit-centric variational quantum algorithms operating on pure states [21; 22], such algorithms operate by first encoding data \(x\) into a \(n\)-qubit quantum state \(\rho(x)\in\mathbb{C}^{2^{n}\times 2^{n}}\). The quantum state is then transformed by an Ansatz \(U(\theta)\), with parameters \(\theta\in\mathbb{R}^{k}\), to form a trial quantum state
\[\rho_{\mathrm{trial}}(\theta,x)=U(\theta)\rho(x)U^{\dagger}(\theta). \tag{1}\]
We can then construct the estimator with an observable \(O\) such that
\[\mathcal{E}_{\theta}(x):=\mathrm{tr}(O\rho_{\mathrm{trial}}(\theta,x)). \tag{2}\]
The parameters \(\theta\) are optimized by evaluating gradients of the quantum circuit via parameter-shift rules [4; 23] and calculating updates of the parameter via gradient-based optimization on classical computers.
Apart from the simple construction of variational algorithms stated above, which uses a single layer of data encoding followed by a variational Ansatz, there are variational algorithms that make use of alternating data encoding layers and parameterized Ansatze, also known as data re-uploading models [24]. Such models are shown to have an exact mapping to the simpler construction of variational algorithms as shown above [25], albeit with an exponential increase in qubits. In our work, as we do not specify the data encoding layer, we discuss the simpler construction by Schuld _et al._[21] as a general structure that encompasses the mapped versions of data re-uploading models as well.
The post-variational method.Inspired by the CQS approach [13], we propose two _post-variational methods_ and a hybrid version of them. For the first method, we take classical combinations of various fixed quantum Ansatze in replacement of a single parameterized Ansatz.
Figure 1: A sketch of quantum algorithm regimes. The noisy intermediate quantum regime (NISQ) was defined by Preskill [5] to be a certain range for both qubit numbers and gate errors (hence a rectangle in this graphic). Regimes with only higher qubits or only lower errors are outside NISQ as the other metric (error rate or qubit number, respectively) is not sufficient to harness the additional resources. Variational algorithms are the main candidates for the NISQ regime and post-variational algorithms reside in the higher-qubit part of NISQ, but avoid barren plateaus and have certain provable guarantees. Fault-tolerant algorithms reside in a range of a large number of qubits and low errors.
For the second method, this idea can be generalized to _classical combinations of quantum observables_ (CQO) by combining the Ansatz \(U(\theta)\) and observable \(O\) into a single parameterized observable \(\mathcal{O}(\theta)\) and replacing this observable with a collection of predefined trial observables \(\mathcal{O}_{1},\mathcal{O}_{2},\cdots,\mathcal{O}_{m}\). For both cases, measurement results on the quantum circuits are then combined classically, where the optimal weights of each measurement is computed via classical neural networks over a convex optimization problem. See Figure 2 for a high-level sketch of the idea behind post-variational methods.
_Classical combinations of quantum observables._ Starting from our variational observable \(O\), we can combine the parameterized Ansatz with the observable to obtain a parameterized observable \(\mathcal{O}(\theta):=U^{\dagger}(\theta)OU(\theta)\). The expectation values are unchanged, as \(\operatorname{tr}(O\rho_{\mathrm{trial}}(\theta,x))=\operatorname{tr}(\mathcal{O}(\theta)\rho(x))\). Therefore, instead of optimizing over parameterized trial quantum states, we can optimize over a parameterized observable to achieve the same effect. As any observable can be expressed as a linear combination of Hermitian operators, one can express the observable \(\mathcal{O}(\theta)\) as a linear combination weighted by functions \(\mathcal{F}_{j}:\mathbb{R}^{k}\rightarrow\mathbb{R}\) of \(\theta\), such that
\[\mathcal{O}(\theta)=\sum_{j=1}^{M}\mathcal{F}_{j}(\theta)\mathcal{O}_{j}, \tag{3}\]
where \(M\) is upper bounded by \(4^{n}\). We leave the construction of such decompositions to Appendix A.
Noting that the estimator can be written as follows, \(\mathcal{E}_{\theta}(x)=\operatorname{tr}(\mathcal{O}(\theta)\rho(x))=\sum_{ j=1}^{M}\mathcal{F}_{j}(\theta)\operatorname{tr}(\mathcal{O}_{j}\rho(x))\), we can model such systems by considering the entire system as a function \(\mathcal{H}:\mathbb{R}^{k}\times\mathbb{R}^{M}\rightarrow\mathbb{R}\) of \(\theta\) and the \(M\) terms of \(\operatorname{tr}(\mathcal{O}_{j}\rho(x))\) such that
\[\mathcal{E}_{\theta}(x)=\mathcal{H}_{\theta}(\operatorname{vec}(\{ \operatorname{tr}(\mathcal{O}_{j}\rho(x))\}_{j=1}^{M})). \tag{4}\]
By the universal approximation theorem [26], we can approximate the function \(\mathcal{H}_{\theta}\) classically through a neural network model \(\mathcal{G}_{\alpha}\) parameterized by classical parameters \(\alpha\), such that
\[\mathcal{E}_{\theta}(x)\approx\mathcal{G}_{\alpha}(\operatorname{vec}( \{\operatorname{tr}(\mathcal{O}_{j}\rho(x))\}_{j=1}^{M}))=\mathcal{E}_{\alpha} (x). \tag{5}\]
Under this framework, the estimator can be further extended to simulate non-linear systems.
In simple linear cases, a concrete procedure for creating classical combinations of quantum observables is as follows. We make a first approximation (I) and consider only a subset \(\mathcal{S}\), of size \(|\mathcal{S}|=m\), of the trial observables. This leads to
\[\mathcal{O}(\theta)\approx\sum_{j:\mathcal{O}_{j}\in\mathcal{S}}\mathcal{F}_{ j}(\theta)\mathcal{O}_{j}. \tag{6}\]
We then make our second approximation (II) and consider all the functions independently. Each function \(\mathcal{F}_{j}(\theta)\) is considered as a classical parameter \(\alpha_{j}\in\mathbb{R}\). This leads to
\[\mathcal{O}(\theta)\rightarrow\mathcal{O}(\alpha):=\sum_{j:\mathcal{O}_{j}\in \mathcal{S}}\alpha_{j}\mathcal{O}_{j}. \tag{7}\]
Hence, we obtain a linear combination of observables, which, assuming the approximations are chosen judiciously, will contain much of the initial complexity.
Figure 2: High-level sketch of post-variational strategies for near-term quantum computing. The variational method uses parameterized quantum circuits to transform embedded input data before suitable measurements, see panel a). Post-variational methods, see panel b), use multiple fixed circuits, which may share similar circuit structure as the variational circuits, and multiple measurements of these circuits. The goal is to achieve approximately similar accuracy as the variational methods, while only performing optimization of classical parameters and retaining the power of quantum embeddings.
Comparisons with CQS. The CQS approach [13] was originally proposed to solve linear systems of equations \(Ax=b\), where the solution \(x\) can be found by taking the Moore-Penrose inverse of \(A\), such that \(x=A^{+}b\). With the variational approach, \(A^{+}\) is modeled by a variational Ansatz \(U(\theta)\), while the CQS approach takes classical combinations of fixed "problem-inspired" Ansatze \(U_{1},U_{2},\cdots,U_{m_{\text{CQS}}}\) generated from the matrix \(A\) in conjunction with an Ansatz tree. An estimator \(\hat{A}^{+}\) of \(A^{+}\) is constructed such that \(\hat{A}^{+}=\sum_{i^{\prime}=1}^{m_{\text{CQS}}}\gamma_{i^{\prime}}U_{i^{\prime}}\), where \(\gamma_{i^{\prime}}\in\mathbb{R}_{>0}\).
We note that the CQS approach can be viewed as a problem-inspired analogue to the post-variational method when viewing the problem under the Hamiltonian loss as opposed to the MSE loss used by Huang _et al._[13]. In particular, we see that the loss term can be formulated as follows:
\[\mathcal{L}_{\text{Hamiltonian, CQS}}\] \[=\operatorname{tr}\left(\left(\sum_{i^{\prime}=1}^{m_{\text{CQS }}}\gamma_{i^{\prime}}U_{i^{\prime}}^{\dagger}\right)O\left(\sum_{j^{\prime}= 1}^{m_{\text{CQS}}}\gamma_{j^{\prime}}U_{j^{\prime}}\right)|b\rangle\langle b|\right) \tag{8}\] \[=\sum_{i^{\prime}=1}^{m_{\text{CQS}}}\gamma_{i^{\prime}}^{2} \operatorname{tr}(U_{i^{\prime}}^{\dagger}OU_{i^{\prime}}|b\rangle\langle b|)\] \[\quad+\sum_{i^{\prime}=1}^{m_{\text{CQS}}}\sum_{j^{\prime}\neq i ^{\prime}}\gamma_{i^{\prime}}\gamma_{j^{\prime}}\operatorname{tr}\left(\frac{ (U_{i^{\prime}}^{\dagger}OU_{j^{\prime}})+(U_{j^{\prime}}^{\dagger}OU_{i^{ \prime}})}{2}|b\rangle\langle b|\right), \tag{9}\]
where \(O=A^{\dagger}(\mathbb{I}-|b\rangle\langle b|)A\).
Collecting the terms and corresponding coefficients of \(U_{i^{\prime}}^{\dagger}OU_{i^{\prime}}\) and \(\frac{1}{2}((U_{i^{\prime}}^{\dagger}OU_{j^{\prime}})+(U_{j^{\prime}}^{\dagger }OU_{i^{\prime}}))\) into predefined trial observables \(\mathcal{O}_{j}\) and their corresponding coefficients \(\alpha_{j}\in\mathbb{R}\), we find that we can write the Hamiltonian loss of the CQS approach as the mean-absolute error (MAE) loss of the post-variational approach as
\[\mathcal{L}_{\text{Hamiltonian, CQS}} =\sum_{j=1}^{m}\alpha_{j}\operatorname{tr}(\mathcal{O}_{j}|b \rangle\langle b|) \tag{10}\] \[=\left|0-\sum_{j=1}^{m}\alpha_{j}\operatorname{tr}(\mathcal{O}_ {j}|b\rangle\langle b|)\right|\] (11) \[=\mathcal{L}_{\text{MAE, Post-Variational}} \tag{12}\]
where we note that \(0\) serves as the ground truth of the MAE loss as \(\langle x|A^{\dagger}(\mathbb{I}-|b\rangle\langle b|)A|x\rangle=0\). Here, \(m=m_{\text{CQS}}^{2}\) by counting the different terms.
## IV Design principles of post-variational quantum circuits
We now discuss multiple possible heuristic strategies for deciding on the trial observables \(\mathcal{O}_{j}\) and constructing the circuits for our post-variational algorithm, so as to minimize operations on quantum computers. See Figure 3 for an overview of the heuristic strategies. Recall that we construct a post-variational algorithm by ensembling multiple trial observables, such that the final target observable can be learned by combining the measurement results with a learned function:
\[\mathcal{E}_{\theta}(x)\approx\mathcal{E}_{\alpha}(x)=\mathcal{G}_{\alpha}( \operatorname{vec}(\{\operatorname{tr}(\mathcal{O}_{j}\rho(x))\}_{j:\mathcal{ O}_{j}\in\mathcal{S}})), \tag{13}\]
where for linear cases
\[\mathcal{G}_{\alpha}(\operatorname{vec}(\{\operatorname{tr}(\mathcal{O}_{j} \rho(x))\}_{j:\mathcal{O}_{j}\in\mathcal{S}}))=\sum_{j:\mathcal{O}_{j}\in \mathcal{S}}\alpha_{j}\operatorname{tr}(\mathcal{O}_{j}\rho(x)). \tag{14}\]
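As an illustration of Eq. (14), the following sketch computes the features \(\operatorname{tr}(\mathcal{O}_{j}\rho(x))\) with fixed, Ansatz-free circuits and combines them with classical weights; the specific trial observables chosen here are an assumption made for the example.

```python
import pennylane as qml
import numpy as np

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

# Predefined trial observables O_1, ..., O_m (an illustrative choice).
trial_obs = [qml.PauliZ(0), qml.PauliZ(1), qml.PauliX(0) @ qml.PauliX(1)]

@qml.qnode(dev)
def features(x):
    # Prepare rho(x) only; there is no parameterized Ansatz to train.
    qml.AngleEmbedding(x, wires=range(n_qubits))
    # One expectation value tr(O_j rho(x)) per trial observable.
    return [qml.expval(O) for O in trial_obs]

alpha = np.array([0.5, -0.2, 0.1])  # classical weights, to be learned
x = np.array([0.3, 0.7])
estimate = float(alpha @ np.array(features(x)))  # sum_j alpha_j tr(O_j rho(x))
```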
Ansatz expansion. The first strategy for constructing post-variational algorithms is to begin with a variational algorithm and replace the parameterized Ansatz \(U(\theta)\) with an ensemble of \(p\) fixed Ansatze \(\{U_{a}\}_{a=1}^{p}\). Huang _et al._[13]'s CQS approach uses this strategy to solve linear systems, generating the fixed problem-inspired Ansatze through the use of an Ansatz tree.
For quantum machine learning tasks, such problem-inspired Ansatze may not be readily available, hence we generate the fixed Ansatze from a problem-agnostic Ansatz. Starting from the initialized parameters \(\theta^{(0)}\) of a parameterized Ansatz \(U(\theta)\), we can perform Taylor expansion of the Ansatz at \(\theta^{(0)}\) such that
\[U(\theta) =U(\theta^{(0)})+\sum_{u=1}^{k}\Delta\theta_{u}(\partial_{\theta_{u}}U )(\theta^{(0)})\] \[+\sum_{u=1}^{k}\sum_{v=1}^{k}\frac{1}{2!}\Delta\theta_{u}\Delta \theta_{v}(\partial_{\theta_{u}}\partial_{\theta_{v}}U)(\theta^{(0)})+\cdots, \tag{15}\]
and combine the gradient, Hessian, and higher-order derivatives of the original Ansatz classically. See Figure 4 for a representation of such quantum circuits.
For implementations on quantum hardware, it may be difficult to take the direct expansion of the Ansatz itself, hence we instead take the derivatives of the parameterized observable \(\mathcal{O}(\theta)\) via parameter-shift rules [4, 23], which can also be extended to higher-order derivatives [27, 28]. However, the order of derivatives we can take is limited, as the number of quantum circuits required to obtain a high-order derivative via parameter-shift rules scales exponentially with the order of the derivative.
Note that if the initial parameter \(\theta^{(0)}\) falls within a barren plateau, taking higher-order derivatives does not help with the escape from the barren plateau [29]; hence we rely instead on suitable initializations of \(\theta^{(0)}\) to avoid such problems [8]. Such initializations guarantee non-triviality of gradients at the initial point and can be extended to higher-order derivatives, making these strategies more useful in the post-variational setting than in variational settings, as such guarantees cannot be extended to later stages of the gradient descent process.
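As a sketch of the first-order version of this strategy, the function below builds the expansion features from evaluations of a variational estimator (such as the `estimator` QNode sketched earlier) at parameter-shifted versions of \(\theta^{(0)}\); it assumes one-parameter Pauli-rotation gates, for which the shift of \(\pi/2\) applies.

```python
import numpy as np

def expansion_features(estimator, theta0, x, shift=np.pi / 2):
    """Zeroth- and first-order Taylor features of the estimator at theta0."""
    feats = [estimator(theta0, x)]  # the circuit at the initial point itself
    flat = np.asarray(theta0).ravel()
    for u in range(flat.size):
        e = np.zeros_like(flat)
        e[u] = shift
        plus = estimator((flat + e).reshape(np.shape(theta0)), x)
        minus = estimator((flat - e).reshape(np.shape(theta0)), x)
        # Parameter-shift rule: each derivative is itself a combination of
        # two fixed (non-trainable) circuit evaluations.
        feats.append(0.5 * (plus - minus))
    return np.array(feats)
```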
Observable construction. In contrast to the Ansatz expansion strategy, where we generate fixed Ansatze either through Ansatz trees or Taylor expansions, in the
observable construction strategy, we take the CQO strategy discussed in the previous section at face value, decomposing the parameterized observable \(\mathcal{O}(\theta)\) against the basis of quantum observables (namely, Paulis), such that \(\mathcal{O}(\theta)\rightarrow\mathcal{O}(\alpha)=\sum_{P\in\{\mathcal{I}, \mathcal{X},\mathcal{Y},\mathcal{Z}\}^{\otimes n}}\alpha_{P}P\). See Figure 5 for a representation of such quantum circuits.
However, such a method scales exponentially with the number of qubits used in the system, creating an analogue of the barren plateau. Further heuristic selection of observables is required to prevent such exponential scaling. We find that considering all Pauli observables within a certain locality \(\mathcal{K}\) is a good heuristic, given that most physical Hamiltonians are local. Furthermore, recent work by Huang _et al._[30] shows that this truncation by locality of Pauli observables, which they refer to as "low-weight approximation", is a provably good surrogate for the unknown observable-to-be-learned under the circumstance that the quantum state \(\rho\) to which the observable is applied is sampled from distributions that are invariant under single-qubit Clifford gates. They further show that these low-weight approximations require only a quasi-polynomial number of samples, in relation to the number of qubits \(n\), to learn.
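For concreteness, the following sketch enumerates the \(\mathcal{K}\)-local Pauli strings that would serve as the trial observable set; the count it produces matches the combinatorial count \(\sum_{\ell=0}^{\mathcal{K}}\binom{n}{\ell}3^{\ell}\) used in the error analysis later.

```python
from itertools import combinations, product

def k_local_paulis(n, K):
    """All n-qubit Pauli strings acting non-trivially on at most K qubits."""
    paulis = []
    for l in range(K + 1):
        for sites in combinations(range(n), l):       # which qubits are non-identity
            for letters in product("XYZ", repeat=l):  # which Pauli on each site
                s = ["I"] * n
                for q, p in zip(sites, letters):
                    s[q] = p
                paulis.append("".join(s))
    return paulis

obs = k_local_paulis(n=4, K=2)  # 1 + 4*3 + 6*9 = 67 trial observables
```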
Under the circumstance that the target observable \(\mathcal{O}\) is \(\mathcal{K}\)-local, we can employ the classical shadows protocol [16] to reduce the quantum measurements needed on the quantum computer while achieving the same additive error term \(\epsilon\) determined by the loss term. For each data sample \(x\) such that we can prepare a quantum state \(\rho(x)\), we can obtain a series of classical shadows of the quantum state \(\hat{\rho}(x)\) that can be stored classically such that we can estimate the values of \(\mathrm{tr}\left(P\rho(x)\right)\) where \(P\in\{\mathcal{I},\mathcal{X},\mathcal{Y},\mathcal{Z}\}^{\otimes n}:|P|\leq \mathcal{K}\). The classical shadows protocol is able to reduce the number of measurements required for all values for \(\mathrm{tr}(P\rho(x))\) from a polynomial dependency of the number of qubits to a logarithmic dependency. We discuss this further in Section VI.
The hybrid strategy. When taking the strategy of observable construction, one may additionally want to use parameterized Ansatz quantum circuits. Hence, we discuss a simple hybrid strategy that combines both the
Figure 4: Ansatz expansion approach to post-variational algorithm construction. Starting from a variational Ansatz, multiple non-parameterized quantum circuits are constructed by Taylor expansion of the Ansatz around a suitably chosen initial setting of the parameters \(\theta_{0}\). The different circuits are linearly combined with classical coefficients that are optimized via convex optimization.
Figure 5: Observable construction approach to post-variational algorithm construction. A variational observable can be directly constructed by an ensemble of Pauli observables, and may serve as a potential approximation under locality restrictions.
Figure 3: Here we provide an overview of the various strategies introduced in the text that follows. Using the variational circuit (a) as a baseline, the Ansatz expansion approach (b) performs model approximation by directly expanding the parameterized Ansatz into an ensemble of fixed Ansätze. On the other hand, the observable construction approach (c) foregoes all usage of an Ansatz and directly constructs a measurement observable by ensembling and taking classical combinations of various predefined trial observables. The hybrid approach (d) does both, expanding the Ansatz, albeit only on the shallower layers of the model, and replacing the deeper layers of the model as well as the measurement observable with a series of trial observables whose measurements can be classically combined.
usage of Ansatz expansion and observable construction. See Figure 6 for a representation of such quantum circuits.
Recall the definition of the parameterized observable, \(\mathcal{O}(\theta)=U^{\dagger}(\theta)OU(\theta)\). Instead of expanding \(U(\theta)\) directly, we split the Ansatz \(U(\theta)\) into two unitaries (based on cutting the circuit at a certain depth, for example), such that \(U(\theta)=U_{B}(\theta_{B})U_{A}(\theta_{A})\). We can now write \(\mathcal{O}(\theta)=U_{A}^{\dagger}(\theta_{A})U_{B}^{\dagger}(\theta_{B})OU_{B}(\theta_{B})U_{A}(\theta_{A})\). We denote \(\mathcal{O}^{\prime}(\theta_{B})=U_{B}^{\dagger}(\theta_{B})OU_{B}(\theta_{B})\). Using the observable construction approach applied to \(\mathcal{O}^{\prime}(\theta_{B})\) and Ansatz expansion applied to \(U_{A}(\theta_{A})\), we can lower the depth of the circuit to minimize complexity and barren plateaus, while keeping enough of the Ansatz to retain problem-related design advantages as well as the ability to approximate variational models that act globally on all qubits.
We note that barren plateaus occur even in shallow Ansatze if the observable employed in the variational algorithm is global [31]. Since the derivatives of the Ansatze measured on global observables evaluate to trivially small values that vanish exponentially in the total number of qubits \(n\), we can directly prune such measurements and use only local observables to approximate \(\mathcal{O}^{\prime}(\theta_{B})\). With the observables restricted to local ones, we can once again employ the classical shadows method to reduce measurements.
## V Architecture design of post-variational neural networks
Suppose that the target observable \(\mathcal{O}(\theta)\) obtained in a variational algorithm can be approximated with a predefined collection of trial observables \(\mathcal{S}=\{\mathcal{O}_{1},\mathcal{O}_{2},\cdots,\mathcal{O}_{m}\}\), such that \(\mathcal{E}_{\theta}(x)=\operatorname{tr}(\mathcal{O}(\theta)\rho(x))\approx \sum_{j:\mathcal{O}_{j}\in\mathcal{S}}\alpha_{j}\operatorname{tr}(\mathcal{O} _{j}\rho(x))\). The optimal values of the coefficients \(\alpha_{1},\alpha_{2},\cdots,\alpha_{m}\) are then found by classical optimization with perceptrons or neural networks. Following the idea of neural networks, we refer to each individual quantum circuit, with its fixed Ansatz and observable, as a quantum neuron [32].
**Definition V.1**.: _We define a \((p,q)\)-hybrid strategy as a strategy with a total of \(p\) Ansatze \(\{U_{a}\}_{a=1}^{p}\) and \(q\) observables \(\{O_{b}\}_{b=1}^{q}\), such that the output of each circuit for any input \(\rho\) is \(\operatorname{tr}(\mathcal{O}_{(a,b)}\rho)=\operatorname{tr}(U_{a}^{\dagger} O_{b}U_{a}\rho)\)._
As the outcomes of quantum neuron measurements are probabilistic, multiple measurements need to be conducted in order to produce a good estimate of each neuron's output.
We define matrix \(Q\in\mathbb{R}^{d\times m}\) such that
\[Q_{ij}:=\operatorname{tr}(\mathcal{O}_{j}\rho(x_{i})), \tag{16}\]
where \(\{\mathcal{O}_{j}\}_{j=1}^{m}\) is the collection of observables produced by a \((p,q)\)-hybrid strategy and \(m=pq\). We use \(Q\) as the input to a classical linear regression model (Figure 7). We minimize the root-mean-square error as follows
\[\mathcal{L}_{\text{RMSE}} =\sqrt{\frac{1}{d}\sum_{i=1}^{d}\left(y_{i}-\sum_{a=1}^{p}\sum_{b =1}^{q}\alpha_{a,b}\operatorname{tr}(\mathcal{O}_{(a,b)}\rho(x_{i}))\right)^{2}} \tag{17}\] \[=\frac{1}{\sqrt{d}}\|Y-Q\alpha\|_{2}, \tag{18}\]
where \(Y=(y_{1},y_{2},\cdots,y_{d})^{\intercal}\). The closed-form solution to the linear regression problem is \(\alpha=Q^{+}Y\); in the special case where \(Q\) has full column rank, \(Q^{\intercal}Q\) is positive definite and we obtain \(\alpha=(Q^{\intercal}Q)^{-1}Q^{\intercal}Y\).
Here, we use a simple linear regression model as the main classical model. However, it can be extended to any classical model, including feed-forward models; we discuss linear regression in detail only because it allows easy analysis with closed-form solutions. Furthermore, the results can be extended to classification problems by adding a sigmoid or softmax function at the output.
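A minimal numerical sketch of this closed-form fit, with a random stand-in for the measured matrix \(Q\):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 100, 8                    # data points and quantum neurons
Q = rng.uniform(-1, 1, (d, m))   # stand-in for measured values tr(O_j rho(x_i))
Y = rng.uniform(-1, 1, d)        # regression targets

alpha = np.linalg.pinv(Q) @ Y    # Moore-Penrose solution alpha = Q^+ Y
rmse = np.sqrt(np.mean((Y - Q @ alpha) ** 2))
```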
Figure 7: Classical linear regression approach of the hybrid model for a \((p,q)\)-hybrid strategy. Given a set of operators \(\mathcal{O}_{(i,j)}\), \(i\in[p]\) and \(j\in[q]\), and a data embedding \(\rho(x)\), a simple linear quantum neural network can be constructed by combining the outputs linearly using combination parameters \(\alpha_{ij}\). This yields the output label \(\hat{y}\).
Figure 6: Hybrid approach to post-variational algorithm construction. This approach is a combination of the approaches shown in Figures 4 and 5.
## VI Error analysis of post-variational neural networks
We have designed post-variational neural networks above with a simple linear regression model as the classical ensemble model for the quantum neurons. However, measurement outcomes in quantum systems are probabilistic, and hence estimated quantities come with errors that may be carried and propagated throughout the entire neural network system. In this section, we discuss the effects of such errors, as well as the number of measurements required to achieve a target final error.
_Estimation errors._ We first show a simple result for the total number of measurements needed for all terms \(\operatorname{tr}(\mathcal{O}_{(a,b)}\rho(x_{i}))\).
**Proposition VI.1** (Measurements needed for direct estimation of all quantum neurons).: _Consider a \((p,q)\)-hybrid strategy as in Definition V.1. Using the sample mean over multiple iterations as an estimator to evaluate the output of each of the \(m=pq\) quantum neurons, the number of quantum measurements required to estimate the outputs of all quantum neurons over all \(d\) data points, such that all \(m\times d\) outputs have an additive error of \(\epsilon_{H}\) with probability \(1-\delta\), falls within_
\[O\left(\frac{md}{\epsilon_{H}^{2}}\log\frac{md}{\delta}\right).\]
The proof is an application of the Hoeffding bound [33] and the union bound [34], and is shown in Appendix B. Alternatively, one can reduce the total number of measurements by estimating the outputs of all quantum neurons that share the same fixed Ansatz over a single data point, utilizing the classical shadows protocol.
**Proposition VI.2** (Measurements needed for shadow estimation of all quantum neurons).: _Consider a \((p,q)\)-hybrid strategy as in Definition V.1. Using classical shadows to estimate the outputs of all \(m=pq\) quantum neurons over all \(d\) data points, the number of quantum measurements required such that all \(m\times d\) outputs have an additive error of \(\epsilon_{H}\) with probability \(1-\delta\) falls within_
\[O\left(\frac{pd}{\epsilon_{H}^{2}}\max_{k\in[q]}\|O_{k}\|_{S}^{2}\log\frac{md }{\delta}\right).\]
The proof is via an application of the bounds for the median-of-means estimator [19] and the union bound [34], and is shown in Appendix B.
Relating back to the design principles, we note that the classical shadows protocol does not help with the Ansatz expansion approach, as \(q=1\) and \(m=p\), and instead increases the complexity by a factor of \(\|O\|_{S}^{2}\). For the observable construction and hybrid approaches, we now consider the case where the observables are the complete set of \(\mathcal{K}\)-local Paulis. The number of observables \(q\) is then
\[\sum_{\ell=0}^{\mathcal{K}}\binom{n}{\ell}3^{\ell}\in O(3^{\mathcal{K}}n^{ \mathcal{K}}). \tag{19}\]
As shown in [16], the shadow norm \(||O||_{S}^{2}\) is upper bounded by \(3^{\mathcal{K}}\). Hence, the total number of measurements needed for the classical shadows protocol is then
\[O\left(\frac{3^{\mathcal{K}}\mathcal{K}pd}{\epsilon_{H}^{2}}\log\frac{np}{ \delta}\right), \tag{20}\]
a reduction from direct measurements, which take
\[O\left(\frac{3^{\mathcal{K}}n^{\mathcal{K}}\mathcal{K}pd}{\epsilon_{H}^{2}}\log \frac{np}{\delta}\right), \tag{21}\]
where \(n\) is the number of qubits.
_Error propagation through neural networks._ Recall that we define \(Q\) such that \(Q_{ij}=\operatorname{tr}(\mathcal{O}_{j}\rho(x_{i}))\) from Equation 16. Given that each element in \(Q\) is the expected outcome of a quantum neuron, we construct \(\hat{Q}\) such that \(\hat{Q}_{ij}\) is the estimated outcome of the quantum neuron that falls within an additive error of \(\epsilon_{H}\) by multiple direct measurements in Proposition VI.1 or by utilizing classical shadow estimations in Proposition VI.2. Formally, \(\forall i,j\quad|\hat{Q}_{ij}-Q_{ij}|\leq\epsilon_{H}\). We now consider the effect of such errors on the loss function \(\mathcal{L}\) that we use for machine learning problems.
For linear regression, we consider the RMSE loss of \(\mathcal{L}_{\text{RMSE}}(\alpha,Q)\) from Equation 18. We then define the following optimal parameters \(\alpha^{*}\) and \(\hat{\alpha}^{*}\), for \(Q\) and \(\hat{Q}\), respectively, where
\[\alpha^{*} :=\arg\min_{\alpha}\mathcal{L}_{\text{RMSE}}(\alpha,Q), \tag{22}\] \[\hat{\alpha}^{*} :=\arg\min_{\hat{\alpha}}\mathcal{L}_{\text{RMSE}}(\hat{\alpha}, \hat{Q}). \tag{23}\]
Further, we define using only \(Q\),
\[\Delta\mathcal{L}_{\text{RMSE}}:=|\mathcal{L}_{\text{RMSE}}(\hat{\alpha}^{*},Q)-\mathcal{L}_{\text{RMSE}}(\alpha^{*},Q)|. \tag{24}\]
We then obtain the following proposition, the proof of which can be found in Appendix C.
**Theorem VI.3** (Linear regression theoretical guarantee).: _Consider a \((p,q)\)-hybrid strategy as in Definition V.1, and let \(m=pq\). Let \(Q\) be the matrix as defined in Equation 16. Let \(\epsilon>0\) and let \(\hat{Q}\) be such that_
\[\|\hat{Q}-Q\|_{\max}<\min\bigg{(}\frac{\min(\sigma_{\min}(Q), \sigma_{\min}(\hat{Q}))}{\sqrt{\min(m,d)md}},\\ \frac{\epsilon}{6\sqrt{m}\|Y\|_{2}\|Q\|_{2}\|Q^{+}\|_{2}^{2}} \bigg{)}.\]
_Then, for the loss difference as defined in Equation 24, \(\Delta\mathcal{L}_{\text{RMSE}}<\epsilon\)._
To find the number of measurements needed to obtain a loss difference within \(\epsilon\), we first set the provided upper bounds to be within \(\epsilon_{H}\) as found in Propositions VI.1 and VI.2. Assuming a scenario where \(\|Q\|_{2}\|Q^{+}\|_{2}=\frac{\|Q\|_{2}}{\sigma_{\min}(Q)}=\kappa_{Q}\in O(1)\), \(\|Y\|_{2}\in O(\sqrt{d})\), and \(\|Q\|_{2}\in\Omega(\sqrt{d})\),
we upper bound \(\epsilon_{H}^{-1}\) with respect to \(m\), \(d\) and \(\epsilon\) as follows:
\[\epsilon_{H} \geq\min\left(\frac{\min(\sigma_{\min}(Q),\sigma_{\min}(\hat{Q}))}{\sqrt{ \min(m,d)md}},\frac{\epsilon}{6\sqrt{m}\|Y\|_{2}\|Q\|_{2}\|Q^{+}\|_{2}^{2}}\right) \tag{25}\] \[\Rightarrow\frac{1}{\epsilon_{H}} \leq\max\left(\frac{\sqrt{\min(m,d)md}}{\min(\sigma_{\min}(Q), \sigma_{\min}(\hat{Q}))},\frac{6\sqrt{m}\|Y\|_{2}\|Q\|_{2}\|Q^{+}\|_{2}^{2}}{\epsilon }\right)\] (26) \[\Rightarrow\frac{1}{\epsilon_{H}} \in\max\left(O(m),O\left(\frac{\sqrt{m}}{\epsilon}\right)\right) \in O\left(\frac{m}{\epsilon}\right) \tag{27}\]
Combining the results in Propositions VI.1 and VI.2, if we want the final loss term of our hybrid quantum-classical model to have an error within \(\epsilon\), then the total number of measurements needed falls within the complexity of
\[t\in O\left(\frac{m^{3}d}{\epsilon^{2}}\log\frac{md}{\delta}\right) \tag{28}\]
for the probability of \(1-\delta\) using direct measurements, and
\[t\in O\left(\frac{m^{2}pd\max_{k\in[q]}\|O_{k}\|_{S}^{2}}{\epsilon^{2}}\log \frac{md}{\delta}\right) \tag{29}\]
using the classical shadows protocol.
Given that we have found the number of measurements needed for a \((p,q)\)-hybrid strategy for linear regression problems, we compare the number of measurements needed for different design principles for post-variational methods in Table 1.
Alternatively, we can discuss the effect of transforming the above problem into a classification task via logistic regression, which is done by simply adding a sigmoid function between the weighted sum and the final output, with \(y_{i}\in\{0,1\}\). We refer to Appendix C for this discussion.
## VII Implementation and empirical results
In this section, we discuss the implementation details of our post-variational quantum neural network in the setting of machine learning applications. We use the scikit-learn library [35] for classical preprocessing, pennylane[36] for quantum embedding generation, and pytorch[37] with pytorch-lightning[38] for training the tunable classical model.
For our experiments, we focus on tasks that require only a small number of qubits. To do so, we first compress our classical data using dimension reduction methods such as principal component analysis (PCA) [39] in order to encode the data into a quantum state. We focus our experimental results on image classification of image data as the dimension reduction of such data can be performed directly, either by utilizing classical dimension reduction techniques on data or leveraging image compression techniques such as max or average pooling.
For tasks that utilize other types of data, such as natural language processing, we expect our strategies to remain applicable, but computationally cheap classical preprocessing would have to be conducted first. For example, to generate sentence embeddings, one computationally cheap method would be to take the average pooling of the GloVe embeddings [40] of each word, and then perform dimension reduction. Hence, for simplicity, we only discuss applications to image classification in this paper.
Problem setup and implementation details. To demonstrate applicability on real-world data, we train our post-variational algorithm on a classification task based on the MNIST dataset [41], which consists of 60,000 images of size \(28\times 28\) of handwritten digits from 0 to 9. To input the data into our quantum circuit, we follow Huang _et al._[42] and reduce the dimension of the data into a smaller vector using PCA. Refer to Algorithm 1 and Figure 8 for exact preprocessing details.
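A sketch of this preprocessing pipeline (cf. Figure 8), assuming scikit-learn; the choice of \(2^{8}=256\) PCA components, matching an 8-qubit amplitude embedding, follows the experimental setup described below.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def preprocess(images, n_components=256):
    # Flatten each 28x28 image into a 784-dimensional vector.
    X = images.reshape(len(images), -1).astype(np.float64)
    # Dimension reduction to 2^8 = 256 features for an 8-qubit embedding.
    X = PCA(n_components=n_components).fit_transform(X)
    # Normalize each feature across all data.
    X = StandardScaler().fit_transform(X)
    # Rescale every sample to a unit vector for amplitude encoding, Eq. (30).
    return X / np.linalg.norm(X, axis=1, keepdims=True)
```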
We then produce post-variational embeddings of the input data by following the high-level outline of Algorithm 2 for observable construction and
Figure 8: Preprocessing steps for images. The image data is first flattened from a 2D tensor to a vector. Dimension reduction is then performed on the image vector to produce an image embedding. Normalization is then performed for each feature in the embedding across all data. Lastly, each vector is normalized into a unit vector to be encoded into a quantum state via amplitude encoding.
Algorithm 3 for Ansatz expansion and hybrid strategies. If the classical shadows protocol is used, we replace the measurement step of retrieving \(\left\langle x_{i}\right|\mathcal{O}[k]\left|x_{i}\right\rangle\) in both algorithms with a classical shadows estimation generated from a set of unitaries \(\mathcal{U}\) that are either randomly chosen [43] or found by derandomized strategies [44, 45].
In particular, we set up the quantum circuit using Schuld _et al._[21]'s variational circuit as our template, embedding the compressed data via amplitude embedding [46] such that a state \(\left|\psi(x_{i})\right\rangle\) is the direct vector representation of the normalized input in order to maintain enough information from the PCA preprocessing:
\[\left|\psi(x_{i})\right\rangle=\frac{1}{\left\|x_{i}\right\|_{2}}\left(x_{i,1} \;\;x_{i,2}\;\;\;\cdots\;\;x_{i,\ell}\right)^{\intercal}. \tag{30}\]
For the Ansatz expansion and hybrid approaches, we set the Ansatz to be Pennylane's implementation [36] of the strongly entangling layer proposed in Schuld _et al._[21] (qml.StronglyEntanglingLayers), and initialize the parameters according to the strategy given by Grant _et al._[8], which sets the initial parameters such that the Ansatz layer evaluates to the identity. This strategy ensures that the first-order gradient as well as higher-order gradients can be non-trivial, allowing the Ansatz expansion technique to avoid barren plateaus. Furthermore, as the parameterized gates evaluate to the identity, we can ensure that the hybrid approach is at least as powerful as the observable construction approach, since setting the Ansatz to the identity reduces it to the observable construction approach. We dub this Ansatz the "_Identity Origin Strongly Entangling Ansatz_".
Referring to Figure 9, we combine two layers of Schuld _et al._[21]'s Ansatz layer to form an Ansatz block that evaluates to the identity when all parameters are initialized to \(0\). This Ansatz block can be further stacked, alongside changes to the controlled gates' orientation and range, to create a deeper and more complex model; in our experiments, however, we use a single layer for simplicity and to maintain the shallowness of our circuits.
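The following PennyLane sketch illustrates the identity-origin property with a block built from one strongly entangling layer followed by the adjoint of a second, in the spirit of Grant _et al._'s identity-block construction; the precise gate layout of the block in Figure 9 may differ, so this is an illustration rather than the exact circuit.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

def entangling_layer(params):
    for w in range(n_qubits):
        qml.Rot(*params[w], wires=w)              # Rot(0, 0, 0) is the identity
    for w in range(n_qubits):                     # ring of CNOTs
        qml.CNOT(wires=[w, (w + 1) % n_qubits])

@qml.qnode(dev)
def block(params, x):
    qml.AmplitudeEmbedding(x, wires=range(n_qubits), normalize=True)
    entangling_layer(params[0])
    qml.adjoint(entangling_layer)(params[1])      # second layer, reversed
    return qml.state()

params = np.zeros((2, n_qubits, 3))               # identity-origin initialization
x = np.random.rand(2 ** n_qubits)
# At all-zero parameters the block acts as the identity on the embedded state.
assert np.allclose(block(params, x), x / np.linalg.norm(x))
```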
Experimental setup. For ease of training and benchmarking, we train our model on 2000 samples of the digits \(0\) and \(1\) in the MNIST dataset and test on 200 samples. For all experiments, the quantum embedding is the
\begin{table}
\begin{tabular}{l c c} \hline \hline \(p\) Ansätze, \(q\) observables, \(n\) qubits, \(d\) data points & Direct measurement & Classical shadows \\ \hline Ansatz expansion (\(q=1\)) & \(\mathbf{O}\left(\frac{\mathbf{p}^{3}\mathbf{d}}{\mathbf{\epsilon}^{2}}\log\frac{\mathbf{pd}}{\bm {\delta}}\right)\) & \(O\left(\frac{p^{2}d\|O\|_{S}^{2}}{\epsilon^{2}}\log\frac{pd}{\delta}\right)\) \\ Observable construction (\(p=1\)) & \(O\left(\frac{q^{3}d}{\epsilon^{2}}\log\frac{qd}{\delta}\right)\) & \(O\left(\frac{q^{2}d\max_{k\in[q]}\|O_{k}\|_{S}^{2}}{\epsilon^{2}}\log\frac{qd}{\delta}\right)\) \\ Hybrid & \(O\left(\frac{m^{3}d}{\epsilon^{2}}\log\frac{md}{\delta}\right)\) & \(O\left(\frac{m^{2}pd\max_{k\in[q]}\|O_{k}\|_{S}^{2}}{\epsilon^{2}}\log\frac{md}{ \delta}\right)\) \\ \(\mathcal{K}\)-local observable construction (\(q\in O(3^{\mathcal{K}}n^{\mathcal{K}})\)) & \(O\left(\frac{27^{\mathcal{K}}n^{3\mathcal{K}}\mathcal{K}pd}{\epsilon ^{2}}\log\frac{npd}{\delta}\right)\) & \(\mathbf{O}\left(\frac{27^{\mathcal{K}}n^{2\mathcal{K}}\mathcal{K}pd}{ \epsilon^{2}}\log\frac{npd}{\delta}\right)\) \\ \(\mathcal{K}\)-local Hybrid (\(q\in O(3^{\mathcal{K}}n^{\mathcal{K}})\)) & \(O\left(\frac{27^{\mathcal{K}}n^{3\mathcal{K}}\mathcal{K}pd}{ \epsilon^{2}}\log\frac{npd}{\delta}\right)\) & \(\mathbf{O}\left(\frac{27^{\mathcal{K}}n^{2\mathcal{K}}\mathcal{K}pd}{ \epsilon^{2}}\log\frac{npd}{\delta}\right)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Upper bounds on the number of measurements for the different design principles of post-variational methods. The better approach between direct measurement and classical shadows for each design principle is bolded. Given that the Ansatz expansion strategy does not employ multiple observables, the classical shadows method does not provide a speedup, and direct measurement should be used. For the observable construction and hybrid strategies, classical shadows provide a speedup only when the measurement observables are local.
direct amplitude embedding of the dimension-reduced image on 8 qubits. Following Schuld _et al._[21], we conduct training using \(k\)-fold cross validation [47] with \(k=5\). For the classical model, we use a simple linear classifier trained for 150 epochs with an AdamW optimizer [48] at a learning rate of \(10^{-3}\). Model selection based on the highest validation F1 score is applied to prevent overfitting. All reported results are trained on the expected values of the outputs from the quantum encoders, based on classical simulations in Pennylane; that is, we do not include measurement errors.
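A minimal PyTorch sketch of the classical head just described; the tensors `Q_train` and `y_train` stand in for the precomputed quantum-neuron outputs and binary labels and are randomly generated placeholders here.

```python
import torch
import torch.nn as nn

Q_train = torch.rand(2000, 67) * 2 - 1   # placeholder quantum-neuron outputs
y_train = torch.randint(0, 2, (2000,))   # placeholder binary labels

model = nn.Linear(Q_train.shape[1], 1)   # linear classifier over m neurons
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()         # sigmoid + binary cross-entropy

for epoch in range(150):
    optimizer.zero_grad()
    loss = loss_fn(model(Q_train).squeeze(1), y_train.float())
    loss.backward()
    optimizer.step()
```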
Experimental results. We first experiment on the effectiveness of the locality constraint in the observable construction strategy for constructing post-variational quantum circuits (Table 2). We find that with a locality of just 3, the algorithm achieves an accuracy of 96% on both validation and testing, indicating that ensembling local Pauli observables is indeed a good heuristic for the observable construction strategy, and may potentially extend beyond the distribution restrictions in Huang _et al._[30]'s theoretical guarantees.
On the other hand, we find that the hybrid method produces better training results than the observable construction strategy. Recall that, with our Ansatz design, the hybrid method is at least as powerful as the observable construction strategy. Taking only the first-order derivative into account and a locality restriction of 2 for both the hybrid and observable construction strategies, while applying a simple computational-basis measurement on the first qubit for the Ansatz expansion strategy, we note that the addition of the first-order derivative gives an 8% increase in accuracy over the measurement observable set with locality 2 (Table 3). Note that the Ansatz expansion approach here is effectively the variational algorithm with only one iteration of parameter tuning, and has roughly the same performance as random guessing. Comparing with the training results on variational algorithms obtained by Schuld _et al._[21], as noted in Li _et al._[49], we see that even with a 2-local set of Ansatze and observables, the post-variational method can achieve performance similar to that of variational algorithms by use of our hybrid strategy. We expect such results to improve as we take higher-order derivatives, observables of higher locality, and more complex neural network models.
Finally, we mention the capacity of such models for training on multiclass data, which is not directly achievable by variational algorithms as their outcomes are binary. Training on the full MNIST dataset with the same configuration, we achieve 82% accuracy with the observable construction approach at a locality of 3 (Table 4).
\begin{table}
\begin{tabular}{c c c c c c c} & \multicolumn{3}{c}{Validation} & \multicolumn{3}{c}{Testing} \\ \hline Locality & Loss & Accuracy & F1 & Loss & Accuracy & F1 \\ \hline
1 & 0.67 & 0.56 & 0.60 & 0.67 & 0.58 & 0.69 \\
2 & 0.40 & 0.83 & 0.82 & 0.42 & 0.82 & 0.82 \\
3 & 0.10 & 0.96 & 0.96 & 0.11 & 0.96 & 0.96 \\ \end{tabular}
\end{table}
Table 2: Effectiveness of locality constraints in the observable construction strategy. Empirical results show that taking an ensemble of local Paulis does indeed serve as a valid approximation heuristic, with 3-local observables achieving 96% accuracy.
\begin{table}
\begin{tabular}{c c c c c c c c c c} Class & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Macro Avg \\ \hline F1 Score &.88 &.97 &.81 &.76 &.79 &.75 &.86 &.83 &.74 &.79 &.82 \\ \end{tabular}
\end{table}
Table 4: Training results of multiclass classification. Post-variational algorithms have the additional benefit of employing a single model to conduct multi-class classification tasks, whereas variational algorithms are restricted to binary outputs.
\begin{table}
\begin{tabular}{c c c c c c c} & \multicolumn{3}{c}{Validation} & \multicolumn{3}{c}{Testing} \\ \hline Locality & Loss & Accuracy & F1 & Loss & Accuracy & F1 \\ \hline
1 & 0.67 & 0.56 & 0.60 & 0.67 & 0.58 & 0.69 \\
2 & 0.40 & 0.83 & 0.82 & 0.42 & 0.82 & 0.82 \\
3 & 0.10 & 0.96 & 0.96 & 0.11 & 0.96 & 0.96 \\ \end{tabular}
\end{table}
Table 3: Effectiveness of different design principles of post-variational neural networks. We note that even with a 2-local system, the hybrid strategy is already able to achieve results similar to variational algorithms.
Figure 9: The identity origin strongly entangling Ansatz. This Ansatz block combines two layers of Schuld _et al._[21]'s strongly entangling Ansatz so that it evaluates to the identity when all parameters are set to 0, per Grant _et al._[8]'s strategy of avoiding barren plateaus through parameter initialization.
## VIII Conclusion
The large-scale realization of quantum computing is largely limited by the development of the relevant hardware. With millions of physical qubits required to simulate thousands of logical qubits due to the need for quantum error correction codes, a current strategy lies in hybrid computation with both classical and near-term quantum computers. Variational algorithms, one of the main strategies for quantum computing in the NISQ era, are affected by the barren plateau problem, restricting the applicability of gradient-based optimization with quantum neural networks.
This paper proposes "_post-variational strategies_" on hybrid quantum-classical devices that shift tunable parameters from quantum computers to classical computers, utilizing quantum computers to create multiple weak quantum encoders whose result can be sent into classical models to produce a class of hybrid quantum-classical algorithms, while still being rooted in the theoretical basis of the variational method for finding approximations in quantum mechanics.
We have shown that post-variational strategies can be utilized with neural networks, and that, in the worst case, they require a number of duplicate measurements approximately equal to the square of the number of quantum feature maps used in order to keep the propagation of estimation errors into the classical neural network within an arbitrarily small error rate.
Furthermore, by utilizing a hybrid strategy of using both small-order expansions of parameterized Ansatze and a series of local Pauli measurements to construct quantum circuits, we can empirically show that the post-variational algorithms are indeed applicable to machine learning problems such as image classification with high accuracy.
###### Acknowledgements.
This work is modified from and was performed in the context of a Bachelor of Computing Dissertation at the National University of Singapore. The authors would like to thank Dr. Rahul Jain, Dr. Warut Suksompong, Beng Yee Gan, Naixu Guo, and Xiufan Li for their insightful comments and discussions in the review of this work. This research is supported by the National Research Foundation, Singapore, and A*STAR under its CQT Bridging Grant and its Quantum Engineering Programme under grant NRF2021-QEP2-02-P05.
## Appendix A Deconstruction of a variational Ansatz
Here we provide a simple argument to deconstruct a variational Ansatz and show that given enough terms, the post-variational algorithm can retrieve the optimum obtained by the variational Ansatz with a terminable guarantee.
As quantum circuits are specified by a sequential arrangement of quantum logic gates, the variational ansatz \(U(\theta)\) of a variational quantum circuit that is parameterized by \(\theta\in\mathbb{R}^{s}\) can be expressed as a product of unitaries such that
\[U(\theta)=U_{1}(\theta_{1})U_{2}(\theta_{2})\cdots U_{s}(\theta_{s}). \tag{10}\]
For the sake of simplicity, we discuss only one-parameter quantum logic gates for our analysis of the variational algorithm.
**Theorem A.1** (Stone's theorem on one-parameter unitary groups [50; 51]).: _A one-to-one correspondence between Hermitian operators \(H\) on a Hilbert space \(\mathcal{H}=\mathbb{C}^{2^{n}}\) and one-parameter families \((U_{t})_{t\in\mathbb{R}}\) of strongly continuous homomorphic unitaries can be shown as follows:_
\[U_{t}=e^{itH}.\]
Formally, the variational ansatz \(U(\theta)\) can be expanded using Stone's theorem as,
\[U(\theta)=W_{1}e^{i\theta_{1}H_{1}}V_{1}W_{2}e^{i\theta_{2}H_{2}}V_{2}\cdots W _{s}e^{i\theta_{s}H_{s}}V_{s}. \tag{11}\]
where \(W_{i}\) and \(V_{i}\) are fixed unitaries and \(H_{i}\) are Hermitian operators.
**Proposition A.2** (Baker-Campbell-Hausdorff identity [52]).: _Given two matrices \(X\) and \(Y\),_
\[e^{X}Ye^{-X}=\sum_{n=0}^{\infty}\frac{[(X)^{n},Y]}{n!},\]
_where \([(X)^{n},Y]\equiv\underbrace{[X,\cdots[X,[X,Y]]}_{n\text{ times}}\) and \([(X)^{0},Y]\equiv Y.\)_
We apply this identity recursively to rewrite the Hermitian operator \(U^{\dagger}(\theta)OU(\theta)\) in polynomial terms of \(\theta_{1},\theta_{2},\cdots\theta_{s}\). Expanding the first unitary \(U_{1}\), we get
\[U_{1}^{\dagger}(\theta_{1})OU_{1}(\theta_{1})=V_{1}^{\dagger}\left(\sum_{k=0}^{ \infty}\frac{[(i\theta_{1}H_{1})^{k},W_{1}^{\dagger}OW_{1}]}{k!}\right)V_{1}= \sum_{k=0}^{\infty}\frac{\theta_{1}^{k}}{k!}V_{1}^{\dagger}[(iH_{1})^{k},W_{1}^ {\dagger}OW_{1}]V_{1}. \tag{10}\]
We note that the term \(iH_{1}\) is anti-Hermitian; hence for all \(k\geq 0\), \(V_{1}^{\dagger}[(iH_{1})^{k},W_{1}^{\dagger}OW_{1}]V_{1}\) is Hermitian. Thus, we can write \(U_{1}^{\dagger}(\theta_{1})OU_{1}(\theta_{1})\) as a weighted polynomial sum of Hermitians in \(\theta_{1}\). Plugging the result recursively into the other terms, one can obtain
\[U^{\dagger}(\theta)OU(\theta)=\sum_{0\leq a_{1},a_{2},\cdots a_{ s}\leq\infty}\frac{\theta_{1}^{a_{1}}}{a_{1}!}\frac{\theta_{2}^{a_{2}}}{a_{2}!} \cdots\frac{\theta_{s}^{a_{s}}}{a_{s}!}\\ \underbrace{V_{s}^{\dagger}[(iH_{s})^{a_{s}},W_{s}^{\dagger}V_{s -1}^{\dagger}[(iH_{s-1})^{a_{s-1}},W_{s-1}^{\dagger}V_{s-2}^{\dagger}[\cdots,V _{1}^{\dagger}[(iH_{1})^{a_{1}},W_{1}^{\dagger}OW_{1}]V_{1}W_{2}]\cdots]V_{s-1}W_ {s}]V_{s}}_{\text{Hermitian}}. \tag{11}\]
This expansion allows one to express
\[\mathcal{O}(\theta)=U(\theta)OU^{\dagger}(\theta)\approx\sum_{i}\mathcal{F}_{i }(\theta)\mathcal{O}_{i}. \tag{12}\]
This discussion shows that a deconstruction of the variational algorithm does indeed exist formally and corresponds to a linear combination of Hermitians. However, we note that there are currently infinite terms in this expression. We now show that we can express this linear combination in limited terms. Any Hermitian operator can be expressed in a basis of Pauli matrices such that
\[H\in\mathbb{C}^{2^{n}\times 2^{n}}\Rightarrow H\in\text{span}\left(\{ \mathcal{I},\mathcal{X},\mathcal{Y},\mathcal{Z}\}^{\otimes n}\right), \tag{13}\]
hence, we can state that
\[U^{\dagger}(\theta)OU(\theta)\in\text{span}\left(\{\mathcal{I},\mathcal{X}, \mathcal{Y},\mathcal{Z}\}^{\otimes n}\right), \tag{14}\]
where each coefficient is \(\text{poly}(\theta_{1},\theta_{2},\cdots\theta_{s})\).
Therefore, supposing that the variational algorithm obtains an optimal \(\theta^{*}\), the post-variational algorithm requires at most \(4^{n}\) terms to express the same optimal answer. Comparing the number of parameters tunable, we see that the variational algorithm requires \(O(\text{poly}(s))\) parameters, while the post-variational algorithm requires \(O(4^{n})\) parameters. We note that the quantum advantage of variational algorithms in terms of parameters is the ability to generate different observables on higher orders of \(\theta\), a feat that activation functions of classical computers cannot achieve, as classical activation functions can only non-linearize the \(\theta\) parameter itself without generating new observables. However, considering that the variational algorithm returns an estimation of the optimal answer, rather than the exact optimal answer, we hope to restrict the number of Hermitian terms used in the post-variational algorithm to \(O(\text{poly}(s))\) terms to achieve a similar estimation.
## Appendix B Derivation of random measurement errors
Proof of Proposition VI.1.: We represent the output of a single quantum neuron on a single data point over multiple runs as a series of i.i.d. random variables \(Z_{1},Z_{2},\cdots,Z_{t}\), each with range \([-1,1]\). By the Hoeffding bound [33], the sample mean \(\bar{Z}\) and the expectation of the above random variables satisfy \(\Pr(|\bar{Z}-E[Z]|\geq\epsilon_{H})\leq 2\exp(-\frac{t\epsilon_{H}^{2}}{2})\). Further, as each run is associated with a single data point on a single quantum neuron, we apply a union bound [34] to ensure that, over all \(m\) quantum neurons and \(d\) data points, we only have a small probability of \(|\bar{Z}-E[Z]|\geq\epsilon_{H}\). We denote by \(E_{ij}\) the event that the observation for the \(i\)th data point and \(j\)th quantum neuron has \(|\bar{Z}-E[Z]|\geq\epsilon_{H}\). We then see that \(\Pr(\cup_{i=1}^{m}\cup_{j=1}^{d}E_{ij})\leq\sum_{i=1}^{m}\sum_{j=1}^{d}\Pr(E_ {ij})=md\cdot\Pr(|\bar{Z}-E[Z]|\geq\epsilon_{H})\leq 2md\exp(-\frac{t\epsilon _{H}^{2}}{2})\leq\delta\), such that \(t\in O(\frac{1}{\epsilon_{H}^{2}}\log\frac{md}{\delta})\). We note that the measurements are duplicated across \(m\) quantum neurons and \(d\) data points, hence the total number of measurements required is \(O(\frac{md}{\epsilon_{H}^{2}}\log\frac{md}{\delta})\).
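As a small worked example of the final step, solving \(2md\exp(-t\epsilon_{H}^{2}/2)\leq\delta\) for \(t\) gives the per-neuron, per-data-point shot count below; the numerical values of \(m\), \(d\), \(\epsilon_{H}\), and \(\delta\) are illustrative.

```python
import math

def shots_per_neuron(m, d, eps_H, delta):
    # Solve 2*m*d*exp(-t * eps_H**2 / 2) <= delta for t.
    return math.ceil(2.0 / eps_H**2 * math.log(2 * m * d / delta))

t = shots_per_neuron(m=67, d=2000, eps_H=0.05, delta=0.01)  # roughly 13,700
```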
Proof of Proposition VI.2.: For simplicity, we consider the original classical shadows protocol [16] utilizing the median-of-means estimator [19]. With the median-of-means estimator, one partitions the measurements into \(s\) groups of \(t\) measurements each, taking the median of the means over the \(t\) measurements in the \(s\) groups. From the analysis of the median-of-means estimator in [16], we note that to succeed with probability \(1-\delta\), per the Hoeffding bound [33] and Chebyshev's inequality, one should set \(t\in O(\frac{\max_{k\in[q]}\|O_{k}\|_{S}^{2}}{\epsilon_{H}^{2}})\), and per the union bound, one should set \(s\in O(\log\frac{md}{\delta})\). As the classical shadows method predicts the outcomes of the \(q\) quantum neurons that share the same Ansatz and data point, the process has to be duplicated over \(d\) data points and \(p\) different Ansatze. Hence the total number of measurements needed is \(O\left(\frac{pd\max_{k\in[q]}\|O_{k}\|_{S}^{2}}{\epsilon_{H}^{2}}\log\frac{md} {\delta}\right).\)
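For reference, a minimal sketch of the median-of-means estimator used above, with synthetic \(\pm 1\) measurement outcomes standing in for single-shot results:

```python
import numpy as np

def median_of_means(samples, s):
    # Split the shots into s groups and return the median of the group means.
    groups = np.array_split(np.asarray(samples), s)
    return np.median([g.mean() for g in groups])

rng = np.random.default_rng(1)
outcomes = rng.choice([-1.0, 1.0], size=6000, p=[0.3, 0.7])  # +/-1 shot results
estimate = median_of_means(outcomes, s=10)  # concentrates around the mean 0.4
```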
## Appendix C Derivation of error propagation
In this section, we detail the derivation of the propagation of the element-wise errors of the quantum neurons through the entire network. Before the proofs, we first set up some further notation. Given a field \(\mathbb{F}\) of either real or complex numbers, for a matrix \(A\in\mathbb{F}^{M\times N}\), we denote the Frobenius norm, or Hilbert-Schmidt norm, by \(\|A\|_{F}=\sqrt{\sum_{i}\sum_{j}|A_{ij}|^{2}}=\sqrt{\sum_{i=1}^{\min(M,N)} \sigma_{i}^{2}(A)}\). We also denote the nuclear norm by \(\|A\|_{*}=\sum_{i=1}^{\min(M,N)}\sigma_{i}(A)\). Recall that \(\sigma_{i}(A)\) is the \(i\)-th singular value of \(A\).
Before we prove Theorem VI.3, we first show some preliminaries to aid the proof of our results. For matrix \(A\in\mathbb{R}^{M\times N}\), we note the following inequalities for matrix norms
\[\|A\|_{\max} \leq\|A\|_{2}\leq\|A\|_{F}\leq\sqrt{MN}\|A\|_{\max}, \tag{10}\] \[\|A\|_{F} \leq\|A\|_{*}\leq\sqrt{r}\|A\|_{F}\leq r\|A\|_{2}, \tag{11}\]
where \(r=\operatorname{rank}(A)\leq\min(M,N)\).
**Proposition C.1** (Perturbation inequality for concave functions of singular values [53, 54]).: _Given matrices \(A,B\in\mathbb{R}^{M\times N}\). Suppose that \(f:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) is a concave function where \(f(0)=0\), then we have_
\[\sum_{i=1}^{\min(M,N)}|f(\sigma_{i}(A))-f(\sigma_{i}(B))|\leq\sum_{i=1}^{\min( M,N)}f(\sigma_{i}(A-B)).\]
We use this proposition to prove the follow lemma.
**Lemma C.2** (Perturbation inequality for rank differences).: _Given matrix \(A,B\in\mathbb{R}^{M\times N}\), if \(\|A-B\|_{\max}<\min(\sigma_{\min}(A),\sigma_{\min}(B))/\sqrt{\min(M,N)MN}\), where \(\sigma_{\min}(A)\) is the smallest non-zero singular value of \(A\), then \(\operatorname{rank}(A)=\operatorname{rank}(B)\)._
Proof.: We define concave function \(f_{\mu}:\mathbb{R}_{+}\rightarrow[0,1]\) such that
\[f_{\mu}(x):=\begin{cases}\frac{x}{\mu},&0\leq x<\mu,\\ 1,&\mu\leq x.\end{cases} \tag{12}\]
Supposing that \(\mu=\min(\sigma_{\min}(A),\sigma_{\min}(B))\), we find that
\[\sum_{i=1}^{\min(M,N)}|f_{\mu}(\sigma_{i}(A))-f_{\mu}(\sigma_{i}(B))|=| \operatorname{rank}(A)-\operatorname{rank}(B)|. \tag{13}\]
Then by Proposition C.1, we see that
\[|\operatorname{rank}(A)-\operatorname{rank}(B)|\leq\sum_{i=1}^{\min(M,N)}f_{ \mu}(\sigma_{i}(A-B))\leq\sum_{i=1}^{\min(M,N)}\frac{\sigma_{i}(A-B)}{\mu}= \frac{\|A-B\|_{*}}{\min(\sigma_{\min}(A),\sigma_{\min}(B))} \tag{14}\]
Using the matrix norm inequalities, we see that
\[|\operatorname{rank}(A)-\operatorname{rank}(B)|\leq\frac{\|A-B\|_{*}}{\min(\sigma_{ \min}(A),\sigma_{\min}(B))}\leq\frac{\sqrt{\min(M,N)}\|A-B\|_{F}}{\min(\sigma_{ \min}(A),\sigma_{\min}(B))}\leq\frac{\sqrt{\min(M,N)MN}\|A-B\|_{\max}}{\min( \sigma_{\min}(A),\sigma_{\min}(B))} \tag{100}\]
Setting an upper bound of \(1\) for the above inequality, we find that if \(\frac{\sqrt{\min(M,N)MN}\|A-B\|_{\max}}{\min(\sigma_{\min}(A),\sigma_{\min}(B) )}<1\), or
\[\|A-B\|_{\max}<\frac{\min(\sigma_{\min}(A),\sigma_{\min}(B))}{\sqrt{\min(M,N) MN}}, \tag{101}\]
then \(|\operatorname{rank}(A)-\operatorname{rank}(B)|<1\). Given that ranks have positive integers values, we find that \(\operatorname{rank}(A)=\operatorname{rank}(B)\).
We also use the following proposition:
**Proposition C.3** (Perturbation theory For pseudoinverses [55]).: _The spectral norm of the difference of the pseudoinverses of two matrices \(A\in\mathbb{R}^{m\times n}\) and \(B\in\mathbb{R}^{m\times n}\) can be upper bounded as follows if \(\operatorname{rank}(A)=\operatorname{rank}(B)\):_
\[\|B^{+}-A^{+}\|_{2}\leq 2\|A^{+}\|_{2}\|B^{+}\|_{2}\|B-A\|_{2}.\]
We now put these technical results together to prove Theorem VI.3 of the main text.
Proof of Theorem VI.3.: Recall that
\[\Delta\mathcal{L}_{\mathrm{RMSE}}=|\mathcal{L}_{\mathrm{RMSE}}(\hat{\alpha}^ {*},Q)-\mathcal{L}_{\mathrm{RMSE}}(\alpha^{*},Q)|. \tag{102}\]
Expanding directly, we obtain
\[\Delta\mathcal{L}_{\mathrm{RMSE}} =\frac{1}{\sqrt{d}}\|\|Y-Q\hat{\alpha}^{*}\|_{2}-\|Y-Q\alpha^{*}\| _{2}| \tag{103}\] \[(\text{reverse triangle}) \leq\frac{1}{\sqrt{d}}\|Q(\hat{\alpha}^{*}-\alpha^{*})\|_{2}\leq \frac{1}{\sqrt{d}}\|Q\|_{2}\|\hat{\alpha}^{*}-\alpha^{*}\|_{2}\] (104) \[=\frac{1}{\sqrt{d}}\|Q\|_{2}\|\hat{Q}^{+}Y-Q^{+}Y\|_{2}\leq\frac{ 1}{\sqrt{d}}\|Q\|_{2}\|Y\|_{2}\|\hat{Q}^{+}-Q^{+}\|_{2}. \tag{105}\]
With \(0<\epsilon_{0}<1\), we conduct enough measurements such that \(\|\hat{Q}-Q\|_{\max}<\epsilon_{0}\frac{\min(\sigma_{\min}(Q),\sigma_{\min}(\hat{Q}) )}{\sqrt{\min(m,d)md}}\), hence by Lemma C.2, \(\operatorname{rank}(Q)=\operatorname{rank}(\hat{Q})\). Then by Proposition C.3, we obtain
\[\|\hat{Q}^{+}-Q^{+}\|_{2} \leq 2\|Q^{+}\|_{2}\|\hat{Q}^{+}\|_{2}\|\hat{Q}-Q\|_{2}\leq 2\|Q^{+} \|_{2}\left(\|Q^{+}\|_{2}+\|\hat{Q}^{+}-Q^{+}\|_{2}\right)\|\hat{Q}-Q\|_{2} \tag{106}\] \[\Rightarrow\|\hat{Q}^{+}-Q^{+}\|_{2} \leq\frac{2\|Q^{+}\|_{2}^{2}\|\hat{Q}-Q\|_{2}}{1-2\|Q^{+}\|_{2}\| \hat{Q}-Q\|_{2}}. \tag{107}\]
We note that if \(m,d\geq 9\), then
\[\|Q^{+}\|_{2}\|\hat{Q}-Q\|_{2}\leq\frac{\sqrt{md}}{\sigma_{\min}(Q)}\|\hat{Q} -Q\|_{\max}<\frac{\sqrt{md}}{\sigma_{\min}(Q)}\frac{\min(\sigma_{\min}(Q), \sigma_{\min}(\hat{Q}))}{\sqrt{\min(m,d)md}}\leq\frac{1}{\sqrt{\min(m,d)}} \leq\frac{1}{3}. \tag{108}\]
Hence we see that
\[\|\hat{Q}^{+}-Q^{+}\|_{2}\leq 6\|Q^{+}\|_{2}^{2}\|\hat{Q}-Q\|_{2}. \tag{109}\]
Evaluating the loss
\[\Delta\mathcal{L}_{\mathrm{RMSE}} <\frac{6}{\sqrt{d}}\|Y\|_{2}\|Q\|_{2}\|Q^{+}\|_{2}^{2}\|\hat{Q}- Q\|_{2}\leq\frac{6\sqrt{md}}{\sqrt{d}}\|Y\|_{2}\|Q\|_{2}\|Q^{+}\|_{2}^{2}\| \hat{Q}-Q\|_{\max} \tag{110}\] \[<6\|Y\|_{2}\|Q\|_{2}\|Q^{+}\|_{2}^{2}\frac{\min(\sigma_{\min}(Q), \sigma_{\min}(\hat{Q}))}{\sqrt{\min(m,d)d}}\epsilon_{0}. \tag{111}\]
Now setting (noting that \(\epsilon_{0}\) must also satisfy the upper bound of \(1\))
\[\epsilon_{0}\leq\min\left(1,\frac{\epsilon\sqrt{\min(m,d)d}}{6\|Y\|_{2}\|Q\|_{2} \|Q^{+}\|_{2}^{2}\min(\sigma_{\min}(Q),\sigma_{\min}(\hat{Q}))}\right), \tag{112}\]
then \(\Delta\mathcal{L}_{\text{RMSE}}\leq\epsilon\). We hence obtain the element-wise bound
\[\|\hat{Q}-Q\|_{\max}<\epsilon_{0}\frac{\min(\sigma_{\min}(Q),\sigma_{\min}(\hat{Q }))}{\sqrt{\min(m,d)md}}\leq\min\left(\frac{\min(\sigma_{\min}(Q),\sigma_{\min}( \hat{Q}))}{\sqrt{\min(m,d)md}},\frac{\epsilon}{6\sqrt{m}\|Y\|_{2}\|Q\|_{2}\|Q^ {+}\|_{2}^{2}}\right). \tag{119}\]
Lastly, we discuss the error analysis of logistic regression problems. Consider the following formulation of the logistic regression problem: For all data points in \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{d}\), the binary outcome \(Y_{i}\) for point \(x_{i}\) can be described as a Bernoulli-sampled data such that
\[Y_{i}|x_{i}\sim\text{Bernoulli}(p_{i}). \tag{120}\]
Here, \(p_{i}\) is the \(i\)-dependent parameter of the Bernoulli distribution. One can then see that:
\[\mathbb{E}[Y_{i}|x_{i}]=p_{i}. \tag{121}\]
Given the formulation of logistic regression as a generalized linear model, we note that \(\alpha\) is learned such that
\[\text{logit}(\mathbb{E}[Y_{i}|x_{i}]) =\log\frac{p_{i}}{1-p_{i}}=\alpha\cdot x_{i} \tag{122}\] \[\text{logit}(\mathbb{E}[Y_{i}|\hat{x}_{i}]) =\log\frac{\hat{p}_{i}}{1-\hat{p}_{i}}=\hat{\alpha}\cdot\hat{x}_{i} \tag{123}\]
Denoting \(L=\left(\log\frac{p_{1}}{1-p_{1}},\log\frac{p_{2}}{1-p_{2}},\cdots,\log\frac{p_{d}}{1-p_{d}}\right)^{\intercal}\) and \(\hat{L}\) analogously in terms of the \(\hat{p}_{i}\), the fitted parameters are \(\alpha^{*}=Q^{+}L\) and \(\hat{\alpha}^{*}=\hat{Q}^{+}\hat{L}\), so that \(\|\hat{\alpha}^{*}-\alpha^{*}\|_{2}=\|\hat{Q}^{+}\hat{L}-Q^{+}L\|_{2}\leq\|\hat{Q}^{+}-Q^{+}\|_{2}\|L\|_{2}+\|\hat{Q}^{+}\|_{2}\|\hat{L}-L\|_{2}\).
We note that the first term matches the analysis of the RMSE loss for linear systems with \(Y=L\). As \(p_{i}\) and \(\hat{p}_{i}\) are unknown, one cannot directly evaluate the term \(\|\hat{L}-L\|_{2}\). Following Theorem VI.3, we obtain that if the element-wise bound
\[\|\hat{Q}-Q\|_{\max}<\min\left(\frac{\min(\sigma_{\min}(Q),\sigma_{\min}(\hat{Q} ))}{\sqrt{\min(m,d)md}},\frac{\epsilon}{6\sqrt{m}\|L\|_{2}\|Q\|_{2}\|Q^{+}\|_{2 }^{2}}-\frac{\|\hat{Q}^{+}\|_{2}\|\hat{L}-L\|_{2}}{6\sqrt{md}\|L\|_{2}\|Q^{+}\| _{2}^{2}}\right), \tag{10}\]
holds, then the loss difference satisfies \(\Delta\mathcal{L}_{\mathrm{BCE}}\leq\epsilon\).
|
2305.10319 | Automatic Photo Orientation Detection with Convolutional Neural Networks | We apply convolutional neural networks (CNN) to the problem of image
orientation detection in the context of determining the correct orientation
(from 0, 90, 180, and 270 degrees) of a consumer photo. The problem is
especially important for digitizing analog photographs. We substantially
improve on the published state of the art in terms of the performance on one of
the standard datasets, and test our system on a more difficult large dataset of
consumer photos. We use Guided Backpropagation to obtain insights into how our
CNN detects photo orientation, and to explain its mistakes. | Ujash Joshi, Michael Guerzhoy | 2023-05-17T16:00:49Z | http://arxiv.org/abs/2305.10319v2 | # Automatic Photo Orientation Detection with Convolutional Neural Networks
###### Abstract
We apply convolutional neural networks (CNN) to the problem of image orientation detection in the context of determining the correct orientation (from 0, 90, 180, and 270 degrees) of a consumer photo. The problem is especially important for digitizing analog photographs. We substantially improve on the published state of the art in terms of the performance on one of the standard datasets, and test our system on a more difficult large dataset of consumer photos. We use Guided Backpropagation to obtain insights into how our CNN detects photo orientation, and to explain its mistakes.
photo; image orientation; convolutional neural networks; guided backpropagation; visualizing convnets
## I Introduction
In this paper, we address the problem of detecting the correct orientation of a consumer photograph (i.e., \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), or \(270^{\circ}\); see Figure 1) by learning a deep convolutional neural network (CNN). We experiment with standard datasets, on one of which our system performs substantially better than the published state of the art, and we experiment on a large dataset of consumer photos that we collected. We apply Guided Backpropagation [1][2] in order to visualize what our classifier is doing and to explain the mistakes it makes.
We detect the orientation of a photo by learning a classifier that classifies input images into four classes: \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), or \(270^{\circ}\). Our classifier is a deep convolutional neural network whose architecture is a modification of VGG-16 [3], a commonplace architecture used for image classification. We train our classifier on large datasets of photos.
Automatic photo orientation detection can help with speeding up the digitization of analog photos. It is a well-studied problem [4]. To date, learning-based approaches to the problem [4][5][6] consisted of extracting low-level features used in image classification and retrieval such as Histograms of Gradients (HoG) [7] and Colour Moments [8], and sometimes high-level features such as face and object detector outputs [9], and then feeding them into a learned classifier. Such classifiers perform very well on some standard datasets of photos. Examples of such datasets include the Corel stock photo dataset [10], which consists of professional photos, and the SUN-397 database [11] where each photo is labelled as containing a particular scene. In recent years, convolutional neural networks have been used instead of classifying hand-engineered features in object recognition [12], image retrieval [13] and the estimation of image skew [14] (note that this is a distinct problem from the one addressed in this paper: we are interested in accurately classifying a photo into four possible orientation bins, while Fischer et al. attempt to estimate the skew angle, which could be any real number.) In this work, we do the same for the related problem of photo orientation detection. Cao et al. [15] describe another biologically-inspired approach to the estimation of image skew, using a shallow architecture.
Recent visualization techniques for CNNs [1][2][16] have mostly been used for visualizing the function of particular neurons in a deep neural network, but they also allow for exploring how and why CNNs classify images the way they do [17]. We use Guided Backpropagation in order to visualize how our network classifies and misclassifies the orientation of photos, and obtain insight into how it works.
The rest of the paper is organized as follows. We outline our modifications to the VGG-16 architecture to obtain a photo orientation classifier, and detail our training procedure. We then present our experimental results on the standard datasets for the task of photo orientation detection, and compare them to prior work, demonstrating that CNNs are able to detect the orientation more accurately than prior work. We describe our own dataset of consumer photos, and analyze our experimental results on that dataset. Finally, we visualize what our CNN is doing in order to obtain insights into how CNNs detect photo orientation. Our contribution consists of obtaining better than published-state-of-the-art results on the task of image orientation detection, and a demonstration of the use of Guided Backpropagation for analyzing the outputs of a deep neural network.
Figure 1: Correct outputs for different inputs. The possible outputs are \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), and \(270^{\circ}\).
## II Modifying the VGG-16 architecture to build a photo orientation classifier
A common technique for building a CNN classifier for a new domain is to adopt an architecture originally designed for the ImageNet dataset [18], modify it, and apply it to the new domain. See e.g. [19] and [20]. We found that an architecture that is identical to VGG-16, except with \(4\) outputs corresponding to \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), or \(270^{\circ}\) instead of \(1,000\) outputs corresponding to the \(1,000\) object classes in ImageNet, performed the best on our datasets.
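The modification is a one-line change in most deep learning frameworks. The sketch below (ours; the paper does not name a framework) uses PyTorch/torchvision to load VGG-16 with the ImageNet weights used for initialization and swap in a 4-way output layer; it also sets the Dropout probability of \(p=.7\) used during training (Section II-A):

```python
import torch.nn as nn
from torchvision import models

# VGG-16 with ImageNet-pretrained weights, used for initialization (Section II-A).
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Replace the 1,000-way ImageNet output layer with a 4-way orientation head
# (0, 90, 180, and 270 degrees); everything else is identical to VGG-16.
model.classifier[6] = nn.Linear(4096, 4)

# The network is trained with Dropout p = .7 (Section II-A).
for module in model.classifier:
    if isinstance(module, nn.Dropout):
        module.p = 0.7
```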
### _Training the CNN_
We found that initializing the weights of our network to the weights of VGG-16 trained on ImageNet, and then training the network end-to-end, resulted in the best validation performance. This indicates that we benefit from transfer learning: VGG-16 detects 1,000 classes of objects, and object detection is useful for detecting orientation. Initializing our weights to those of VGG-16 likely makes our network converge to nearby weight values.
The set of photos is transformed by rotating all the photos in the original training set by \(0^{\circ}\), \(90^{\circ}\), \(180^{\circ}\), or \(270^{\circ}\). The VGG architecture requires that the input be of size \(224\times 224\times 3\). We resize the input image to fit inside a \(224\times 224\) square, and pad it as necessary with black pixels in order for the input to be \(224\times 224\).
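A possible implementation of this preprocessing with Pillow is sketched below (ours); centering the resized image on the black canvas is our choice, since the text only specifies padding:

```python
from PIL import Image

def resize_and_pad(img: Image.Image, size: int = 224) -> Image.Image:
    """Fit the image inside a size x size square, padding with black pixels."""
    scale = size / max(img.width, img.height)
    resized = img.resize((max(1, round(img.width * scale)),
                          max(1, round(img.height * scale))))
    canvas = Image.new("RGB", (size, size))   # black canvas by default
    # Centering the resized image is our choice; the paper only specifies padding.
    canvas.paste(resized, ((size - resized.width) // 2,
                           (size - resized.height) // 2))
    return canvas

def all_rotations(img: Image.Image):
    """The four rotated copies used to augment the training set."""
    return [img.rotate(angle, expand=True) for angle in (0, 90, 180, 270)]
```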
The network is trained using Dropout with \(p=.7\).
## III Experimental results
### _Prior work_
Ciocca et al. [5] summarize the current state of the art in photo orientation detection on two standard datasets: the Corel stock photo dataset [10] and the SUN-397 database [11]. On the SUN-397 database, the best results were obtained by Ciocca et al. [5], with 92.4% accuracy. On the Corel dataset, the best results were obtained by Vailaya et al. [4], with 97.4% accuracy.
### _Dataset descriptions_
The Corel dataset consists of approximately 10,000 images, separated into 80 concept groups such as autumn, aviation, bonsai, castle, and waterfall. The SUN database consists of about 108,000 images, separated into 397 categories. Our own dataset, collected from Flickr by downloading images corresponding to 26 tags, consists of about 250,000 images.
Some of the images in the Corel dataset have very low resolution. They have been resized to be larger but to still fit into a \(224\times 224\) square. Some images in the Corel dataset are atypical of consumer photos. Sample images from the art_cybr category of the Corel dataset are shown in Fig. 2.
We split all datasets into training (64%), test (20%), and validation (16%) sets, and then transform each of the sets by adding in all the possible rotations of each photo.
### _Experimental results_
The accuracy of our classifiers on the test sets of the datasets under consideration is summarized in Table I. The results for the Corel dataset should be interpreted with caution because of the issues described in Section III-B. We have matched or exceeded the published state of the art on both standard datasets for the task.
### _Discussion_
Our results show that convolutional neural networks match or outperform the published state of the art in image orientation detection on both standard datasets. The Corel dataset appears to not be diverse enough: we suspect that we are overfitting on some of the categories - we are including photos from all categories in both the training and the test set. Our results on our own Flickr dataset indicate that the SUN dataset may not be fully representative of consumer photos. This would not be an issue on our Flickr dataset, since all of our categories are ubiquitous in consumer photos and there is a large degree of intra-category diversity.
## IV Understanding the CNN photo orientation detector using visualization
In this work, we have shown that a deep architecture is able to detect photo orientation better than any of the published results employing shallow architectures that use combinations of low and high level features. It is of interest to see how the deep architecture is able to classify the photos, both in order to understand how the deep architecture classifies the photos, and in order to explain its mistakes. We show how to use Guided Backpropagation [1] to better understand what our CNN is doing.
Visualizing CNNs involves visualizing the roles of individual neurons. To visualize the role of an individual neuron, researchers found patches of real images that activate that neuron the most [2], used methods similar to gradient ascent in order to synthesize images that activate that neuron the most [16], or visualized the change in images that would increase the activity of the neuron the most [1][2]. These approaches can also be used in combination with each other. Recent work [17] employed Guided Backpropagation in the context of object recognition.
Figure 2: Some images from the Corel dataset are not representative of consumer photos.
We are interested, for every image in the test set, in explaining why our CNN obtained the answer that it did. That means that, when the input is a specific image of interest, we want to visualize the output neuron of our CNN whose activity is the largest of all four output neurons.
### _Guided Backpropagation_
We use a variant of Guided Backpropagation to explain the activity of our output neurons. Guided Backpropagation computes a modified version of the gradient of a particular neuron with respect to the input. We display that modified gradient as a saliency map. We are interested in an explanation for the network's output. For that reason, if, for a specific image \(x\), the network's maximal output is the \(m\)-th unit \(p_{m}\), we produce a saliency map that is computed similarly to \(\partial p_{m}/\partial x\), but is clearer than the gradient.
If the absolute value of the gradient \(|\frac{\partial\text{neuron}}{\partial x_{i}}|\) is large, that means that increasing (or decreasing) \(x_{i}\) would influence the neuron. However, there can be a number of mechanisms for that to happen: one possibility is that the pixel \(x_{i}\) currently activates a feature that, when activated, increases the activity of a higher level feature, which in turn activates an even higher-level feature, which in turn activates the neuron of interest. Another possibility is that the pixel \(x_{i}\) activates a feature that in turn turns off a higher-level feature, which in turn activates an even higher-level feature, which in turn activates the neuron of interest. We do not want to visualize \(x_{i}\) as influencing the output neuron in case \(|\frac{\partial\text{neuron}}{\partial x_{i}}|\) is large for the second reason. That is because if \(x_{i}\)'s changing depresses some feature more, causing the final output to be higher, \(x_{i}\) provides evidence for the _absence_ of some feature in the image. Since numerous features are absent but only a few are present, it makes less sense to take into account evidence for the absence of features when visualizing the saliency map that indicates which pixels influence the output. Empirically, \(\frac{\partial\text{neuron}}{\partial x}\) is very noisy [1].
Guided Backpropagation is a way of visualizing which pixels provide evidence for the _presence_ of features in the input image that influence the output neuron. The pixels that are visualized never _depress_ features causing the neuron of interest to activate. Instead, they only activate features throughout the layers of the network. For the network in Fig. 3, the pixel \(x_{i}\) will be prominent in the saliency map that corresponds to the output \(p_{m}\) only if there is a path between \(x_{i}\) and \(p_{m}\) such that all the hidden ReLU units along that path are activated and all the partial derivatives along that path (i.e., \(\partial h_{n}/\partial h_{j}\), and \(\partial h_{j}/\partial x_{i}\)) are positive.
(Note that in a network that only uses ReLU activation functions, we can speak of features that correspond to ReLU units being "activated" or "depressed," referring to neurons' outputs being positive or zero, respectively. With activation functions that can take positive or negative values, this would not be possible.)
The saliency map that visualizes what a neuron of interest \(p_{m}\) is doing for a specific image is computed using Guided Backpropagation as follows. Partial derivatives are computed as if a Backpropagation pass is being computed, except that negative partial derivatives are set to 0 before proceeding to the layer below each time. The result is a "modified version" of \(\partial p_{m}/\partial x\). The modified \(\partial p_{m}/\partial x\) is high for \(x_{i}\)'s that, if they are increased, increase the activations of already-active hidden neurons that correspond to features detected in the image that contribute to \(p_{m}\)'s being high.
The result is a saliency map where pixels that provide positive evidence for features that contribute to the output \(p_{m}\)'s being high are displayed.
Most of the pixels on the computed saliency map are generally black. There are two reasons for this. First, for most \(x_{i}\), \(\partial p_{m}/\partial x_{i}\) is very close to 0, since most pixels do not activate higher-level features. Second, since in the saliency map produced using Guided Backpropagation only pixels that provide positive evidence for \(p_{m}\)'s being high all the way up the network are displayed, there are many more 0-valued pixels in the saliency map than in \(\partial p_{m}/\partial x\).
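The modified backward pass is straightforward to implement with backward hooks. The sketch below is our PyTorch rendering of the procedure in this section, assuming `model` is a ReLU-only network such as our modified VGG-16 and `image` is a \(1\times 3\times 224\times 224\) tensor:

```python
import torch

def guided_backprop_saliency(model, image):
    """Saliency map for the maximal output via Guided Backpropagation.

    Negative partial derivatives are set to zero at every ReLU before
    proceeding to the layer below, as described in Section IV-A.
    """
    handles = []
    for module in model.modules():
        if isinstance(module, torch.nn.ReLU):
            module.inplace = False  # full backward hooks need out-of-place ReLUs
            handles.append(module.register_full_backward_hook(
                lambda mod, grad_in, grad_out: (torch.clamp(grad_in[0], min=0.0),)))
    model.eval()
    x = image.clone().requires_grad_(True)
    out = model(x)                      # shape (1, 4): one logit per orientation
    p_m = out[0, out.argmax()]          # the largest of the four outputs, p_m
    p_m.backward()
    for h in handles:
        h.remove()
    return x.grad.detach()              # the "modified" gradient d p_m / d x
```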
### _Explaining correct predictions by the CNN using Guided Backpropagation_
In this section, we provide several examples, generated using Guided Backpropagation, of explanations of how the CNN detected the correct orientation of photos. The explanations are generated by computing the Guided Backpropagation saliency map using the algorithm described in Section IV-A with respect to the output \(p_{i}\), where \(i\) is the correct orientation and \(p_{i}\) was the largest output. The interpretations of the visualizations are necessarily speculative, but the visualizations are suggestive.
Figure 3: A path in a network from \(x_{i}\) to \(p_{m}\) where all the units along the path are activated and the weights connecting them are all positive. \(x_{i}\) would be visualized on the saliency map when using Guided Backpropagation.
Light fixtures are usually reliable cues for orienting indoor photos. In Figure 4, we display an example of a correctly oriented indoor photo, together with a Guided Backpropagation visualization. Interestingly, it is the shape of the light fixture that seems to be the cue. Items that look like light fixtures sometimes seem to mislead the classifier. For example, in Figure 5, it appears that a wine glass was "mistaken" by the classifier for a light fixture.
Objects commonly found in scenes can be useful for orienting a photo. For example, in Figure 7, it appears that the shapes of the birds were useful in correctly orienting the photos.
### _Explaining mistakes by the CNN using Guided Backpropagation_
In this section, we provide several examples of explanations of how the CNN detected the _incorrect_ orientation of a photo that were generated using Guided Backpropagation. One example (Figure 5) was already shown. It appears that the CNN detects numerous objects and uses object detections as cues for orientation detection. In the example in Figure 5, the CNN seems to have incorrectly identified a wineglass as a light fixture.
In Fig 8, another interesting mistake is made. It appears that the rooster is used as a cue, but the image is nevertheless oriented incorrectly by the classifier. From the visualization, it appears plausible that the network would "think" that the bird it detected is oriented upright in the incorrectly-rotated image.
Figure 4: A correctly-oriented photo. The Guided Backpropagation visualization indicates that the outline of the light fixture was a cue for correctly orienting the image.
Figure 5: The classifier determined that the photo is upright, but it should be rotated by \(180^{\circ}\). The Guided Backpropagation visualization indicates that the outline of the wine glass was useful for orienting the photo, suggesting that the wine glass was mistaken for a light fixture.
## V Conclusions and Future Work
In this paper, we demonstrated that deep convolutional neural networks (CNNs) outperform shallow architectures for the task of image orientation detection. We used Guided Backpropagation in order to explain both the correct and incorrect outputs of our classifier. We have shown that the CNN uses object detections in order to perform image orientation detection. Further evidence of this is that initializing the weights of our CNN to those of the VGG-16 network trained on ImageNet improves performance, suggesting that transfer learning is useful for image orientation detection (likely because the lower-layer weights converge to values close to those of VGG-16 under this initialization).
We plan to systematically study the outputs of our Guided Backpropagation visualizations in order to obtain quantitative insights about the behaviour of the CNN.
|
2310.03879 | Non Commutative Convolutional Signal Models in Neural Networks:
Stability to Small Deformations | In this paper we discuss the results recently published in~[1] about
algebraic signal models (ASMs) based on non commutative algebras and their use
in convolutional neural networks. Relying on the general tools from algebraic
signal processing (ASP), we study the filtering and stability properties of non
commutative convolutional filters. We show how non commutative filters can be
stable to small perturbations on the space of operators. We also show that
although the spectral components of the Fourier representation in a non
commutative signal model are associated to spaces of dimension larger than one,
there is a trade-off between stability and selectivity similar to that observed
for commutative models. Our results have direct implications for group neural
networks, multigraph neural networks and quaternion neural networks, among
other non commutative architectures. We conclude by corroborating these results
through numerical experiments. | Alejandro Parada-Mayorga, Landon Butler, Alejandro Ribeiro | 2023-10-05T20:27:22Z | http://arxiv.org/abs/2310.03879v1 | # Non Commutative Convolutional Signal Models in Neural Networks: Stability to Small Deformations
###### Abstract
In this paper we discuss the results recently published in [1] about algebraic signal models (ASMs) based on non commutative algebras and their use in convolutional neural networks. Relying on the general tools from algebraic signal processing (ASP), we study the filtering and stability properties of non commutative convolutional filters. We show how non commutative filters can be stable to small perturbations on the space of operators. We also show that although the spectral components of the Fourier representation in a non commutative signal model are associated to spaces of dimension larger than one, there is a trade-off between stability and selectivity similar to that observed for commutative models. Our results have direct implications for group neural networks, multigraph neural networks and quaternion neural networks, among other non commutative architectures. We conclude by corroborating these results through numerical experiments.
Alejandro Parada-Mayorga\({}^{*}\), Landon Butler\({}^{\dagger}\), and Alejandro Ribeiro\({}^{*}\)+University of Pennsylvania+University of California, Berkeley+
Footnote †: This document is a conference version of [1] which was recently published in IEEE-TSP. The second author acknowledges support by the NSF Graduate Research Fellowship under Grant No. DGE-2146752.
Non commutative convolutional architectures, Algebraic Neural Networks (AlgNNs), algebraic signal processing (ASP), non commutative algebras, non commutative operators.
## 1 Introduction
Convolutional signal models have become ubiquitous tools in machine learning. Understanding their fundamental properties plays a central role in explaining convolutional neural networks' good performance and limitations in different domains. One such property is that of stability to deformations. Previous works have shown that the good performance of convolutional architectures can be explained in part by the fact that for the same discriminability, a convolutional network is more stable to deformations than the corresponding convolutional filters [2, 3, 4, 5, 6]. These results apply in commutative scenarios such as graph filters and graph neural networks [5], Euclidean filters and traditional convolutional neural networks [4], graphon filters and graphon neural networks [7, 8], and commutative group filters and their associated group neural networks [2, 3]. However, the question of stability has remained open for non commutative signal models and architectures such as multigraph neural networks [9, 10], Lie group neural networks [11, 12], quaternion neural networks [13], and quiver neural networks [14].
In this work we develop a formal description of non commutative signal models as a particular instantiation of an algebraic signal model with a non commutative algebra. We leverage this algebraic representation to study the stability of non commutative convolutional filters and their associated neural networks. We show that although the spectral representations of non commutative convolutional filters are described by spaces whose dimension is larger than one, there is still a trade-off between discriminability and stability, similar to the one observed in commutative scenarios. Additionally, we show that for the same level of discriminability, the neural networks are more stable than the corresponding non commutative convolutional filters, under the notion of deformation and stability used in [2, 3]. We also prove that the stability bounds for non commutative convolutional networks are a scaled version of the bounds derived for the filters. This implies that the networks inherit the stability properties from the filters as a consequence of the attributes of the pointwise nonlinearities and the pooling operators. We conclude with numerical experiments to validate our results.
## 2 Non Commutative Convolutional Signal Models
The general notion of a convolutional signal model in the algebraic sense was first introduced by Püschel in [15, 16, 17, 18] and is known as _algebraic signal processing (ASP)_. In this context, any convolutional signal model is given by a triplet \((\mathcal{A},\mathcal{M},\rho)\), where \(\mathcal{A}\) is a unital associative algebra, \(\mathcal{M}\) is a vector space and \(\rho\) is a homomorphism from \(\mathcal{A}\) to the space of endomorphisms of \(\mathcal{M}\), \(\text{End}(\mathcal{M})\). The elements in \(\mathcal{A}\) are the filters, signals are the elements of \(\mathcal{M}\) and \(\rho:\mathcal{A}\rightarrow\text{End}(\mathcal{M})\) instantiates the abstract filters in \(\mathcal{A}\) into concrete operators acting on \(\mathcal{M}\).
Most applications of the ASP paradigm are associated with scenarios where the filters are determined by commutative operators [7, 8, 19, 20, 21]. However, it is possible to rely on the same framework to introduce convolutional non commutative models choosing \(\mathcal{A}\) as a non commutative algebra [1]. Then, any non commutative convolutional model to be considered in this paper can be described by the triplet \((\mathcal{A},\mathcal{M},\rho)\) where \(\mathcal{A}\) is a non commutative algebra. If \(\mathbf{x}\in\mathcal{M}\) is a signal, we refer to the signal \(\mathbf{y}=\rho(a)\mathbf{x}\) as the filtered version of \(\mathbf{x}\) by means of the filter \(a\in\mathcal{A}\). More specifically, we say that \(\mathbf{y}\) is the result of performing a convolution between the filter \(a\in\mathcal{A}\) and the signal \(\mathbf{x}\in\mathcal{M}\).
To simplify the representation of the elements in \(\mathcal{A}\), we leverage the concept of a _generator set_ of an algebra. We say that the set \(\mathcal{G}\subset\mathcal{A}\) generates the algebra \(\mathcal{A}\) if any element of \(\mathcal{A}\) can be written as a polynomial of the elements in \(\mathcal{G}\). For instance, if \(\mathcal{G}=\{g_{1},g_{2}\}\), then any element \(a\in\mathcal{A}\) can be written as \(a=p(g_{1},g_{2})\), where \(p\) is a polynomial with two independent variables. If \(\mathcal{A}\) is non commutative, the elements of \(\mathcal{G}\) do not commute. The image of a generator, \(g\in\mathcal{G}\), under \(\rho\) is known as a _shift operator_, which we denote by \(\mathbf{S}=\rho(g)\). Since \(\rho\) is a homomorphism, it is a linear map that preserves the products in \(\mathcal{A}\). Therefore, the image of any filter in \(\mathcal{A}\) can be written as a polynomial of the shift operators. Additionally,
the coefficients of such polynomials are the same used for the filter in \(\mathcal{A}\). For instance, if \(\mathcal{G}=\{g_{1},g_{2}\}\) and \(a=p(g_{1},g_{2})\), we have \(\rho(a)=p(\mathbf{S}_{1},\mathbf{S}_{2})\) where \(\mathbf{S}_{i}=\rho(g_{i})\).
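To make the notation concrete, the following NumPy sketch (our toy example) instantiates two non-commuting shift operators on \(\mathcal{M}=\mathbb{R}^{3}\) and applies the filter \(a=g_{1}+5g_{1}g_{2}+g_{2}^{2}\) (the filter that reappears in Example 1 below) by evaluating the same polynomial, with the same coefficients, on the shift operators:

```python
import numpy as np

rng = np.random.default_rng(1)
S1, S2 = rng.standard_normal((2, 3, 3))   # shift operators S_i = rho(g_i) on M = R^3
assert not np.allclose(S1 @ S2, S2 @ S1)  # the generators do not commute

# rho(a) for the filter a = p(g1, g2) = g1 + 5 g1 g2 + g2^2:
# the homomorphism keeps the polynomial coefficients and swaps g_i for S_i.
rho_a = S1 + 5 * S1 @ S2 + S2 @ S2

x = rng.standard_normal(3)
y = rho_a @ x                             # y = rho(a) x, the filtered signal
```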
## 3 Fourier decompositions of convolutional non commutative filters
The notion of Fourier decomposition in ASP is rooted in the concept of _decomposition in terms of irreducible subrepresentations_[1, 2, 15]. To see this, let us recall a few basic facts about ASM using concepts from representation theory of algebras. First, we emphasize that for any ASM, \((\mathcal{A},\mathcal{M},\rho)\), the pair \((\mathcal{M},\rho)\) constitutes a representation of \(\mathcal{A}\) in the sense of [22, 23]. Then, a representation \((\mathcal{U},\rho)\) of \(\mathcal{A}\) is a _subrepresentation_ of \((\mathcal{M},\rho)\) if \(\mathcal{U}\subseteq\mathcal{M}\) and \(\rho(a)u\in\mathcal{U}\) for all \(u\in\mathcal{U}\) and \(a\in\mathcal{A}\)[1]. A subrepresentation of \((\mathcal{M},\rho)\) can be obtained when one finds a subspace of \(\mathcal{M}\) invariant under the action of any operator \(\rho(a)\). Using this concept, we introduce the essential concept of irreducible representation. We say that \((\mathcal{M},\rho)\), when \(\mathcal{M}\neq 0\), is _irreducible_ or simple if the only subrepresentations of \((\mathcal{M},\rho)\) are \((0,\rho)\) and \((\mathcal{M},\rho)\). Finally, we state the notion of decomposition of a representation as a sum of other representations. Let \((\mathcal{U}_{1},\rho_{1})\) and \((\mathcal{U}_{2},\rho_{2})\) be two representations of \(\mathcal{A}\). Then, we can obtain a new representation called the _direct sum representation_, given by \((\mathcal{U}_{1}\oplus\mathcal{U}_{2},\rho)\), where \(\rho(a)(u_{1}\oplus u_{2})\cong\rho_{1}(a)u_{1}\oplus\rho_{2}(a)u_{2}\).
With these concepts at hand, we present the definition of Fourier decomposition that we will use for the frequency characterization of convolutional non commutative filters.
**Definition 1** (Fourier Decomposition [1]).: _For an algebraic signal model \((\mathcal{A},\mathcal{M},\rho)\), we say that there is a spectral or Fourier decomposition if_
\[(\mathcal{M},\rho)\cong\bigoplus_{(\mathcal{U}_{i},\phi_{i})\in\text{Irr}(\mathcal{A})}(\mathcal{U}_{i},\phi_{i}), \tag{1}\]
_where the \((\mathcal{U}_{i},\phi_{i})\) are irreducible subrepresentations of \((\mathcal{M},\rho)\). Any signal \(\mathbf{x}\in\mathcal{M}\) can therefore be represented by the map \(\Delta\) given by \(\Delta:\mathcal{M}\rightarrow\bigoplus_{(\mathcal{U}_{i},\phi_{i})\in\text{Irr}(\mathcal{A})}\mathcal{U}_{i}\), with \(\mathbf{x}\mapsto\hat{\mathbf{x}}\), known as the Fourier decomposition of \(\mathbf{x}\). The projection of \(\hat{\mathbf{x}}\) on \(\mathcal{U}_{i}\) is the \(i\)-th Fourier component of \(\mathbf{x}\) and is represented by \(\hat{\mathbf{x}}(i)\)._
Although abstract, Definition 1 offers a consistent, general and rigorous description of the Fourier transform under any ASM, which is independent of the domain of the signals. Additionally, from Definition 1, there are three fundamental observations in terms of the practicality of the Fourier decomposition and how to find it. First, the Fourier decomposition emphasizes a decomposition of the signals in terms of spaces that are invariant under the action of any \(\rho(a)\). Second, such minimum invariant subspaces are given by the irreducible subrepresentations of \((\mathcal{M},\rho)\). Third, the restriction of \(\rho(a)\) to each invariant subspace \(\mathcal{U}_{i}\) is determined by a homomorphism \(\phi_{i}\) from \(\mathcal{A}\) to \(\text{End}(\mathcal{U}_{i})\). Additionally, the \(\phi_{i}(a)\) determine the Fourier representation of \(a\) in the \(i\)-th frequency component.
Note that Definition 1 is used for commutative and non commutative models alike. However, its instantiation differs substantially between the two scenarios. While for a commutative model \(\text{dim}(\mathcal{U}_{i})=1\), in non commutative models it must be that \(\text{dim}(\mathcal{U}_{i})>1\) for at least one \(i\). This implies that the spectral representations of non commutative filters are characterized by matrix polynomials. To see this, let us consider the scenario where \(\mathcal{A}\) is non commutative with generator set \(\mathcal{G}=\{g_{1},g_{2}\}\). Since for at least one of the \(\mathcal{U}_{i}\) we have \(\text{dim}(\mathcal{U}_{i})>1\), the space of endomorphisms \(\text{End}(\mathcal{U}_{i})\) is a space of matrices of size \(\text{dim}(\mathcal{U}_{i})\times\text{dim}(\mathcal{U}_{i})\). Then, the Fourier representation of a filter \(a=p(g_{1},g_{2})\) is given by \(\phi_{i}(a)=p(\phi_{i}(g_{1}),\phi_{i}(g_{2}))\), which is a matrix polynomial where the independent variables are the matrices \(\phi_{i}(g_{1})\) and \(\phi_{i}(g_{2})\). The projection of \(\mathbf{x}\) on \(\mathcal{U}_{i}\), \(\hat{\mathbf{x}}(i)\), is the \(i\)-th Fourier component of \(\mathbf{x}\).
As pointed out in [9, 10], when the dimension of \(\mathcal{M}\) is finite, the problem of finding the Fourier decomposition of an ASM translates into solving a joint block diagonalization problem involving the shift operators. To see this, consider that \(\mathcal{A}\) has an arbitrary number of generators, \(m\), and let \((\mathcal{A},\mathcal{M},\rho)\) be an ASM with shift operators \(\{\mathbf{S}_{i}\}_{i=1}^{m}\), which have a joint block diagonalization given by \(\mathbf{S}_{i}=\mathbf{U}\text{diag}\left(\mathbf{\Sigma}_{1}^{(i)},\ldots,\mathbf{\Sigma}_{\ell}^{(i)}\right)\mathbf{U}^{T}\), with \(\mathbf{\Sigma}_{j}^{(i)}\in\mathbb{R}^{p_{j}\times p_{j}}\), where \(\mathbf{U}\) is orthogonal. Then, if we choose \(d=\max_{j}\{p_{j}\}\) and \(\mathbf{\Lambda}_{i}\in\mathbb{R}^{d\times d}\), we have that the Fourier representation of a filter \(p(\mathbf{S}_{1},\ldots,\mathbf{S}_{m})=\rho\left(p(g_{1},\ldots,g_{m})\right)\) is given by the matrix polynomial
\[p\left(\mathbf{\Lambda}_{1},\ldots,\mathbf{\Lambda}_{m}\right):\left(\mathbb{R}^{d \times d}\right)^{m}\rightarrow\mathbb{R}^{d\times d}, \tag{2}\]
where \(\left(\mathbb{R}^{d\times d}\right)^{m}\) is the \(m\)-times cartesian product of \(\mathbb{R}^{d\times d}\). To clarify the concepts discussed above, we present the following example.
**Example 1**.: Let \((\mathcal{A},\mathbb{R}^{3},\rho)\) be a non commutative ASM where \(\mathcal{A}\) has generator set \(\mathcal{G}=\{g_{1},g_{2}\}\) and where \((\mathcal{M},\rho)=(\mathcal{U}_{1},\phi_{1})\oplus(\mathcal{U}_{2},\phi_{2})\) with \(\text{dim}(\mathcal{U}_{1})=1\) and \(\text{dim}(\mathcal{U}_{2})=2\). Then, the Fourier representation of the filter \(p(g_{1},g_{2})=g_{1}+5g_{1}g_{2}+g_{2}^{2}\) is given by the matrix polynomial \(p(\mathbf{\Lambda}_{1},\mathbf{\Lambda}_{2})=\mathbf{\Lambda}_{1}+5\mathbf{\Lambda}_{1}\mathbf{\Lambda}_{2}+\mathbf{\Lambda}_{2}^{2}\) where \(\mathbf{\Lambda}_{i}\in M_{2\times 2}\).
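The relation in (2) can be verified numerically for Example 1. In the sketch below (ours), both shift operators share one \(1\times 1\) block and one \(2\times 2\) block conjugated by a common orthogonal \(\mathbf{U}\), and the filter acts on each frequency component through the corresponding matrix polynomial:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # shared orthogonal basis

# One 1x1 block (U_1) and one 2x2 block (U_2) per generator, as in Example 1.
L1a, L1b = rng.standard_normal((1, 1)), rng.standard_normal((2, 2))
L2a, L2b = rng.standard_normal((1, 1)), rng.standard_normal((2, 2))
S1 = U @ block_diag(L1a, L1b) @ U.T
S2 = U @ block_diag(L2a, L2b) @ U.T

p = lambda A, B: A + 5 * A @ B + B @ B             # p(g1, g2) = g1 + 5 g1 g2 + g2^2

# The filter acts block-wise through the matrix polynomial p(Lambda_1, Lambda_2):
assert np.allclose(p(S1, S2),
                   U @ block_diag(p(L1a, L2a), p(L1b, L2b)) @ U.T)
```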
## 4 Algebraic Neural Networks

An algebraic neural network (AlgNN) is a stacked layered structure in which the \(\ell\)-th layer processes its input signal \(\mathbf{x}_{\ell-1}\) with a convolutional signal model \((\mathcal{A}_{\ell},\mathcal{M}_{\ell},\rho_{\ell})\), producing
\[\mathbf{x}_{\ell}=\sigma_{\ell}\left(\rho_{\ell}(a_{\ell})\mathbf{x}_{\ell-1}\right),\qquad a_{\ell}\in\mathcal{P}_{\ell}\subset\mathcal{A}_{\ell}, \tag{3}\]
where \(\sigma_{\ell}\) is the composition between the pointwise nonlinearity, \(\eta_{\ell}\), and the pooling operator, \(P_{\ell}\). \(\mathcal{P}_{\ell}\) indicates a subset of filters, and \(\mathcal{S}_{\ell}\) is the set of admissible shift operators. We use the symbol \(\Phi(\mathbf{x}_{\ell-1},\mathcal{P}_{\ell},\mathcal{S}_{\ell})\) to emphasize that the filters used in the processing of the signal \(\mathbf{x}_{\ell-1}\) belong to the subset \(\mathcal{P}_{\ell}\subset\mathcal{A}_{\ell}\). The map associated with the whole AlgNN is \(\Phi\left(\mathbf{x},\{\mathcal{P}_{\ell}\}_{\ell}^{L},\{\mathcal{S}_{\ell}\}_{\ell}^{L}\right)\) acting on an input signal \(\mathbf{x}\). We will use \(\{(\mathcal{A}_{\ell},\mathcal{M}_{\ell},\rho_{\ell};\sigma_{\ell})\}_{\ell=1}^{L}\) to denote an AlgNN with \(L\) layers, emphasizing the convolutional model used in each layer, \((\mathcal{A}_{\ell},\mathcal{M}_{\ell},\rho_{\ell})\), and the map \(\sigma_{\ell}\).
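A single layer of this map can be sketched as follows (our minimal NumPy illustration of (3); the filter coefficients would normally be learned, and subsampling stands in for the pooling operator \(P_{\ell}\)):

```python
import numpy as np

def poly_filter(c, S1, S2, x):
    """rho_l(a) x for a small filter a = c0*1 + c1*g1 + c2*g2 + c3*g1*g2."""
    n = S1.shape[0]
    return (c[0] * np.eye(n) + c[1] * S1 + c[2] * S2 + c[3] * S1 @ S2) @ x

def algnn_layer(c, S1, S2, x):
    # sigma_l = P_l composed with a pointwise nonlinearity eta_l.
    z = poly_filter(c, S1, S2, x)   # convolutional processing in layer l
    z = np.maximum(z, 0.0)          # eta_l: pointwise nonlinearity (ReLU)
    return z[::2]                   # P_l: simple subsampling as the pooling operator
```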
## 5 Perturbations in Non Commutative Convolutional Models
The homomorphism, \(\rho\), in any ASM, \((\mathcal{A},\mathcal{M},\rho)\), instantiates or implements the filters in the algebra. Therefore, it is natural to consider perturbations as deformations on the homomorphism [1, 2]. In this context we say that \((\mathcal{A},\mathcal{M},\tilde{\rho})\) is a perturbed version of \((\mathcal{A},\mathcal{M},\rho)\), with the action of \(\tilde{\rho}\) given by
\[\tilde{\rho}(p(g_{1},\ldots,g_{m}))=p\big{(}\tilde{g}_{1},\ldots,\tilde{g}_{m }\big{)}=p\big{(}\tilde{\mathbf{S}}_{1},\ldots,\tilde{\mathbf{S}}_{m}\big{)}, \tag{4}\]
where \(\tilde{\mathbf{S}}_{i}=\mathbf{S}_{i}+\mathbf{T}(\mathbf{S}_{i})\) and \(\mathbf{T}(\mathbf{S}_{i})\) is a small diffeomorphism in the space of operators, such that the perturbation on the homomorphism acts as a perturbation on the shift operators. For our discussion, we consider perturbations of the form
\[\mathbf{T}(\mathbf{S}_{i})=\mathbf{T}_{0}+\mathbf{T}_{1}\mathbf{S}_{i}, \tag{5}\]
where the \(\mathbf{T}_{i}\) are fixed compact normal operators with \(\|\mathbf{T}_{i}\|\ll 1\) and \(\|\mathbf{T}_{i}\|_{F}\leq\delta\left\|\mathbf{T}_{i}\right\|,\) with \(\delta>0\).
When considering the perturbation of an AlgNN, we will use \(\{(\mathcal{A}_{\ell},\mathcal{M}_{\ell},\tilde{\rho}_{\ell};\sigma_{\ell})\}_ {\ell=1}^{L}\) to denote the perturbed version of \(\{(\mathcal{A}_{\ell},\mathcal{M}_{\ell},\rho_{\ell};\sigma_{\ell})\}_{\ell=1}^ {L}\). In this way, we emphasize that we consider perturbations for all convolutional operators in the network.
## 6 Stability Results
Taking into account the notion of perturbation/deformation introduced before, we now introduce the definition of stability considered under our analysis.
**Definition 2** ([1, 2]).: _Given filters \(p(\mathbf{S}_{1},\ldots,\mathbf{S}_{m})\) and \(p(\tilde{\mathbf{S}}_{1},\ldots,\tilde{\mathbf{S}}_{m})\) defined on the models \((\mathcal{A},\mathcal{M},\rho)\) and \((\mathcal{A},\mathcal{M},\tilde{\rho})\) respectively, we say that \(p\) is stable if there exist constants \(C_{0},C_{1}>0\) such that_
\[\Big{\|}p(\mathbf{S}_{1},\ldots,\mathbf{S}_{m})\mathbf{x}-p( \tilde{\mathbf{S}}_{1},\ldots,\tilde{\mathbf{S}}_{m})\mathbf{x}\Big{\|}\leq\] \[\left[C_{0}\sup_{\mathbf{S}_{i}\in\mathcal{S}}\|\mathbf{T}( \mathbf{S}_{i})\|+C_{1}\sup_{\mathbf{S}_{i}\in\mathcal{S}}\left\|D_{\mathbf{ T}}(\mathbf{S}_{i})\right\|+\mathcal{O}\left(\|\mathbf{T}(\mathbf{S}_{i})\|^{2} \right)\right]\|\mathbf{x}\|, \tag{6}\]
_for all \(\mathbf{x}\in\mathcal{M}\). In (6), \(D_{\mathbf{T}}(\mathbf{S})\) is the Fréchet derivative of the perturbation operator \(\mathbf{T}\)._
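Definition 2 can be probed empirically. The sketch below (ours) builds shift operators, perturbs them according to the model in (5) with small symmetric (hence normal) \(\mathbf{T}_{0},\mathbf{T}_{1}\), and measures the relative output difference of a fixed filter, which remains of the order of \(\|\mathbf{T}_{i}\|\), as the bound in (6) predicts for well-behaved filters:

```python
import numpy as np

rng = np.random.default_rng(3)
n, eps = 8, 1e-3
S1, S2 = rng.standard_normal((2, n, n))                # shift operators S_1, S_2

sym = lambda A: (A + A.T) / 2                          # symmetric, hence normal
T0 = eps * sym(rng.standard_normal((n, n)))            # fixed T_0, T_1 with small norm
T1 = eps * sym(rng.standard_normal((n, n)))
perturb = lambda S: S + (T0 + T1 @ S)                  # perturbation model (5)

p = lambda A, B: 0.5 * A + 0.1 * A @ B + 0.1 * B @ B   # a fixed filter p(S1, S2)

x = rng.standard_normal(n)
diff = np.linalg.norm(p(S1, S2) @ x - p(perturb(S1), perturb(S2)) @ x)
print(diff / np.linalg.norm(x),                        # relative output difference
      np.linalg.norm(T0, 2), np.linalg.norm(T1, 2))    # vs. the perturbation sizes
```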
To formally state the stability results, we need to first introduce definitions regarding subsets of filters in the algebra. To do so, we exploit the Fourier representation of the filters. We say that a filter \(p(g_{1},\ldots,g_{m})\in\mathcal{A}\) with Fourier representation \(p(x_{1},\ldots,x_{m})\) is \(L_{0}\)-Lipschitz if
\[\|p(x_{1},\ldots,x_{m})-p(\tilde{x}_{1},\ldots,\tilde{x}_{m})\|\\ \leq L_{0}\left\|(x_{1},\ldots,x_{m})-(\tilde{x}_{1},\ldots,\tilde{ x}_{m})\right\|, \tag{7}\]
for all \(x_{i}\), \(\tilde{x}_{i}\). Additionally, we say that \(p(g_{1},\ldots,g_{m})\) is \(L_{1}\)-integral Lipschitz if there exists \(L_{1}>0\) such that
\[\left\|\mathbf{D}_{p|x_{i}}(x_{1},\ldots,x_{m})\left\{(\cdot)\,x_{i}\right\} \right\|\leq L_{1}, \tag{8}\]
for all \(x_{i}\), where \(\mathbf{D}_{p|x_{i}}(x_{1},\ldots,x_{m})\) is the partial Fréchet derivative of \(p(x_{1},\ldots,x_{m})\), and \(\|\cdot\|\) is the operator norm. We will denote by \(\mathcal{A}_{L_{0}}\) and \(\mathcal{A}_{L_{1}}\) the sets of \(L_{0}-\)Lipschitz and \(L_{1}\)-integral Lipschitz filters in \(\mathcal{A}\), respectively.
With these results at hand, we are ready to introduce our first stability result.
**Theorem 1** ([1]).: _Let \((\mathcal{A},\mathcal{M},\rho)\) be a non commutative ASM where \(\mathcal{A}\) has generators \(\{g_{i}\}_{i=1}^{m}\). Let \((\mathcal{A},\mathcal{M},\tilde{\rho})\) be a perturbed version of \((\mathcal{A},\mathcal{M},\rho)\) associated to the perturbation model in (5). If \(p\in\mathcal{A}_{L_{0}}\cap\mathcal{A}_{L_{1}}\subset\mathcal{A}\), the operator \(p(\mathbf{S}_{1},\ldots,\mathbf{S}_{m})\) is stable in the sense of Definition 2 with \(C_{0}=m\delta L_{0}\) and \(C_{1}=m\delta L_{1}\)._
Proof.: See [1].
Theorem 1 shows that one can find stable filters in non commutative convolutional models, and that stability is linked to the selectivity of the filters. More specifically, a reduction in the magnitude of \(L_{0}\) and \(L_{1}\) implies a reduction in the values of the stability constants, i.e. the filter is more stable. However, low values of \(L_{0}\) and \(L_{1}\) imply that the variability of the filter is reduced. In particular, a small value of \(L_{1}\) forces the filter to be flat for large frequencies. From these observations, we can see that although the non commutativity of the signal model substantially changes the nature of the Fourier representation of the filters (matrix functions instead of scalar functions), there is still a trade-off between selectivity and stability analogous to that observed for commutative models.
Now, we state a stability result for the operator for an arbitrary layer of any non commutative algebraic neural network.
**Theorem 2** ([1]).: _Let \(\{(\mathcal{A}_{\ell},\mathcal{M}_{\ell},\rho_{\ell};\sigma_{\ell})\}_{\ell=1}^ {L}\) be an algebraic neural network and \(\{(\mathcal{A}_{\ell},\mathcal{M}_{\ell},\tilde{\rho}_{\ell};\sigma_{\ell})\}_ {\ell=1}^{L}\) its perturbed version by means of the perturbation model in (5). We consider one feature per layer and non commutative algebras \(\mathcal{A}_{\ell}\) with \(m\) generators. If \(\Phi\left(\mathbf{x}_{\ell-1},\mathcal{P}_{\ell},\mathcal{S}_{\ell}\right)\) and \(\Phi\left(\mathbf{x}_{\ell-1},\mathcal{P}_{\ell},\tilde{\mathcal{S}}_{\ell}\right)\) represent the \(\ell\)-th mapping operators of the AlgNN and its perturbed version, it follows that_
\[\Big{\|}\Phi\left(\mathbf{x}_{\ell-1},\mathcal{P}_{\ell},\mathcal{S}_{ \ell}\right)-\Phi\left(\mathbf{x}_{\ell-1},\mathcal{P}_{\ell},\tilde{ \mathcal{S}}_{\ell}\right)\Big{\|}\leq\] \[C_{\ell}\delta\|\mathbf{x}_{\ell-1}\|m\left(L_{0}^{(\ell)}\sup_{ \mathbf{S}_{i,\ell}}\|\mathbf{T}^{(\ell)}(\mathbf{S}_{i,\ell})\|\right.\] \[\left.+L_{1}^{(\ell)}\sup_{\mathbf{S}_{i,\ell}}\|D_{\mathbf{T}^{( \ell)}}(\mathbf{S}_{i,\ell})\|\right), \tag{9}\]
_where \(C_{\ell}\) is the Lipschitz constant of \(\sigma_{\ell}\), and \(\mathcal{P}_{\ell}=\mathcal{A}_{L_{0}}\cap\mathcal{A}_{L_{1}}\) represents the domain of \(\rho_{\ell}\). The index \(\ell\) makes reference to quantities and constants associated to the layer \(\ell\)._
Proof.: See [1].
From Theorem 2, we can observe that the map \(\sigma_{\ell}\), i.e. the composition between the point-wise nonlinearity and the pooling operator, scales the stability bound derived for the filters but does not affect its functional structure.
Now, we present the stability result for the operators that entirely characterizes any non commutative AlgNN.
**Theorem 3**.: _Let \(\left\{\left(\mathcal{A}_{\ell},\mathcal{M}_{\ell},\rho_{\ell};\sigma_{\ell} \right)\right\}_{\ell=1}^{L}\) be an algebraic neural network and \(\left\{\left(\mathcal{A}_{\ell},\mathcal{M}_{\ell},\tilde{\rho}_{\ell};\sigma_{ \ell}\right)\right\}_{\ell=1}^{L}\) its perturbed version by means of the perturbation model in (5). We consider one feature per layer and non commutative algebras \(\mathcal{A}_{\ell}\) with \(m\) generators. If \(\Phi\left(\mathbf{x},\{\mathcal{P}_{\ell}\}_{1}^{L},\{\mathcal{S}_{\ell}\}_{1 }^{L}\right)\) and \(\Phi\left(\mathbf{x},\{\mathcal{P}_{\ell}\}_{1}^{L},\{\tilde{\mathcal{S}}_{ \ell}\}_{1}^{L}\right)\) represent the mapping operator and its perturbed version, it follows that_
\[\left\|\Phi\left(\mathbf{x},\{\mathcal{P}_{\ell}\}_{1}^{L},\{ \mathcal{S}_{\ell}\}_{1}^{L}\right)-\Phi\left(\mathbf{x},\{\mathcal{P}_{\ell} \}_{1}^{L},\{\tilde{\mathcal{S}}_{\ell}\}_{1}^{L}\right)\right\|\\ \leq\sum_{\ell=1}^{L}\mathbf{\Delta}_{\ell}\left(\prod_{r=\ell}^ {L}C_{r}\right)\left(\prod_{r=\ell+1}^{L}B_{r}\right)\left(\prod_{r=1}^{\ell-1 }C_{r}B_{r}\right)\left\|\mathbf{x}\right\|, \tag{10}\]
_where \(C_{\ell}\) is the Lipschitz constant of \(\sigma_{\ell}\), \(\|\rho_{\ell}(a)\|\leq B_{\ell}\) for all \(a\in\mathcal{P}_{\ell}\), and \(\mathcal{P}_{\ell}=\mathcal{A}_{L_{0}}\cap\mathcal{A}_{L_{1}}\) represents the domain of \(\rho_{\ell}\). The functions \(\mathbf{\Delta}_{\ell}\) are given by_
\[\mathbf{\Delta}_{\ell}=\delta m\left(L_{0}^{(\ell)}\sup_{\mathbf{S}_{i,\ell} }\|\mathbf{T}^{(\ell)}(\mathbf{S}_{i,\ell})\|\ +L_{1}^{(\ell)}\sup_{\mathbf{S}_{i,\ell}}\|D_{\mathbf{T}^{(\ell)}}( \mathbf{S}_{i,\ell})\|\right), \tag{11}\]
_with the index \(\ell\) indicating quantities and constants associated to the layer \(\ell\)._
Proof.: See [1].
One of the fundamental observations from Theorem 3 is that the AlgNN can be stable to perturbations, and that stability is inherited from the convolutional filters. This is expected since the maps \(\sigma_{\ell}\) only scale the stability bounds derived for the filters. However, it is important to remark that while the stability bounds derived for filters, layer operators and AlgNNs are essentially the same (with appropriate normalization of \(C_{\ell}\) and \(B_{\ell}\)[1]), the discriminability power of each of these operators is substantially different. This is one of the aspects that explains why convolutional networks perform better than filter banks [1, 2]. Because the AlgNN is enriched with pointwise nonlinearities, for the same level of discriminability AlgNNs are more stable than the corresponding convolutional filters. Additionally, notice that the redistribution of information in the frequency domain performed by \(\sigma_{\ell}\) occurs by mapping information between spaces of dimension larger than one.
## 7 Numerical Experiments
In this section, we demonstrate the stability properties of the non commutative filters employed in multigraph neural networks [9, 10]. Under the multigraph signal model, which is associated with the regular algebra of polynomials with multiple independent variables, the existence of subclasses of filters that are Lipschitz and Integral Lipschitz is guaranteed (Theorem 1). Therefore, it is possible to learn filters that can mitigate the effect of deformations on the shift operators. To do so, we train an architecture to learn filters using a cost function that adds a penalty based on the filter's Integral Lipschitz constants, and showcase that this model is much more resilient to perturbations enforced on the graph.
We consider the application of multigraph learning for a rating prediction task on the Jester dataset [24]. For our multigraph model, we consider two classes of edges connecting the jokes: one representing sentence similarity and the other modeling rating similarity amongst users in the training set. The multigraph is sparsified by keeping only the 20 strongest edges of each class per node. We arbitrarily selected the first joke for evaluating the performance of our predictions.
For our experiment, we compare the performance of a learned linear multigraph filter (MFilter), a multigraph neural network (MGNN), and another which is regularized by an estimate of the filter's Integral Lipschitz constant (MGNN IL). All architectures are composed of a convolutional layer with 3 filter taps and a readout layer mapping to a rating estimate. We report the root mean squared error (RMSE) and the standard deviation after training for 40 epochs across 5 random splits of the dataset.
To simulate stability towards estimation error of the shift operators, we first train each architecture on 90% of the training set. At evaluation, we replace the rating similarity shift operator with an estimate generated by a random subset of the training set ranging from 10% to 90% of the overall dataset and compare the difference in RMSE between the trained and evaluated models. As can be seen in Figure 2, the penalized model (MGNN IL) consistently maintains the smallest difference in error for each size of the test set, demonstrating the most stability towards estimation error.
## 8 Conclusions
We leveraged algebraic signal processing (ASP) to model general non commutative convolutional filters and showed that neural networks built upon these models can be stable to deformations. The results obtained apply to existing architectures such as multigraph-CNNs, quaternion-CNNs, quiver-CNNs and non commutative group-CNNs. We proved that non commutative convolutional neural networks can be stable, which is an attribute inherited from the stability of the filters. This is due to the fact that the inclusion of pointwise nonlinearities and pooling operators only change the stability bounds derived for the filters by a scalar factor. The differences between the Fourier representations of non commutative and commutative filters lead to stability conditions that are different for both scenarios. While in commutative signal models the stability conditions are imposed on scalar valued functions, in non commutative models such conditions are imposed on matrix polynomials. We demonstrated that the polynomial representations in non commutative models exhibit a similar trade-off between stability and selectivity.
Figure 2: Results of the stability experiment. |
2304.09575 | Approximate non-linear model predictive control with safety-augmented
neural networks | Model predictive control (MPC) achieves stability and constraint satisfaction
for general nonlinear systems, but requires computationally expensive online
optimization. This paper studies approximations of such MPC controllers via
neural networks (NNs) to achieve fast online evaluation. We propose safety
augmentation that yields deterministic guarantees for convergence and
constraint satisfaction despite approximation inaccuracies. We approximate the
entire input sequence of the MPC with NNs, which allows us to verify online if
it is a feasible solution to the MPC problem. We replace the NN solution by a
safe candidate based on standard MPC techniques whenever it is infeasible or
has worse cost. Our method requires a single evaluation of the NN and forward
integration of the input sequence online, which is fast to compute on
resource-constrained systems. The proposed control framework is illustrated
using two numerical non-linear MPC benchmarks of different complexity,
demonstrating computational speedups that are orders of magnitude higher than
online optimization. In the examples, we achieve deterministic safety through
the safety-augmented NNs, where a naive NN implementation fails. | Henrik Hose, Johannes Köhler, Melanie N. Zeilinger, Sebastian Trimpe | 2023-04-19T11:27:06Z | http://arxiv.org/abs/2304.09575v2 | # Approximate non-linear model predictive control
###### Abstract
Model predictive control (MPC) achieves stability and constraint satisfaction for general nonlinear systems, but requires computationally expensive online optimization. This paper studies approximations of such MPC controllers via neural networks (NNs) to achieve fast online evaluation. We propose safety augmentation that yields deterministic guarantees for convergence and constraint satisfaction despite approximation inaccuracies. We approximate the entire input sequence of the MPC with NNs, which allows us to verify online if it is a feasible solution to the MPC problem. We replace the NN solution by a safe candidate based on standard MPC techniques whenever it is infeasible or has worse cost. Our method requires a single evaluation of the NN and forward integration of the input sequence online, which is fast to compute on resource-constrained systems. The proposed control framework is illustrated on three non-linear MPC benchmarks of different complexity, demonstrating computational speedups that are orders of magnitude higher than online optimization. In the examples, we achieve deterministic safety through the safety-augmented NNs, where a naive NN implementation fails.
Non-linear model predictive control; approximate MPC; machine learning; constrained control
## 1 Introduction
Model predictive control (MPC) is an advanced control strategy for non-linear systems that provides theoretical guarantees for constraint satisfaction and stability [1]. Convenient software packages are available to implement the required constrained optimization problems and parse them for dedicated numerical solvers, e.g., [2]. However, practical applications remain limited by long computing times in resource-constrained systems and, for general non-linear systems, it is difficult to acquire a priori certification of computation time bounds. Neural networks (NNs) have been successfully trained to imitate (approximate) the behavior of MPC schemes [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], resulting in fast online sampling suitable for embedded applications. Yet, providing hard guarantees for constraint satisfaction and convergence for general, non-linear systems remains challenging for such approximate MPCs (cf. discussion in Sec. II).
### _Contribution_
We propose approximate MPC with safety augmented NNs (illustrated in Fig. 1), a novel algorithm that provides deterministic guarantees for feasibility and convergence. In particular, we approximate the optimal open-loop input sequence of MPC with a function approximator that is simple to evaluate (here, a NN). This allows us to check online whether the approximator provides a feasible and stabilizing solution to the MPC problem, and is thus safe. This approach requires a single forward simulation of the system dynamics, which incurs a small additional computational cost but remains significantly faster than MPC with full online optimization. We use standard MPC techniques online to generate a safe candidate input sequence based on a previous safe input sequence with appended terminal control law. If the NN solution is unsafe or does not improve the cost compared to the candidate, the candidate is applied. This approach ensures constraint satisfaction (safety) and convergence without any assumptions on the approximation quality.
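The online logic amounts to one NN evaluation, one forward simulation per candidate sequence, and a cost comparison. The Python sketch below is our pseudocode-level rendering of the scheme in Fig. 1; `f` (discrete-time dynamics), `constraints_ok`, `in_terminal_set`, `k_f` (terminal control law), `cost`, and `nn` are problem-specific placeholders, not names from this paper:

```python
import numpy as np

def rollout(f, x0, u_seq):
    """Forward-integrate the system dynamics under an input sequence."""
    xs = [x0]
    for u in u_seq:
        xs.append(f(xs[-1], u))
    return np.array(xs)

def is_feasible(xs, u_seq, constraints_ok, in_terminal_set):
    """Check that (xs, u_seq) is a feasible solution to the MPC problem."""
    return (all(constraints_ok(x, u) for x, u in zip(xs[:-1], u_seq))
            and in_terminal_set(xs[-1]))

def safety_augmented_step(x, u_cand, nn, f, constraints_ok,
                          in_terminal_set, k_f, cost):
    """One closed-loop step: use the NN sequence only if it is feasible and
    no worse than the safe candidate; otherwise fall back to the candidate."""
    u_nn = nn(x)                       # NN approximation of the MPC input sequence
    xs_nn = rollout(f, x, u_nn)
    xs_cand = rollout(f, x, u_cand)
    use_nn = (is_feasible(xs_nn, u_nn, constraints_ok, in_terminal_set)
              and cost(xs_nn, u_nn) <= cost(xs_cand, u_cand))
    u_seq, xs = (u_nn, xs_nn) if use_nn else (u_cand, xs_cand)
    # Next step's safe candidate: shift the applied sequence and append
    # the terminal control law (standard MPC candidate construction).
    u_next = np.vstack([u_seq[1:], np.atleast_2d(k_f(xs[-1]))])
    return u_seq[0], u_next
```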
If the NN approximation is poor, the safety augmentation would often _not_ use the NN. In an additional result, we therefore show that the naive NN would be safe if a robust MPC is sufficiently well approximated. We use this in a constructive way by approximating robust MPC schemes.
Numerical examples on three non-linear MPC benchmark systems illustrate that our approach: (i) reduces the overall computational time by several orders of magnitude compared to online optimization; and (ii) provides deterministic safety. The examples also demonstrate the importance of the proposed safety augmentation, as naively (always) applying the NN solution leads to non-negligible constraint violations in all three examples.
### _Outline_
This article continues by presenting related work on approximate MPC with guarantees in Section 2. After formalizing the control problem in Section 3, we develop our safety augmentation method in Section 4, the main contribution of this paper. In Section 5, we present an additional result: a sufficient condition for the NN solution to be safe by approximating a robust MPC. We provide numerical examples with three non-linear benchmark systems in Section 6 and the article concludes in Section 7.
### _Notation_
The set of integers is denoted by \(\mathbb{I}\), the set of integers from \(0\) to \(N\) by \(\mathbb{I}_{0:N}\), and the set of non-negative integers by \(\mathbb{N}\). By \(K_{\infty}\), we
Fig. 1: Approximate MPC with safety-augmented NN. Input sequences by a NN controller are checked during the proposed safe online evaluation (see Alg. 1). If they yield constraint satisfaction and stability, they are applied to the system. Otherwise, a safe candidate obtained from the last time step – shifted with appended terminal controller – is applied as fallback.
denote the class of functions \(\alpha:\mathbb{R}_{\geq 0}\rightarrow\mathbb{R}_{\geq 0}\) which are continuous, strictly increasing, satisfy \(\alpha(0)=0\), and are unbounded. For two sets, \(\mathcal{A}\) and \(\mathcal{B}\), \(\mathcal{A}\oplus\mathcal{B}\) is the Minkowski sum and \(\mathcal{A}\ominus\mathcal{B}\) is the Pontryagin difference. We denote the square diagonal matrix with elements \(v\in\mathbb{R}^{n}\) on the diagonal as \(\text{diag}(v)\in\mathbb{R}^{n\times n}\). \(I_{N}\) is the identity matrix of size \(N\).
## 2 Related work
Explicit solutions to MPC problems can be obtained for linear systems [21], but this is not straightforward for non-linear systems. While the idea of approximating non-linear MPCs to speed up online computation is not in itself new (cf. approaches using NNs [3, 4] or multi-parametric non-linear programming [22]), the resulting approximate MPC generally does not provide guarantees for constraint satisfaction (safety). In [23, 24], qualitative results for constraint satisfaction are derived, i.e., there exist a sufficiently small approximation error and set of initial conditions such that constraint satisfaction is achieved. However, with this result it is difficult to verify safety. In the following, we discuss existing methods to ensure constraint satisfaction with approximate MPC, including combination of robust MPC and probabilistic validation, warm starting of online optimization with the NN, additional safety filters, or constrained learning. Furthermore, we discuss classical results on suboptimal MPC, which have conceptual similarities to the proposed method although the problem setting is quite different.
**Robust with respect to approximation:** One way to account for inevitable approximation errors is to explicitly add robustness to the underlying MPC scheme. In [25], constraint satisfaction and stability are shown with Lipschitz-based constraint tightening (cf. also [26, 27]), but an admissible approximation error is typically not achievable in practice. In [5, 6], more advanced robustification (cf. [28, 29]) is used to handle realistic approximation errors by treating them as (hypothetical) input disturbance. To obtain bounds on the approximation error, statistical validation is proposed in [5] and applied in hardware experiments in [6]. The probabilistic guarantees are given by sampling independent identically distributed (i.i.d.) initial conditions, performing simulation in closed loop with the NN controller, and using Hoeffding's inequality to guarantee, with high probability, that the approximation error is smaller than the hypothetical disturbance. A similar statistical approach is used in [7] to validate safety. Due to the probabilistic nature of these approaches, a non-zero probability of faulty inputs remains, and safety is not deterministically guaranteed in closed loop. Furthermore, validating by sampling closed-loop trajectories is computationally expensive and the validation can be conservative. The method in [5, 6] also requires a robust MPC scheme which introduces conservatism, and the small, uniform bound on the approximation error can be hard to achieve for large systems in practice. By contrast, we provide deterministic guarantees without assumptions on the approximation error and our method is applicable to high dimensional systems.
**NN as warm start:** Using NN predictions as warm starting solutions to numerical solvers potentially reduces computation time, while the solver guarantees constraint satisfaction, meaning that standard MPC guarantees apply. A speedup by a factor of two is indeed observed for linear systems with active set solvers in [8] and for large-scale systems in [30]. However, this method requires online optimization, for which worst case computation time bounds can be long. Furthermore, in [9], warm starting with NN predictions was observed to increase the average computation time in non-linear numerical examples. Our approach differs from these methods in that we do not require online optimization and therefore achieve a considerably smaller computation time.
**Safety filters:** Safety-enforcing methods aim to modify the outputs of the NN by finding the closest safe inputs. For linear systems, one can consider an _explicit_ safety filter, which simply uses a projection based on control invariant sets. This is how the authors of [12, 13, 31] ensure NN controller safety. However, like computing control barrier functions, computing control invariant sets for general non-linear systems is challenging. This method is therefore limited to low-dimensional or linear systems, or it is conservative (yields a small feasible set). For linear systems, it is also possible to explicitly encode polytopic constraints in the NN approximation to ensure constraint satisfaction [18]. However, this approach does not apply to non-linear systems. These issues can be addressed by "predictive" safety filters, which formulate an optimal control problem that minimizes the difference between a safe and an unsafe input [32]. An MPC formulation with a terminal set that is invariant under a terminal controller serves as a safety certificate that the filtered control inputs are safe. Similar to safety filters, [11] proposes approximating a complete sequence of inputs of a non-linear MPC with an NN and solving an optimization problem online that finds the closest safe trajectory. Unfortunately, solving online optimization problems can be computationally expensive and does not admit a worst-case computation time bound. While such predictive safety filters can also be approximated with NNs [33], safety guarantees again require uniform approximation error bounds, similar to [5, 6]. Like predictive safety filters, our method uses a sequence of inputs and terminal ingredients to guarantee safety. However, we do not require online optimization.
**Constrained learning:** Instead of simply fitting a function approximator to imitate the behavior of an MPC (e.g., by supervised learning), constrained learning methods aim to improve performance and constraint satisfaction. To enforce constraint satisfaction of the NN (at least on the states visited during training), optimization of the MPC and training can be performed simultaneously, e.g., by considering MPC cost and constraints in the training loss via slack variables as in [14, 15], or with a loss on the dual of the MPC problem as in [11]. The authors of [16] propose another method for jointly optimizing the parameters of the NN approximator and solving the optimal control problem for non-linear systems, both with off-the-shelf non-linear solvers. In [17], the optimization problem is reformulated as a recurrent NN, which can be efficiently trained. All of these constrained learning methods aim for data efficiency and constraint satisfaction on the states visited during training. However, no guarantees are given for states not seen during training. Furthermore, the methods are only computationally tractable for a limited number of training points and moderate system sizes. In [10], constrained learning is used to approximate primal and dual MPC problems for linear systems by choosing the number of samples based on the scenario method. At evaluation time, primal feasibility (safety) is checked and suboptimality is quantified with the additional dual approximation. In case the approximate sequence is unsafe or "too" suboptimal, a separate, safe backup controller is invoked. Checking for primal feasibility online is also part of our method. However, the results in [10] are presented for linear systems and the extension to non-linear systems is not straightforward, e.g., for finding a backup controller. By contrast, our method is applicable to medium and large scale systems and provides deterministic guarantees without the need for a separate backup controller.
**Suboptimal MPC:** In non-linear MPC, it is not always necessary to solve optimization problems to optimality. As long as the optimized input sequence is feasible and reduces the value function compared to the candidate, the closed-loop system will be persistently feasible and stable, cf. a classic result [34]. In [35], stability for robust suboptimal MPC is directly enforced by a constraint on cost reduction
over the previous time step in the optimization problem. In [36], inherent robustness of a suboptimal MPC with warm starting is shown. Even though the setting of suboptimal MPC is quite different from the setting we consider here, we use a similar idea, i.e., we ensure persistent feasibility and convergence by providing a feasible, suboptimal control sequence from an NN that performs at least as well as the candidate sequence.
## III System Description
We consider general, non-linear, discrete time systems of the form
\[x(t+1)=f(x(t),u(t)) \tag{1}\]
with state \(x(t)\in\mathbb{R}^{n_{\text{x}}}\), input \(u(t)\in\mathbb{R}^{n_{\text{u}}}\), and \(t\in\mathbb{N}\). We assume \(f\) is continuous and consider compact, polytopic constraints of the form
\[\mathcal{X}=\{x\in\mathbb{R}^{n_{\text{x}}}\mid L_{\text{x}}x\leq 1\}, \quad\mathcal{U}=\{u\in\mathbb{R}^{n_{\text{u}}}\mid L_{\text{u}}u\leq 1\}. \tag{2}\]
The objective is to drive the system to \(x=0\) with \(f(0,0)=0\) and ensure constraint satisfaction \((x(t),u(t))\in\mathcal{X}\times\mathcal{U}\)\(\forall\)\(t\in\mathbb{N}\) for a large set of initial conditions. This objective can be achieved by MPC [1].
### III-A MPC problem
In MPC, a sequence of inputs \(\textbf{u}=[u_{0}^{\top},\ \ldots,\ u_{N-1}^{\top}]^{\top}\in\mathcal{U}^{N}\) is computed. The state predicted by applying the first \(k\) inputs of \(\textbf{u}\) to (1), starting at \(x\), is denoted by \(\phi(k;x;\textbf{u})\). We consider a finite-horizon cost of the form
\[V(x,\textbf{u})=\sum_{k=0}^{N-1}\ell(\phi(k;x;\textbf{u}),\textbf{u}_{k})+V_{\text{f}}(\phi(N;x;\textbf{u})), \tag{3}\]
with horizon \(N\), stage cost \(\ell(x,u)\), and terminal cost \(V_{\text{f}}(x)\). We assume \(V\) is continuous. We denote the set of feasible input trajectories from a given state \(x\) that are valid (suboptimal) solutions to the MPC optimization problem (5) by
\[\mathcal{U}^{N}(x)=\{\textbf{u}\in\mathcal{U}^{N}\mid\phi(k;x;\textbf{u})\in\mathcal{X},\,k\in\mathbb{I}_{0:N-1},\ \phi(N;x;\textbf{u})\in\mathcal{X}_{\text{f}}\}, \tag{4}\]
where \(\mathcal{X}_{\text{f}}\subseteq\mathcal{X}\) is the terminal set. The corresponding nominal MPC formulation is
\[V^{*}(x)=\min_{\textbf{u}\in\mathcal{U}^{N}(x)}V(x,\textbf{u}). \tag{5}\]
The feasible set of (5) is denoted by \(\mathcal{X}_{\text{feas}}\). Solving this optimization problem for the current value \(x\) gives a corresponding trajectory of optimal inputs **u**, which we denote by \(\Pi_{\text{MPC}}:\mathbb{R}^{n_{\text{x}}}\rightarrow\mathcal{U}^{N}\). Applying the first input of this sequence, \(\textbf{u}_{0}\), to the system yields a control policy of the form \(u=\kappa_{\text{MPC}}(x)\), which can be applied in closed loop to the system. We consider the following standard conditions for MPC (cf. [1]):
**Assumption 1**: _There exist a function \(\alpha_{\ell}\in\mathcal{K}_{\infty}\) and a feedback \(K_{\text{f}}:\mathcal{X}_{\text{f}}\rightarrow\mathcal{U}\) such that for any \(x\in\mathcal{X}\), \(u\in\mathcal{U}\):_
* \(\ell(x,u)\geq\alpha_{\ell}(\|x\|)\)_,_
* _the terminal set is positively invariant under the terminal controller_ \[x\in\mathcal{X}_{\text{f}}\implies f(x,K_{\text{f}}(x))\in\mathcal{X}_{\text {f}},\] (6)
* _the terminal cost is a control Lyapunov function for_ \(x\in\mathcal{X}_{\text{f}}\): \[V_{\text{f}}(f(x,K_{\text{f}}(x)))-V_{\text{f}}(x)\leq-\ell(x,K_{\text{f}}(x)).\] (7)
These conditions ensure stability of the closed loop system and persistent feasibility, i.e., if a solution to (5) exists at \(t=0\), then it exists for all times \(t\in\mathbb{N}\). Corresponding proofs alongside constructive methods to determine \(\mathcal{X}_{\text{f}}\), \(K_{\text{f}}\), and \(V_{\text{f}}\) can be found in [1]. Implementing MPC requires online optimization to solve (5), which can be computationally expensive. We circumvent this issue by approximating the MPC with NNs.
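To make the set-membership check in (4) concrete: deciding \(\textbf{u}\in\mathcal{U}^{N}(x)\) only requires forward simulation of (1) and evaluation of the polytopic constraints (2). The following is a minimal numpy sketch; the function names and the `in_terminal_set` predicate are our own illustration, not code from this work.

```python
import numpy as np

def rollout(f, x0, u_seq):
    """Forward-simulate x(k+1) = f(x(k), u(k)) along an input sequence.
    Returns the predicted states [phi(0;x0;u), ..., phi(N;x0;u)]."""
    xs = [np.asarray(x0, dtype=float)]
    for u in u_seq:
        xs.append(np.asarray(f(xs[-1], u), dtype=float))
    return np.stack(xs)

def is_feasible(f, x0, u_seq, L_x, L_u, in_terminal_set):
    """Check membership u_seq in U^N(x0) as defined in (4):
    inputs in U, predicted states in X, and terminal state in X_f."""
    xs = rollout(f, x0, u_seq)
    inputs_ok = all(np.all(L_u @ np.atleast_1d(u) <= 1.0) for u in u_seq)
    states_ok = all(np.all(L_x @ x <= 1.0) for x in xs[:-1])
    return inputs_ok and states_ok and in_terminal_set(xs[-1])
```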
### III-B Problem formulation: approximate MPC with guarantees
We approximate the MPC via an NN offline by first generating a dataset of initial conditions and corresponding optimal input sequences, i.e., solutions to (5), and then training the NN in a standard supervised learning fashion [4, 5, 6, 7, 8, 9]. The approximation \(\Pi_{\text{NN}}\approx\Pi_{\text{MPC}}\) maps a state \(x\) to a sequence of inputs **u** with horizon length \(N\) as
\[\Pi_{\text{NN}}:\mathbb{R}^{n_{\text{x}}}\rightarrow\mathcal{U}^{N}. \tag{8}\]
The approximation is not exact, \(\Pi_{\text{NN}}\neq\Pi_{\text{MPC}}\). Therefore, naively (always) applying the first input \(\textbf{u}_{0}\) of \(\textbf{u}(t)=\Pi_{\text{NN}}(x(t))\) to the system does not guarantee constraint satisfaction; that is, possibly \(x(t)\notin\mathcal{X}\) for some \(t\in\mathbb{N}\) even though \(x(0)\in\mathcal{X}_{\text{feas}}\) (see Sec. VI for examples where naively applying \(\Pi_{\text{NN}}\) fails).
_Problem statement:_ We seek to develop a safety augmentation such that for a given NN approximation, \(\Pi_{\text{NN}}\), of an MPC (5), the system is guaranteed to satisfy constraints and converge; that is, \(x(t)\in\mathcal{X}\) for all \(t\in\mathbb{N}\) and \(x(t)\to 0\) for \(t\rightarrow\infty\).
## IV Safe Online Evaluation
In this section, we address the problem stated above by augmenting the NN in a safe online evaluation. We check online if \(\Pi_{\text{NN}}(x(t))\) is a feasible solution to (5) and provide a deterministically safe feedback policy that chooses between the NN sequence, if it is safe, and a safe fallback candidate. The safety augmentation further ensures that the closed-loop system converges. This idea is implemented in Algorithm 1, which we now elaborate.
```
Require: \(\Pi_{\text{NN}}(\cdot)\), \(\mathcal{U}^{N}(\cdot)\), \(K_{\text{f}}(\cdot)\), \(V(\cdot,\cdot)\), \(\tilde{\textbf{u}}(0)=\textbf{u}_{\text{init}}\)
1:  for all \(t\in\mathbb{N}\) do
2:    \(x\gets x(t)\)  {measure state at time \(t\)}
3:    \(\hat{\textbf{u}}(t)\leftarrow\Pi_{\text{NN}}(x)\)  {evaluate approximation}
4:    if \(\hat{\textbf{u}}(t)\in\mathcal{U}^{N}(x)\) then  {check if \(\hat{\textbf{u}}(t)\) is feasible}
5:      \(\textbf{u}(t)\leftarrow\operatorname{argmin}_{\textbf{u}\in\{\hat{\textbf{u}}(t),\tilde{\textbf{u}}(t)\}}V(x,\textbf{u})\)  {choose min cost}
6:    else
7:      \(\textbf{u}(t)\leftarrow\tilde{\textbf{u}}(t)\)  {keep candidate}
8:    end if
9:    \(u(t)\leftarrow\textbf{u}_{0}(t)\)  {apply input to system}
10:   \(\tilde{\textbf{u}}(t+1)\leftarrow\{\textbf{u}(t)_{1:N-1},\,K_{\text{f}}(\phi(N;x;\textbf{u}(t)))\}\)  {create candidate sequence}
11:  end for
```
**Algorithm 1** Approximate MPC with safety-augmented NN
Algorithm 1 validates the input sequences predicted by the approximate MPC (8) and provides a safe fallback solution whenever the approximate solution is rejected. For each newly measured state \(x(t)\), the NN evaluation gives a predicted input sequence \(\hat{\textbf{u}}(t)\), which is checked for feasibility in line 4, i.e., \(\hat{\textbf{u}}(t)\in\mathcal{U}^{N}(x)\). This requires the system dynamics to be forward integrated along the prediction horizon (4). If \(\hat{\textbf{u}}(t)\) is feasible, we select in line 5 the sequence \(\textbf{u}(t)\) of minimum cost among \(\hat{\textbf{u}}(t)\) and the safe candidate \(\tilde{\textbf{u}}(t)\). Finally, the first input \(\textbf{u}_{0}(t)\) is applied to the system (1), and a new safe candidate is created by shifting \(\textbf{u}(t)\) and appending the terminal controller.
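For illustration, one iteration of Algorithm 1 can be sketched in Python as follows, assuming callables `pi_nn` (the NN (8)), `feasible` (the check \(\hat{\textbf{u}}\in\mathcal{U}^{N}(x)\), e.g., as sketched in Sec. III), `V` (the cost (3)), and `K_f` (the terminal controller); all names are illustrative.

```python
def safety_augmented_step(x, u_cand, pi_nn, f, V, feasible, K_f):
    """One iteration of Algorithm 1: returns the applied input and the
    shifted safe candidate for the next time step (lines 2-10)."""
    u_nn = pi_nn(x)                                       # line 3
    if feasible(x, u_nn) and V(x, u_nn) <= V(x, u_cand):  # lines 4-5
        u_sel = u_nn                                      # NN sequence is safe and cheaper
    else:
        u_sel = u_cand                                    # line 7: keep candidate
    # line 10: shift the selected sequence and append the terminal controller
    x_end = x
    for u in u_sel:                                       # x_end = phi(N; x; u_sel)
        x_end = f(x_end, u)
    u_cand_next = list(u_sel[1:]) + [K_f(x_end)]
    return u_sel[0], u_cand_next                          # line 9: apply first input
```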
For initialization of Algorithm 1, we state the following assumption:
**Assumption 2**: _The system is initialized in state \(x(0)\) with a safe candidate solution \(\mathbf{u}_{\text{init}}\in\ \mathcal{U}^{N}(x(0))\) in Algorithm 1._
**Remark 1**: _Possible means of initialization could be: (i) start in a steady state, where \(\mathbf{u}_{\text{init}}\) is easy to construct; (ii) initialize with \(\mathbf{u}_{\text{init}}=\Pi_{\text{MPC}}(x(0))\), which can be computed offline if the system is started in a known state \(x(0)\); (iii) initialize with \(\mathbf{u}_{\text{init}}=\Pi_{\text{NN}}(x(0))\), which satisfies Assumption 2 if \(\Pi_{\text{NN}}(x(0))\in\mathcal{U}^{N}(x(0))\)._
The following theorem summarizes the closed-loop properties of Algorithm 1.
**Theorem 1**: _Let Assumptions 1 and 2 hold. Then, the closed-loop system resulting from Algorithm 1 satisfies \((x(t),u(t))\in\mathcal{X}\times\mathcal{U}\) for all \(t\in\mathbb{N}\) and converges to the state \(x=0\)._
In the following, we first prove constraint satisfaction and then convergence for all \(t\in\mathbb{N}\).
**Part I:** Using induction, we show that \(\tilde{\mathbf{u}}(t)\in\mathcal{U}^{N}(x(t))\) for all \(t\in\mathbb{N}\). Induction start: this holds at \(t=0\) by Assumption 2. Induction step: we know that \(\mathbf{u}(t)\in\mathcal{U}^{N}(x(t))\), since \(\mathbf{u}(t)\) is selected only from feasible sequences in lines 4 to 8. Whenever \(\mathbf{u}(t)\in\mathcal{U}^{N}(x(t))\), Assumption 1 ensures that the safe candidate \(\tilde{\mathbf{u}}(t+1)\) constructed in line 10 is feasible, i.e., \(\tilde{\mathbf{u}}(t+1)\in\mathcal{U}^{N}(x(t+1))\) (cf. the standard candidate solution in the nominal MPC proof [1]). It follows by induction that \(\mathbf{u}(t)\in\mathcal{U}^{N}(x(t))\ \forall\ t\in\mathbb{N}\) and therefore \((x(t),u(t))\in\mathcal{X}\times\mathcal{U}\).
**Part II:** For the shifted candidate sequence in Alg. 1 line 10 it holds that
\[\begin{split}V(x(t+1),\tilde{\mathbf{u}}(t+1))=&\sum_{k=1}^{N-1}\ell(\phi(k;x(t);\mathbf{u}(t)),\mathbf{u}_{k}(t))\\ &+\ell(\phi(N;x(t);\mathbf{u}(t)),K_{\text{f}}(\phi(N;x(t);\mathbf{u}(t))))\\ &+V_{\text{f}}(\phi(N;x(t+1);\tilde{\mathbf{u}}(t+1))).\end{split} \tag{9}\]
By Assumption 1, it also holds that
\[\begin{split}V(x(t+1),\tilde{\mathbf{u}}(t+1))=&\,V(x(t),\mathbf{u}(t))-\ell(x(t),\mathbf{u}_{0}(t))\\ &+\ell(\phi(N;x(t);\mathbf{u}(t)),K_{\text{f}}(\phi(N;x(t);\mathbf{u}(t))))\\ &-V_{\text{f}}(\phi(N;x(t);\mathbf{u}(t)))+V_{\text{f}}(\phi(N;x(t+1);\tilde{\mathbf{u}}(t+1)))\\ \overset{(7)}{\leq}&\,V(x(t),\mathbf{u}(t))-\ell(x(t),\mathbf{u}_{0}(t))\leq V(x(t),\mathbf{u}(t))-\alpha_{\ell}(\|x(t)\|),\end{split} \tag{10}\]

since \(\phi(N;x(t+1);\tilde{\mathbf{u}}(t+1))=f(\phi(N;x(t);\mathbf{u}(t)),K_{\text{f}}(\phi(N;x(t);\mathbf{u}(t))))\). Because \(\mathbf{u}(t+1)\) is selected in lines 4 to 8 of Algorithm 1 with \(V(x(t+1),\mathbf{u}(t+1))\leq V(x(t+1),\tilde{\mathbf{u}}(t+1))\), summing (10) over time yields \(\sum_{t=0}^{\infty}\alpha_{\ell}(\|x(t)\|)\leq V(x(0),\mathbf{u}(0))<\infty\), hence \(\alpha_{\ell}(\|x(t)\|)\to 0\) and \(x(t)\to 0\) for \(t\to\infty\). \(\blacksquare\)

## V Robust MPC Design

If the NN prediction is frequently rejected in line 4 of Algorithm 1, the closed loop largely replays the shifted candidate sequence. To increase the fraction of accepted NN predictions, we approximate a robust MPC instead of the nominal formulation (5).

### V-A Robust MPC

We treat the approximation error of the NN as a hypothetical input disturbance \(d_{\text{w}}\in\mathcal{B}_{\epsilon}:=\{d\in\mathbb{R}^{n_{\text{u}}}\mid\|d\|_{\infty}\leq\epsilon\}\) and design a robust MPC

\[\Pi_{\text{RMPC}}(x)=\operatorname*{argmin}_{\bar{\textbf{u}}\in\bar{\mathcal{U}}^{N}(x)}V(x,\bar{\textbf{u}}) \tag{13}\]

with tightened constraints such that perturbed solutions remain feasible for the nominal problem (5):

\[\Pi_{\text{RMPC}}(x)+\boldsymbol{d}\in\mathcal{U}^{N}(x)\quad\forall\,\boldsymbol{d}\in\mathcal{B}_{\epsilon}^{N},\ x\in\bar{\mathcal{X}}_{\text{feas}}, \tag{14}\]

where \(\bar{\mathcal{X}}_{\text{feas}}\) denotes the feasible set of (13). A simple Lipschitz-based constraint tightening ensuring (14) is obtained as follows: define \(\bar{\epsilon}:=\max_{x\in\mathcal{X},u\in\mathcal{U},d_{\text{w}}\in\mathcal{B}_{\epsilon}}\|f(x,u+d_{\text{w}})-f(x,u)\|_{\infty}\) and \(\mathcal{B}_{\bar{\epsilon}}:=\{\tilde{d}_{\text{w}}\in\mathbb{R}^{n_{\text{x}}}\mid\|\tilde{d}_{\text{w}}\|_{\infty}\leq\bar{\epsilon}\}\). The tightened constraints are
\[\bar{\mathcal{U}}=\mathcal{U}\ominus\mathcal{B}_{\epsilon},\quad\bar{\mathcal{X}}_{k}=\mathcal{X}\ominus c_{k}\mathcal{B}_{\bar{\epsilon}},\quad c_{k}=\sum_{i=0}^{k-1}L_{\text{f}}^{i}, \tag{15}\]
where \(L_{\mathsf{f}}\) is the Lipschitz constant of \(f\) w.r.t. \(x\), i.e., \(\|f(x_{1},u)-f(x_{2},u)\|_{\infty}\leq L_{\mathsf{f}}\|x_{1}-x_{2}\|_{\infty}, \;\forall\;x_{1},x_{2}\in\mathcal{X},u\in\mathcal{U}\). The feasible set in (13) is
\[\bar{\mathcal{U}}^{N}(x)=\{\bar{\textbf{u}}\in\bar{\mathcal{U}}^{N}\mid\phi(k;x;\bar{\textbf{u}})\in\bar{\mathcal{X}}_{k},\,k\in\mathbb{I}_{0:N-1},\ \phi(N;x;\bar{\textbf{u}})\in\bar{\mathcal{X}}_{\text{f}}\}, \tag{16}\]
with an appropriate terminal set \(\bar{\mathcal{X}}_{\mathsf{f}}\subseteq\mathcal{X}_{\mathsf{f}}\), cf. [27, Thm. 1]. This Lipschitz-based method can be improved, e.g., using a more general feedback and tube parametrization [28], which we also use in the examples in Sec. VI. Many other robust MPC methods to satisfy (14) also exist, e.g., [29].
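As a small illustration of the Lipschitz-based tightening (15), the factors \(c_{k}\) and the tightened right-hand sides can be computed as below; the reduction \(1-c_{k}\bar{\epsilon}\|L_{\text{x},j}\|_{1}\) uses the standard support-function identity for the Minkowski difference of a polytope and an \(\infty\)-norm ball, and the function names are ours.

```python
import numpy as np

def tightening_factors(L_f, N):
    """c_k = sum_{i=0}^{k-1} L_f^i from (15), with c_0 = 0 (no tightening at k=0)."""
    return np.array([sum(L_f ** i for i in range(k)) for k in range(N)])

def tightened_rhs(L_x, c_k, eps_bar):
    """Right-hand side of the tightened polytope X ⊖ c_k * B_eps_bar:
    row j of {L_x x <= 1} becomes L_{x,j} x <= 1 - c_k * eps_bar * ||L_{x,j}||_1."""
    return 1.0 - c_k * eps_bar * np.abs(L_x).sum(axis=1)
```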
### V-B Guarantees for approximations of robust MPC
In this section, we use a robust MPC (see Sec. V-A) to train an NN \(\Pi_{\text{NN}}\) using supervised learning as before. Then, we provide conditions under which the safety augmentation (Alg. 1) does not reject \(\Pi_{\text{NN}}\). For an approximation with a bound on the approximation error, we can state the following lemma.
**Lemma 2**: _Let \(\Pi_{\text{RMPC}}\) satisfy (14) and suppose the approximation error of the NN satisfies \(\|\Pi_{\text{NN}}(x)-\Pi_{\text{RMPC}}(x)\|_{\infty}\leq\epsilon\ \forall\ x\in\bar{\mathcal{X}}_{\text{feas}}\). Then, for all \(x\in\bar{\mathcal{X}}_{\text{feas}}\): \(\Pi_{\text{NN}}(x)\in\mathcal{U}^{N}(x)\)._
Let \(\boldsymbol{d}(x)=\Pi_{\text{NN}}(x)-\Pi_{\text{RMPC}}(x)\); then \(\boldsymbol{d}(x)\in\mathcal{B}_{\epsilon}^{N}\ \forall\ x\in\bar{\mathcal{X}}_{\text{feas}}\). As \(\Pi_{\text{RMPC}}\) satisfies (14), it follows that \(\Pi_{\text{RMPC}}(x)+\boldsymbol{d}(x)=\Pi_{\text{NN}}(x)\in\mathcal{U}^{N}(x)\), i.e., the check in Algorithm 1 line 4 is always satisfied. \(\blacksquare\)
Lemma 2 allows us to design an approximate MPC that - under Algorithm 1 - will often be applied to the system and therefore mitigates potentially long periods of simply replaying the safe candidate. We emphasize that deterministic safety holds by Theorem 1 even if the conditions in Lemma 2 do not hold. The \(\epsilon\) approximation of \(\Pi_{\text{RMPC}}\) is similar to prior work [5, 6], where the \(\epsilon\) approximation is validated offline by performing simulations in closed loop with an NN controller, giving probabilistic safety guarantees directly. In our setting, we do not require the uniform \(\epsilon\) approximation error bound to hold everywhere, because safety is already guaranteed by Algorithm 1, making it considerably simpler to find an approximator for large systems. In comparison to [5, 6], Lemma 2 serves as a sufficient condition, e.g., on a subset of sampled states, and we can use it in a constructive way to find "good" approximations \(\Pi_{\text{NN}}\) in our numerical examples.
**Remark 3**: _Lemma 2 does not guarantee that the proposed Algorithm 1 will always apply the NN solution, because the candidate in line 5 could have lower cost. However, whenever the NN sequence is feasible, it is also safe to always select it in line 5._
## VI Numerical Examples
In this section, we demonstrate our method on three common numerical example systems for non-linear MPC: a continuous stir tank reactor [5, 37], a quadcopter model [28, 38], and a chain of hanging masses connected by springs [39]. First, the system dynamics and MPC formulations are summarized for all three systems. The NNs are trained using supervised learning by generating optimal input sequences of a robust MPC design (Sec. V) over a large set of random initial states. Finally, we compare our proposed approach, approximate MPC via safety-augmented NNs (Sec. IV), with online optimization and always (naively) applying the NN approximation. We benchmark the safety augmented NN with respect to computation time of online optimization and constraint satisfaction of the naive NN. The code for all examples is available at [https://github.com/hshose/soeampc.git](https://github.com/hshose/soeampc.git).
### VI-A Implementation details
We now describe some details of the MPC formulations (terminal ingredients and robustification), dataset generation, and the learning of approximations with NNs. For all three examples, we use a terminal set \(\mathcal{X}_{\mathsf{f}}=\{x\mid\|P_{\mathsf{f}}^{0.5}x\|\leq\alpha\}\), terminal cost \(V_{\mathsf{f}}=x^{\top}P_{\mathsf{f}}x\), and terminal controller \(u=K_{\mathsf{f}}x\), with parameters computed as in [40]. For simple robustification with respect to (hypothetical) input disturbances \(d_{\mathsf{w}}\), we use a tube defined by \(\dot{s}=-\rho s+\bar{w}\) and \(s(0)=0\), where \(\rho\) is a contraction rate and \(\bar{w}\) is a disturbance bound. The tightened constraints are of the form \(L_{\mathsf{x},j}x+c_{j}s\leq 1\), \(j\in\mathbb{I}_{1:n_{\text{con}}}\), where \(n_{\text{con}}\) is the number of constraints. The terminal set is tightened such that at time \(N\) the tube is inside the nominal terminal set. The tube parameters \(\rho\) and \(\bar{w}\), the stabilizing feedback \(K_{\mathsf{f}}\), and the constraint tightening factors \(c_{j}\) are computed following [28, 41]. The stabilizing feedback \(u=K_{\mathsf{f}}x+v\), where \(v\) is the optimization variable of the RMPC, is substituted directly into the MPC formulation. Additional details regarding the robust MPC design, including specific linear matrix inequalities (LMIs) for the terminal ingredients and robustification parameters, can be retrieved from the code.
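For the tube described above, the scaling \(s\) admits a closed-form expression at the sampling instants, which makes the tightened constraints easy to evaluate; the following sketch uses our own notation and assumes a constant disturbance bound \(\bar{w}\).

```python
import numpy as np

def tube_size(rho, w_bar, Ts, N):
    """Tube scaling at the sampling instants t_k = k*Ts: the linear ODE
    s' = -rho*s + w_bar with s(0) = 0 has solution s(t) = (w_bar/rho)*(1 - exp(-rho*t))."""
    t = Ts * np.arange(N + 1)
    return (w_bar / rho) * (1.0 - np.exp(-rho * t))

def tube_constraint_ok(x_k, s_k, L_x, c):
    """Tightened state constraint at step k: L_{x,j} x + c_j * s_k <= 1 for all rows j."""
    return bool(np.all(L_x @ x_k + c * s_k <= 1.0))
```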
In the next step, the offline dataset generation is performed using the acados toolbox [2] - a fast sequential quadratic programming (SQP) solver for non-linear programs (NLPs) - with HPIPM as quadratic program (QP) solver for the stir tank and quadcopter, and DAQP for the chain mass example. Maximum SQP iterations are set to \(400\) in the case of the quadcopter and \(1\times 10^{3}\) for the stir tank and chain mass system. We initialize the solver with a forward simulation of the clipped terminal controller. We only consider samples for NN training for which acados produced a feasible solution. A full list of solver settings is available in the code.
After the dataset generation, NN approximations are trained using Tensorflow with the Adam optimizer. We use fully connected feedforward NNs with hyperbolic tangent activations. The input layer has size \(n_{\mathsf{x}}\), while the output layer has \(n_{\mathsf{u}}\cdot N\) neurons with linear activations. The number and sizes of the hidden layers were determined empirically by increasing the depth and width until a reasonable approximator was found. The additive disturbance \(d_{\mathsf{w}}\) is a design parameter in the robust MPC design that we use to facilitate learning a feasible solution (Sec. V). The \(d_{\mathsf{w}}\) in our examples was determined empirically as a tradeoff between simplification of the approximation problem and conservativeness, starting with small assumed disturbances and slowly increasing them until the NN gives a feasible solution to (5) for a reasonable percentage of test points.
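A minimal Tensorflow/Keras sketch of such an approximator (hyperbolic tangent hidden layers, linear output of size \(n_{\mathsf{u}}\cdot N\), Adam optimizer) could look as follows; the layer widths correspond to the quadcopter and chain mass networks below, and the training call in the comment is illustrative.

```python
import tensorflow as tf

def build_policy_net(n_x, n_u, N, hidden=(200, 400, 600, 600, 400, 200)):
    """Fully connected NN mapping a state (size n_x) to a flat input
    sequence (size n_u*N), with tanh hidden layers and a linear output."""
    model = tf.keras.Sequential()
    for i, width in enumerate(hidden):
        kwargs = {"input_shape": (n_x,)} if i == 0 else {}
        model.add(tf.keras.layers.Dense(width, activation="tanh", **kwargs))
    model.add(tf.keras.layers.Dense(n_u * N))  # linear output layer
    model.compile(optimizer=tf.keras.optimizers.Adam(), loss="mse")
    return model

# Supervised training on (initial state, optimal input sequence) pairs, e.g.:
# model.fit(X_train, U_train, batch_size=1024, epochs=200)
```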
### VI-B Benchmark systems
In the following, we provide details of the dynamics and parameters of the three benchmark systems.
#### VI-B1 Stir tank reactor
The non-linear stir tank reactor with \(x\in\mathbb{R}^{2}\) and \(u\in\mathbb{R}\) is used as a benchmark in previous work on MPC [5, 37]. The continuous-time system dynamics are given by
\[\dot{x}=\begin{bmatrix}\frac{1-x_{1}}{\theta}-kx_{1}\exp\left(-\frac{M}{x_{2}}\right)\\ \frac{x_{\text{f}}-x_{2}}{\theta}+kx_{1}\exp\left(-\frac{M}{x_{2}}\right)-\gamma u(x_{2}-x_{\text{c}})\end{bmatrix} \tag{17}\]
with parameters \(\theta=20\), \(k=300\), \(M=5\), \(x_{\text{f}}=0.3947\), \(x_{\text{c}}=0.3816\), and \(\gamma=0.117\). The goal is to drive the system to the unstable stationary point \(x_{\text{e}}=[0.2632,0.6519]^{\top}\) with input \(u_{\text{e}}=0.7853\). The constraints are \(\|x-x_{\text{e}}\|_{\infty}\leq 0.2\) and \(0\leq u\leq 2\). Initial states are
sampled from the constraint set. The sampling time is \(T_{\text{s}}=2\), and we use a horizon of \(N=10\) steps. For the MPC, we choose quadratic cost with weights \(Q=I_{2}\) and \(R=10^{-5}\). The terminal ingredients are computed as in [5]. We use \(|d_{w,2}|\leq 1\cdot 10^{-4}\) on state \(x_{2}\) and fit an NN with \(2\) hidden layers, \(50\) neurons per layer.
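For concreteness, the stir tank dynamics (17) and a standard fourth-order Runge-Kutta discretization can be written as follows (a sketch with our own names; the actual pipeline uses acados integrators).

```python
import numpy as np

THETA, K_REACT, M_ACT = 20.0, 300.0, 5.0
X_F, X_C, GAMMA = 0.3947, 0.3816, 0.117

def cstr_rhs(x, u):
    """Continuous-time stir tank dynamics (17)."""
    reaction = K_REACT * x[0] * np.exp(-M_ACT / x[1])
    return np.array([
        (1.0 - x[0]) / THETA - reaction,
        (X_F - x[1]) / THETA + reaction - GAMMA * u * (x[1] - X_C),
    ])

def rk4_step(rhs, x, u, dt):
    """One classical Runge-Kutta step (e.g., dt = T_s = 2, or smaller substeps)."""
    k1 = rhs(x, u)
    k2 = rhs(x + 0.5 * dt * k1, u)
    k3 = rhs(x + 0.5 * dt * k2, u)
    k4 = rhs(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```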
#### VI-B2 Quadcopter
We use a ten-state quadcopter model with an RMPC formulation as in [28], with the original system dynamics and parameters from [38]. The dynamics are
\[\begin{aligned}\dot{x}_{i}&=v_{i},\ i\in\mathbb{I}_{1:3}, &\dot{v}_{i}&=g\tan(\phi_{i}),\ i\in\mathbb{I}_{1:2},\\ \dot{v}_{3}&=-g+\frac{k_{T}}{m}u_{3}, &\dot{\phi}_{i}&=-d_{1}\phi_{i}+\omega_{i},\ i\in\mathbb{I}_{1:2},\\ \dot{\omega}_{i}&=-d_{0}\phi_{i}+n_{0}u_{i},\ i\in\mathbb{I}_{1:2},\end{aligned} \tag{18}\]
with state \(x=[x_{1},x_{2},x_{3},v_{1},v_{2},v_{3},\phi_{1},\omega_{1},\phi_{2},\omega_{2}]^{\top}\in\mathbb{R}^{10}\) and input \(u=[u_{1},u_{2},u_{3}]^{\top}\in\mathbb{R}^{3}\). The steady-state input is \(u_{\text{e},3}=\frac{gm}{k_{T}}\). The constants are \(d_{0}=80\), \(d_{1}=8\), \(n_{0}=40\), \(k_{T}=0.91\), \(m=1.3\), and \(g=9.81\). Constraints are \(x_{1}\leq 0.145\) (a wall), \(|\phi_{1,2}|\leq\frac{g}{3}\), \(|u_{1,2}|\leq\frac{g}{3}\), and \(u_{3}\in[0,2g]\). The quadratic state and input cost terms have weights \(Q=\text{diag}([200\cdot 1_{3},1_{3},0.01\cdot 1_{4}])\) and \(R=\text{diag}([8,8,0.8])\). Initial conditions are randomly sampled from \(|x_{1,2}|\leq 2.5\), \(x_{1}\leq 0.145\), \(|x_{3}|\leq 3\), \(|v_{1,2}|\leq 3\), \(|v_{3}|\leq 5\), \(|\phi_{1,2}|\leq\frac{g}{3}\), and \(|\omega_{1,2}|\leq 3\pi\). The sampling time is \(T_{\text{s}}=0.1\)s with a horizon \(N=10\). The terminal set and controller are computed by solving LMIs as described in detail in [28, 40]. We choose \(\|d_{w}\|_{\infty}\leq 5\cdot 10^{-3}\) and fit an NN with 6 hidden layers and \([200,400,600,600,400,200]\) neurons.
#### VI-B3 Chain mass
The chain mass system is a standard benchmark problem for non-linear MPC introduced in [39]: a chain of \(M\) point masses in three dimensional space, connected by springs, where the first is fixed and the last can be freely moved by the input. The dynamics of the system are given by:
\[\dot{p}_{i} =v_{i},\,i\in\mathbb{I}_{2:M-1}\qquad\dot{p}_{M}=u \tag{19a}\] \[\dot{v}_{i} =\frac{1}{m}(F_{i,i+1}-F_{i-1,i})+g,\,i\in\mathbb{I}_{2:M-1}\] (19b) \[F_{i,i+1} =D(1-L/\left\|p_{i+1}-p_{i}\right\|)(p_{i+1}-p_{i}), \tag{19c}\]
with \(D=1\), \(L=0.033\), \(m=0.033\), and gravitational acceleration \(g=[0,0,-9.81]^{\top}\). The state is \(x=[p_{2}^{\top},\ldots,p_{M}^{\top},v_{2}^{\top},\ldots,v_{M-1}^{\top}]^{\top}\in\mathbb{R}^{6M-9}\) and the input \(u=v_{M}\in\mathbb{R}^{3}\). The objective is to move the masses into an equilibrium position, where the last free mass is at position \([1,0,0]\). There is a wall constraint for all masses such that \(p_{y,i}\geq-0.1\). The input is constrained to \(\|u\|_{\infty}\leq 1\). Initial states are sampled from \(\|p_{i}\|_{\infty}\leq 0.25\) and \(\|v_{i}\|_{\infty}\leq 0.5\). The sampling time is \(T_{\text{s}}=0.133\)s with a horizon of \(N=15\) for \(M=3\) masses. The quadratic state and input cost terms are \(Q=\text{diag}([1_{3(M-2)},25\cdot 1_{3},1_{3(M-2)}])\) and \(R=I_{3}\). We choose \(\|d_{w}\|_{\infty}\leq 5\cdot 10^{-3}\). The terminal set and constraint tightening are computed as for the previous examples. We fit an NN with 6 hidden layers and \([200,400,600,600,400,200]\) neurons.
### VI-C Results
In the following, we present the results for the three benchmark systems. First, we show the training effort and approximation quality achieved. Second, performance in terms of computation time is compared with online optimization. Finally, we show that our method is safe in all three examples, while naively (always) applying the NN leads to constraint violations in closed loop.
#### VI-C1 Dataset generation and NN training
We now describe the size and computation times of generated datasets, give an indication of NN sizes, and provide NN training times for all three systems. To gain some intuition into how well each approximator generalizes, it is tested on \(10^{5}\) points not used during training by forward simulating and checking whether the NN provides a feasible solution to (5). The results are summarized in Table I.
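The "solutions feasible" column of Table I corresponds to the following simple metric, reusing a feasibility check for (5) as in Algorithm 1 line 4; `feasible` and `n_u` are assumptions of this sketch.

```python
import numpy as np

def feasible_fraction(model, test_states, feasible, n_u):
    """Percentage of held-out states for which the NN output is a feasible
    (suboptimal) solution to (5) -- the last column of Table I."""
    u_pred = model.predict(test_states)  # shape (n_test, n_u*N)
    ok = [feasible(x, u.reshape(-1, n_u)) for x, u in zip(test_states, u_pred)]
    return 100.0 * float(np.mean(ok))
```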
In the stir tank example, a relatively small NN approximates the dataset well (more than 99 % of predictions are valid solutions to (5) on test data), which is in line with numerical results in [5]. For the ten-state quadcopter example, a dataset corresponding to approximately 5 gridpoints per state dimension suffices to fit a relatively large approximator, which yields a feasible solution to the MPC problem for over 70 % of initial conditions. Contributing factors could be that the quadcopter system is almost linear (the only non-linearity is the \(\tan(\phi)\) in the velocity derivative), is only loosely coupled in the three Cartesian directions by the MPC cost function, and has few constraints that become active. The chain mass system, on the other hand, is "very" non-linear and chaotic, which makes the approximation problem inherently harder. This is reflected in a large dataset and approximator, which yield a feasible sequence for less than 10 % of initial conditions during testing.
The total time required to evaluate samples of the MPC on random initial conditions is the dominant offline cost; however, this step is trivially parallelizable. We used the RWTH high-performance compute cluster with about 100 nodes in parallel and 24 cores per node to generate the datasets within a few hours. In our examples, NN training is performed on a single GPU. While training is fast for the stir tank and quadcopter, the chain mass example requires over 5 days of training, which is significant. However, this could be improved by finding better hyperparameters and distributing the training across multiple GPUs.
#### VI-C2 Online computation times
We next evaluate the online computation times for the proposed method and compare them with solving the NLP online. The results of this comparison are summarized in Table II.
Computation times for our method include NN inference in Tensorflow and forward simulation of the system dynamics using acados implicit fourth-order integration. Both of these have very low variance and we therefore only report the mean value. The combined online evaluation time is on the order of \(1\,\mathrm{ms}\). This is about three orders of magnitude faster than solving the NLP online with acados and about as fast as a single SQP iteration. We note that the high number of required iterations is partially due to the terminal set constraint required for persistent feasibility and the lack of warm starting. Yet, our method provides deterministic safety, while a single SQP iteration might return an infeasible solution to (5). The NLP solver running to convergence or to the maximum number of iterations has a much higher variance
| | dataset points \([10^{6}]\) | dataset comp. time \([10^{3}\) core-h] | NN parameters | NN training time [h] | solutions feasible [%] |
| --- | --- | --- | --- | --- | --- |
| stir tank | 0.9 | 0.5 | \(3\times 10^{3}\) | 4 | 99 |
| quadcopter | 9.6 | 4.1 | \(1\times 10^{6}\) | 37 | 72 |
| chain mass | 19.1 | 17.5 | \(1\times 10^{6}\) | 127 | 9.5 |

TABLE I: Dataset and NN approximator overview: number of points and cumulative offline computation time for the dataset generation, and trainable parameters and training time for the NN approximator. "Solutions feasible" is the percentage of test points for which the NN gives a feasible solution to (5).
in computation time due to the variable number of SQP and underlying QP iterations.
#### VI-C3 Constraint satisfaction (safety)
To evaluate the effectiveness of our safety augmentation (Alg. 1), we compare the closed-loop system behavior with naively (always) applying the NN solution. We randomly sample initial states and use the first \(10^{5}\) points for which the MPC has a feasible solution, i.e., initial conditions where we know that (5) is initially feasible. We simulate the system dynamics and check constraint satisfaction in closed loop for a (sufficiently large) finite time. The results are summarized in Table III.
With the proposed safety augmented NNs, the systems adhere to constraints in closed loop - they are _deterministically_ safe. By contrast, naively applying the NN solution results in constraint violations for all three systems, underscoring the importance of the safety augmentation. Even when the NN provides an initially feasible solution to (5), i.e., \(\Pi_{\text{NN}}(x(0))\in\mathcal{U}^{N}(x(0))\), it might not be safe to naively evaluate it for all times in closed loop. An illustrative example of catastrophic failure (a crash into the wall) when naively applying the NN to the quadcopter system is depicted in Fig. 2: the NN controller is feasible for the first \(0.6\,\mathrm{s}\) of the closed-loop experiment. Afterwards, the proposed safety augmentation detects some unsafe NN input sequences and applies the safe candidate solution. The naive implementation that always applies the new NN input crashes into the wall \(0.3\,\mathrm{s}\) later. The safety augmentation continues to apply NN predictions when safe and successfully evades the wall constraint.
Table III also summarizes how often our method rejects the NN prediction and chooses the safe candidate solution in closed loop. It further states the reasons for rejection, which can be state or terminal constraint violation (safety) in Algorithm 1 line 4, or failure to decrease the cost (convergence) in line 5. A trajectory can be rejected for multiple reasons. The results in Table III show that all three reasons for rejection can be relevant, depending on the specific system and constraints. For example, in the stir tank system, constraint violation is rarely the reason for applying the safe candidate; in the quadcopter system, state constraint violation is a major reason for rejecting NN predictions; and in the chain mass system, the terminal constraint is almost always the reason for applying the safe candidate.
The numerical examples given in this section demonstrate that the proposed method is fast to evaluate and provides deterministic guarantees for safety, while naively always applying the NN solution can be unsafe and performing online optimization is much slower.
## VII Conclusion
We propose approximate non-linear MPC with safety augmented NNs, which provides deterministic safety and convergence in closed loop despite approximation errors. We approximate entire input sequences and check online whether an input sequence by
| | NN inference [ms] | forward simulation [ms] | **total (ours) [ms]** | NLP mean [ms] | NLP max [ms] | single SQP iter. [ms] |
| --- | --- | --- | --- | --- | --- | --- |
| stir tank | 0.5 | 0.5 | **1.0** | 430 | 1060 | 0.7 |
| quadcopter | 0.7 | 0.6 | **1.3** | 1592 | 9561 | 2.4 |
| chain mass | 0.8 | 0.8 | **1.6** | 2209 | 3208 | 2.1 |

TABLE II: Online computation times, comparing the proposed method with the online NLP solver acados [2]. Our method is three orders of magnitude faster than solving the MPC problem (5) and about as fast as a single iteration of the solver (which alone does not guarantee constraint satisfaction).
| | safe simulations, naive NN [%] | safe simulations, **ours** [%] | candidate applied [%] | state constr. [%] | terminal constr. [%] | cost [%] |
| --- | --- | --- | --- | --- | --- | --- |
| stir tank | 99.9 | **100** | 48.8 | 0.2 | 0.0 | 99.9 |
| quadcopter | 84.8 | **100** | 8.16 | 72.5 | 2.6 | 40.6 |
| chain mass | 98.9 | **100** | 98.9 | 0.1 | 99.8 | 41.6 |

TABLE III: Safety augmented vs. naive NN in closed loop. Starting from random initial conditions in the feasible set of the MPC, the naive NN results in constraint violations, while our proposed method is always safe. The last four columns give the percentage of closed-loop time steps at which the safety augmentation applies the safe candidate sequence and the associated reasons (state constraint, terminal constraint, or cost; multiple reasons can apply).
Figure 2: Exemplary closed-loop simulation of naive and safety augmented NN. Naively (always) applying \(\Pi_{\text{NN}}\) (dotted line) for \(1.1\,\mathrm{s}\), the quadcopter crashes (\(\times\)) into the wall at \(x_{1}=0.145\,\mathrm{m}\). With the safety-augmented NN presented in Algorithm 1 (solid), a safe candidate is chosen over unsafe \(\Pi_{\text{NN}}\) predictions for three time steps (\(\bullet\)) and the system is guaranteed to adhere to constraints.
the NN is feasible and improves cost compared to a safe candidate sequence. This candidate sequence is constructed from the shifted previous trajectory with appended terminal controller. This approach combines the benefits of online computations (simulating system dynamics to check for safety) and offline learning (avoiding online optimization). In three numerical examples, we illustrate how our method provides safety and improves overall computation time by three orders of magnitude compared to solving NLPs online with state of the art solvers. The proposed approach enables fast control on embedded platforms, which is relevant for robotics, for example. Future work will investigate such applications. Future improvements to the proposed method could consider additional disturbances and tailored training methods to improve sample efficiency.
|
2305.07671 | LatentPINNs: Generative physics-informed neural networks via a latent
representation learning | Physics-informed neural networks (PINNs) are promising to replace
conventional partial differential equation (PDE) solvers by offering more
accurate and flexible PDE solutions. However, they are hampered by the
relatively slow convergence and the need to perform additional, potentially
expensive, training for different PDE parameters. To solve this limitation, we
introduce latentPINN, a framework that utilizes latent representations of the
PDE parameters as additional (to the coordinates) inputs into PINNs and allows
for training over the distribution of these parameters. Motivated by the recent
progress on generative models, we promote the use of latent diffusion models to
learn compressed latent representations of the PDE parameters distribution and
act as input parameters to NN functional solutions. We use a two-stage training
scheme in which, in the first stage, we learn the latent representations for the
distribution of PDE parameters. In the second stage, we train a
physics-informed neural network over inputs given by randomly drawn samples
from the coordinate space within the solution domain and samples from the
learned latent representation of the PDE parameters. We test the approach on a
class of level set equations given by the nonlinear Eikonal equation. We
specifically share results corresponding to three different sets of Eikonal
parameters (velocity models). The proposed method performs well on new phase
velocity models without the need for any additional training. | Mohammad H. Taufik, Tariq Alkhalifah | 2023-05-11T16:54:17Z | http://arxiv.org/abs/2305.07671v1 | # LatentPINNs: Generative physics-informed neural networks via a latent representation learning
###### Abstract
Physics-informed neural networks (PINNs) are promising to replace conventional partial differential equation (PDE) solvers by offering more accurate and flexible PDE solutions. However, they are hampered by the relatively slow convergence and the need to perform additional, potentially expensive, training for different PDE parameters. To solve this limitation, we introduce latentPINN, a framework that utilizes latent representations of the PDE parameters as additional (to the coordinates) inputs into PINNs and allows for training over the distribution of these parameters. Motivated by the recent progress on generative models, we promote the use of latent diffusion models to learn compressed latent representations of the PDE parameters distribution and act as input parameters to NN functional solutions. We use a two-stage training scheme in which, in the first stage, we learn the latent representations for the distribution of PDE parameters. In the second stage, we train a physics-informed neural network over inputs given by randomly drawn samples from the coordinate space within the solution domain and samples from the learned latent representation of the PDE parameters. We test the approach on a class of level set equations given by the nonlinear Eikonal equation. We specifically share results corresponding to three different sets of Eikonal parameters (velocity models). The proposed method performs well on new phase velocity models without the need for any additional training.
## 1 Introduction
Partial differential equations (PDEs) provide mathematical formulations that describe many physical systems. They have been the cornerstone of scientific and engineering applications for decades. In practice, these equations are often discretized and used to solve for the state field or the unknown physical parameters (e.g., velocity, pressure, heat, etc.). With complex boundaries and large computational domains, conventional PDE solvers developed over the years (e.g., finite-difference and finite element methods) suffer from general inefficiency and require complex discretization procedures. Neural network-based PDE solvers have shown considerable promise in addressing these limitations, as well as providing more efficient solutions. In this family of approaches, neural networks can be utilized to either learn the governing PDE operator (neural operators) (Li et al., 2020; Lu et al.,
2019; Li et al., 2020; Kovachki et al., 2021) or be regularized (by the governing PDE) to learn a specific PDE solution (neural instance solvers) (Raissi et al., 2019; Sirignano and Spiliopoulos, 2018). Neural operators often require the solution labels (obtained, for example, from conventional numerical solvers), in which the training process is then performed in a supervised manner. Thus, at inference time, neural operators can accommodate different realizations of the PDEs. This comes, however, with heavy computational costs for large-scale problems, as in this case, acquiring the labels themselves might require significant computing power. Several attempts have been made to relax the data-driven nature of neural operators by incorporating physical laws into the neural operators (Li et al., 2021); these additional constraints, however, still render the solution mesh dependent and generally inefficient to train. One notable example of neural instance solvers, termed physics-informed neural networks (PINNs), on the other hand, is driven by the governing PDE at hand (independent of numerical PDE solvers). At inference time, it provides the functional representation for a specific PDE parameter. These parametric deep neural networks (DNNs) address the limitations of numerical methods by facilitating flexible (mesh-independent) solutions that are generally more accurate than their numerical counterparts, thanks to the functional nature of the derivative evaluations.
In this quest to fully replace conventional PDE solvers, the current PINNs are facing two main challenges. Firstly, randomly initialized PINNs often require thousands of iterations (epochs) to converge, massively increasing the cost of attaining the PDE solution. To this end, several attempts have been made to address the training dynamics as well as improve its efficiency. Fang (2021) promoted the use of a local fitting approach to approximate the governing PDE operator and improve PINNs' efficiency. Wang et al. (2022) proposed the use of a causality constraint that results in better convergence of PINNs from some initial condition. Huang and Alkhalifah (2022) promoted the use of a higher-dimensional embedding space of the coordinate input to improve PINNs' convergence. Huang and Alkhalifah (2023a); Rzepecki and Doran (2022) used alternative embedding layers and, thus, demonstrated significant computational speed-ups. Moreover, a slow convergence can also be attributed to the challenging training dynamics. One major challenge comes from the need to appropriately balance the various loss terms during the PINNs training. Schiassi et al. (2021); Huang and Alkhalifah (2023b); Taufik et al. (2023a) promote the use of hard-constrained data loss functions to alleviate the burden of this balancing procedure. Secondly, PINNs are trained for specific PDE parameters; a different parameter set requires additional PINN training. To accommodate different PDE parameters efficiently, several authors suggest the use of transfer learning (Goswami et al., 2020; bin Waheed et al., 2021) and meta learning (Penwarden et al., 2021; Liu et al., 2022; Huang et al., 2022). For transfer learning to reduce the cost of training effectively, the difference between the new PDE parameters and the original ones on which the PINN was trained needs to be small. Such a restrictive condition might end up requiring almost identical training costs to that of randomly initialized PINNs. Meta learning, on the other hand, comes with the objective of finding the best starting point to accelerate the PINNs training for new PDE parameters. Both of these approaches, unlike neural operators, still require additional training for new PDE parameters.
In this paper, we introduce the use of latent representation learning to PINNs. Informed by this additional latent representation, the trained PINNs can be regarded as generative models that produce solutions to PDEs as a function of latent variables (representing the PDE parameters). We investigate the possibility of avoiding additional training for new PDE parameters using a pre-training and fine-tuning scheme. In the experiments, we test our framework on a nonlinear PDE with various datasets and share the challenges we faced.
The main contributions of this paper can be summarized as follows:
* We introduce the use of latent representation learning to physics-informed neural networks.
* We promote the use of autoencoder-based models to learn the latent representation of PDE parameters that can be combined with state-of-the-art diffusion models for functional data representation.
* We test the proposed algorithm on three datasets to solve a nonlinear PDE without the need to perform additional training for new PDE parameters.
The organization of the paper is as follows. We first discuss the related works in the next section. We then provide the theoretical background and the proposed framework in Section 3, followed by numerical experiments in Section 4. Finally, we discuss the current challenges and future directions, as well as conclude in the last two sections.
## 2 Related Work
### Generalizing PINNs
There have been many attempts to pre-train PINNs so that they converge faster for new PDE parameters. Transfer learning was suggested, in which the PINNs are initialized with the weights obtained from training on a previous set of PDE parameters (bin Waheed et al., 2021). In transfer learning, the change in the PDE parameters needs to be small to improve convergence compared to starting with random initialization. Alternatively, meta learning was adopted in PINN training by finding initial weights that can work for a range of tasks or PDE parameters (Penwarden et al., 2021; Liu et al., 2022; Huang et al., 2022). Meta learning is, however, costly considering the pretraining on a range of parameters. In all of these methods, additional training is required to fine-tune the PINN to provide accurate solutions for a new set of PDE parameters. In other words, these methods do not necessarily reduce the requirement for an extra PINNs training step for different PDE parameters.
### Generative models with PINNs
Uncertainty estimation has been the primary goal of incorporating generative models into physics-informed neural networks (Yang and Perdikaris, 2018; Zhu et al., 2019; Yang et al., 2020; Linka et al., 2022; Oszkinat et al., 2022). For such purposes, the generative models are utilized such that stochasticity can be included in the otherwise _deterministic_ vanilla PINNs. Huang et al. (2022) introduces the use of an auto-decoder network to accelerate PINNs training from a meta-learning perspective, where different PDE parameters can be viewed as different tasks. Though a latent representation is also included as an additional input parameter, the trained PINN still requires additional training for new PDE parameters. From the perspective of reduced order modeling, Kim et al. (2022) promotes the use of a masked autoencoder to learn the lower-dimensional manifold of the PDE parameters; they use the trained decoder with existing conventional PDE solvers, instead of neural networks, to solve the PDE. Finally, Zou and Karniadakis (2023) introduces the use of normalizing flows to perform density estimation for further downstream tasks. Extra training for new PDE parameters, however, is still required for those tasks.
### Incorporating latent vectors into PDE solvers
Almost any machine learning algorithm inherently tries to learn a hidden _latent_ representation of the input data that can be mapped into an output for a certain task. In a more restrictive manner, these latent variables can be constructed (and later accessed) such that they represent the data with a significant reduction in size. Ranade et al. (2021) incorporates latent vectors with existing numerical solvers to generalize the PINNs solutions. The idea of operating on a much lower-dimensional space for solving PDEs has been around under the banner of model order reduction (Lumley, 1967). Motivated by this idea, Wu et al. (2022) accelerates PINNs for solving large-scale time-dependent problems. Similarly, Kontolati et al. (2023) incorporates such an idea into a class of neural operators to achieve better accuracy.
To the best of our knowledge, we promote the first attempt to make PINNs learn solutions for a certain PDE parameter distribution without the need for additional training to accommodate new PDE parameters within the distribution. In such an approach, we aim to incorporate a generative feature into the trained PINNs model, such that it can generate various PDE solutions for various PDE parameters.
## 3 Methodology
Consider a general parametric PDE in the form of
\[u_{t}(\mathbf{x},t;\mathbf{\xi})+\mathcal{A}(u(\mathbf{x},t;\mathbf{\xi}))=\mathcal{S}(\mathbf{x },t;\mathbf{\xi}),\quad\mathcal{B}(u;\mathbf{\xi})=0,\quad t\in[0,T],\quad\mathbf{x}\in \Omega,\quad\mathbf{\xi}\in\mathcal{P}, \tag{1}\]
where \(u(\mathbf{x},t;\mathbf{\xi})\) denotes the desired field as a function of the spatio-temporal variables \([\mathbf{x},t]\), in which \(u_{t}\) corresponds to its first-order temporal derivative. Here, \(\mathbf{\xi}\) denotes the PDE parameters, and \(\Omega\) is a subset of a \(d\)-dimensional domain of real numbers, \(\mathbb{R}^{d}\). \(\mathcal{A}\) and \(\mathcal{S}\) denote the non-linear operator and the source function, respectively, applied on the computational domain \(\Omega\), while \(\mathcal{B}\) defines the boundary operator applied on \(\partial\Omega\) (the surrounding surface). Conventional PINNs then try to learn a mapping from the input (\(\mathbf{x},t\)) to the PDE solution \(u\) at that input location. This mapping is facilitated by a neural network \(\mathcal{N}(\mathbf{x},t;\mathbf{\xi},\mathbf{\theta})\), where \(\mathbf{\theta}\) denotes the network's parameters. The output of this network will then be regarded as a surrogate to the PDE solution \(\hat{u}\). During the PINNs' training, the PDE residuals and their boundary conditions are incorporated into the objective function in the form of
\[\mathcal{L} =\mathcal{L}_{\text{PDE}}+\mathcal{L}_{\text{boundary}}, \tag{2}\] \[=||\hat{u}_{t}(\mathbf{x},t;\mathbf{\xi})+\mathcal{A}(\hat{u}(\mathbf{x},t; \mathbf{\xi}))-\mathcal{S}(\mathbf{x},t;\mathbf{\xi})||_{2}^{2}+||\mathcal{B}(\hat{u}(\mathbf{ x},t;\mathbf{\xi}))||_{2}^{2}. \tag{3}\]
To this end, once the training is performed for specific PDE parameters \(\mathbf{\xi}\), additional training is required (either by transfer learning or within a meta learning framework) to obtain PDE solutions for other PDE parameters.
### Latent Representation Learning
The objective of most representation learning algorithms is to find a compressed representation of a dataset without losing the key information that distinguishes the samples in the dataset. Given an input \(\mathbf{x}\), the objective function used for representation learning can be broadly formulated as
\[\mathcal{L}=\mathcal{L}_{\text{reconstruction}}+\mathcal{L}_{\text{regularization}}. \tag{4}\]
Regularization is often introduced to balance the trade-off between semantic and perceptual compression (Esser et al., 2021). The reconstruction term can come in many flavors depending on the choice of the network architecture. In the simplest case, autoencoder networks (Rumelhart et al., 1985; Hinton and Salakhutdinov, 2006) utilize deep neural networks to both encode (the input data) and decode (the latent vector). Though they perform well in dimensionality reduction, the learned latent vectors do not represent the data distribution well for generative tasks. To solve this limitation, variational autoencoders (Kingma and Welling, 2013) resort to a Bayesian representation of the latent vector. Instead of mapping between vectors, they map the input vectors to their latent distributions. This results in a better learned posterior (data) distribution. Generative adversarial networks (GANs) (Goodfellow et al., 2020) replace this encoder-decoder scheme with a discriminator-generator pair that are trained in an adversarial manner. This results in a more representative learned posterior distribution than the variational autoencoder at the cost of more challenging training dynamics (e.g., the mode collapse pathology). The rise of score-based diffusion models (Song and Ermon, 2019; Song et al., 2020, 2021), which outperform GANs in representing the data distribution, inspired us to incorporate such powerful generative models into physics-informed neural networks and thereby introduce a generative feature into them.
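To make the generic objective (4) concrete for the Kullback-Leibler (KL) regularized autoencoder used in the next subsection, a minimal Tensorflow sketch of the two loss terms and the reparameterization trick follows; it assumes a diagonal Gaussian posterior against a standard normal prior, and all names are illustrative.

```python
import tensorflow as tf

def reparameterize(z_mean, z_logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I)."""
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_logvar) * eps

def vae_losses(x, x_rec, z_mean, z_logvar):
    """Reconstruction and KL terms of (4) for a diagonal Gaussian posterior
    N(z_mean, exp(z_logvar)) against a standard normal prior."""
    rec = tf.reduce_mean(tf.reduce_sum(tf.square(x - x_rec), axis=-1))
    kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
        1.0 + z_logvar - tf.square(z_mean) - tf.exp(z_logvar), axis=-1))
    return rec, kl
```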
### PINNs with latent space inputs
Motivated by the recent success of latent-based models (Rombach et al., 2022), we introduce a two-stage training scheme into the PINNs training. During the first stage, a Kullback-Leibler divergence regularized autoencoder is utilized to learn the compressed latent representation of the PDE parameters. The resulting latent vectors (representing a Gaussian distribution mapping of the PDE parameters) will then be incorporated as an additional input into the PINNs training (second stage).
Figure 1 illustrates the overall workflow of the proposed method. As opposed to only using the position coordinates to train the PINNs \(\{\mathbf{x},\mathbf{x}_{s}\}\), we also incorporate the latent vectors \(\mathbf{z}\) as an additional input parameter. Specifically, the multi-layer perceptron used as the main PINN model outputs the PDE solution denoted by \(\hat{\tau}\). The PINN objective function, \(\mathbb{J}\), is formulated by the PDE residual and parameterized by the lateral derivatives \(\partial_{h}\hat{\tau}\) computed using backpropagation through the network. In this study, as we will see later, we consider modeling a solution for a class of level set equations given by the Eikonal equation. The objective then becomes predicting the PDE solution (traveltime fields) given knowledge of the PDE parameter (medium phase velocity fields \(v\)). We replace the true PDE parameters by the reconstruction result from the autoencoder \(\hat{v}\).
## 4 Numerical tests
In this section, we describe the setup of the problem, specifically the partial differential equation we aim to solve and the neural network model used to solve the problem, including its training parameters. Finally, we share the results of testing our approach.
### The PDE problem
We test the proposed method on a level-set-based PDE, often used as a tool for numerical analysis of surfaces and shapes. Specifically, the level-set model often performs numerical computations for surfaces on a fixed Cartesian grid without the need to parameterize these objects (Osher and Sethian, 1988). Considering a surface, like a wavefront, moving with speed \(V\) in the direction normal to the surface, the level set function \(\phi\) satisfies the following nonlinear first-order PDE:
\[\frac{\partial\phi(\mathbf{x},T)}{\partial T}=V(\mathbf{x})|\nabla\phi(\mathbf{x},T)|, \quad\mathbf{x}\in\mathbb{R}^{3}, \tag{5}\]
where \(T\) is the traveltime, and the operator \(|.|\) is the Euclidean norm.
In wave theory, the level set function \(\phi\) represents the phase of a wave, defining its geometrical shape in 2D or 3D as a function of time. In the high frequency asymptotic approximation, \(\phi(\mathbf{x},T)=\omega T(\mathbf{x})\), and equation 5 reduces to its Eikonal form as follows
\[\left(\nabla T(\mathbf{x})\right)^{2}=\frac{1}{V^{2}(\mathbf{x})},\quad\mathbf{x}\in \mathbb{R}^{3}, \tag{6}\]
where the solution of the PDE is represented by the traveltime field, \(T\). Thus, the PDE is parameterized by the phase velocity of the medium \(V\), which can be inhomogeneous.
In the following cases, we consider the problem of solving the Eikonal equation, which is equivalent to traveltime modeling for an inhomogeneous medium phase velocity model representing the PDE parameters. We, specifically, aim to solve the equation for a point source in a 2D Cartesian computational domain (\(\textbf{x}\in\{x,z\}\in\mathbb{R}^{2}\)). To mitigate source-related singularity, which is a known problem in solving eikonal equations numerically, we first substitute the traveltime field \(T\) with \(T_{0}\hat{\tau}\), known as the factorization approach. For details of its implementation in PINNs, we refer the readers to Taufik et al. (2023b). Here, we consider a domain of size 5 km (\(x\in[0,5]\)) horizontally and 1 km (\(z\in[0,1]\)) vertically. The velocity models are discretized in this domain with 128 points along each axis. For all examples, the source is located in the middle (\(\textbf{x}_{s}\in\{2.5,0.5\}\) km).
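As a rough illustration, the factored eikonal residual that enters the PINN loss can be written as below. We assume the common choice of \(T_{0}\) as the analytic traveltime for a homogeneous medium with the source-location velocity \(v_{s}\); the paper defers the exact factorization details to Taufik et al. (2023b), so this is a sketch rather than the authors' implementation:

```python
import torch

def factored_eikonal_residual(net, x, z, xs, zs, v, v_s):
    # x, z: (N, 1) collocation coordinates; xs, zs: source location (floats);
    # v: (N, 1) velocity at the collocation points; v_s: velocity at the source.
    x = x.requires_grad_(True)
    z = z.requires_grad_(True)
    tau = net(torch.cat([x, z], dim=-1))           # predicted factor tau_hat
    # Analytic background traveltime T0 for a homogeneous medium of speed v_s.
    d = torch.sqrt((x - xs) ** 2 + (z - zs) ** 2) + 1e-8
    T0, T0_x, T0_z = d / v_s, (x - xs) / (v_s * d), (z - zs) / (v_s * d)
    tau_x, tau_z = torch.autograd.grad(tau.sum(), (x, z), create_graph=True)
    # Chain rule on T = T0 * tau_hat, then the residual of equation 6.
    T_x = T0_x * tau + T0 * tau_x
    T_z = T0_z * tau + T0 * tau_z
    return T_x ** 2 + T_z ** 2 - 1.0 / v ** 2
```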
### The neural network setup for training
As described earlier, we incorporate a two-stage training workflow for latentPINN before direct inference on new PDE parameters (velocity models). In the first stage, we train an autoencoder to
Figure 1: The proposed workflow of the latentPINN.
learn a latent representation of the PDE parameters (velocity models), and in the second stage, we train a PINN that includes an additional latent-variable input. For the three examples below, we use a latent dimension of 96 at the bottleneck of the fully-connected layer.
The autoencoder used for the first stage of training is made up of an encoder part that takes three-channel images of size 128x128 pixels and performs downsampling across five convolutional blocks to yield images of size 4x4. Specifically, each convolutional block consists of a 2D transposed convolution layer of stride size 2 followed by a Gaussian error linear unit (GELU) activation function and a 2D convolution of stride size 1. To construct the convolutional block for the decoder part, we use layers identical to those in the encoder, in reverse order, including the GELU activation function. To obtain the latent vectors, a fully-connected layer with a Tanh activation is used at the end of the encoder network. For the training, an Adam optimizer is used with an initial learning rate of 1e-3 and a learning rate scheduler reducing the learning rate by half every 20 stagnating (non-decreasing reconstruction loss value) epochs. Finally, by incorporating a diffusion model into the trained autoencoder, we showcase the ability of latentPINN to facilitate a more representative sampling of the PDE parameters. This setup is generally used for all three PDE parameter sets tested below, except for the example where we make some changes to the autoencoder model.
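A compact PyTorch sketch of this autoencoder is given below. The channel widths, kernel sizes, and KL weight are our assumptions (the text fixes only the block structure, the number of blocks, the GELU/Tanh activations, and the 96-dimensional latent), and we read the stride-2 layer as downsampling in the encoder and transposed-convolution upsampling in the decoder:

```python
import torch
import torch.nn as nn

def enc_block(c_in, c_out):
    # One encoder block: a stride-2 layer halves the resolution, followed
    # by GELU and a stride-1 convolution (our reading of the paper's block).
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, 2, 1), nn.GELU(),
                         nn.Conv2d(c_out, c_out, 3, 1, 1))

def dec_block(c_in, c_out):
    # Decoder block mirroring the encoder; the transposed conv upsamples.
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, 2, 1), nn.GELU(),
                         nn.Conv2d(c_out, c_out, 3, 1, 1))

class KLAutoencoder(nn.Module):
    """Five blocks map 3x128x128 <-> Cx4x4; 96-d latent with a Tanh head."""
    def __init__(self, latent_dim=96, w=(32, 64, 128, 256, 256)):
        super().__init__()
        c = (3,) + tuple(w)
        self.enc = nn.Sequential(*[enc_block(c[i], c[i + 1]) for i in range(5)])
        self.mu = nn.Sequential(nn.Flatten(), nn.Linear(w[-1] * 16, latent_dim),
                                nn.Tanh())
        self.logvar = nn.Sequential(nn.Flatten(), nn.Linear(w[-1] * 16, latent_dim))
        self.fc_up = nn.Linear(latent_dim, w[-1] * 16)
        self.dec = nn.Sequential(*[dec_block(c[i + 1], c[i])
                                   for i in reversed(range(5))])
        self.w_last = w[-1]

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterise
        out = self.dec(self.fc_up(z).view(-1, self.w_last, 4, 4))
        return out, mu, logvar

def ae_loss(x, x_hat, mu, logvar, beta=1e-6):
    # Reconstruction plus a (weakly weighted) KL regularizer towards N(0, I).
    rec = ((x - x_hat) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
    return rec + beta * kl
```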
In the second stage, we train a modified PINN that takes in, in addition to the coordinates of the medium, the latent variables that represent the PDE parameters (the velocity models). We also utilize the decoder to reconstruct the velocity model used in the loss function governed by the PDE in equation 6 (or, specifically, its factored variation) to train the PINN. We use the decoder output instead of the original model in the PINN loss function because, though the two are close, the decoder output is a function of the latent vector through the decoder. We utilize a multi-layer perceptron network with an input layer of size 100 (\(\mathbf{x}\), \(\mathbf{x}_{s}\), and the 96 latent vector variables \(\mathbf{z}\)), an output layer of size 1 (\(\hat{\tau}\)), 12 hidden layers of 128 neurons each, and exponential linear unit (ELU) activation functions. An Adam optimizer is used with an initial learning rate of 5e-4 and a scheduler that reduces the learning rate by half every 200 stagnating (non-decreasing PDE loss value) epochs. We randomly select 9830 collocation points as the training samples. We train the network with 100 velocity models for 10k epochs with a batch size of 163 points. We randomly sample these velocities (by their latent vectors) such that every iteration corresponds to solving for a specific velocity field. Again, this setup for the latentPINN is used to solve the eikonal equation for all three tests below, except that we make some changes for the last test.
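The second-stage network and a single training iteration might look as follows; `residual_fn` stands in for the factored eikonal residual sketched earlier, and the bookkeeping around sampling collocation points and latent vectors is simplified:

```python
import torch
import torch.nn as nn

class LatentPINN(nn.Module):
    # 12 hidden ELU layers of width 128; the input stacks the 2D location x,
    # the 2D source location x_s, and the 96 latent variables z (100 in all).
    def __init__(self, latent_dim=96, width=128, depth=12):
        super().__init__()
        layers = [nn.Linear(4 + latent_dim, width), nn.ELU()]
        for _ in range(depth - 1):
            layers += [nn.Linear(width, width), nn.ELU()]
        self.net = nn.Sequential(*layers, nn.Linear(width, 1))

    def forward(self, coords, zvec):          # coords: (N, 4), zvec: (N, 96)
        return self.net(torch.cat([coords, zvec], dim=-1))

def train_step(pinn, decoder, optimizer, coords, zvec, residual_fn):
    # Decode the sampled latent vector to the velocity model used in the
    # PDE loss, then minimise the mean squared eikonal residual.
    v = decoder(zvec)
    loss = (residual_fn(pinn, coords, zvec, v) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```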
All experiments were performed on a single NVIDIA Quadro RTX 8000 GPU card utilizing the PyTorch machine learning framework (Paszke et al., 2019). We assess the accuracy of the learned traveltime functions by comparing their predictions with the numerical solutions. We also test the accuracy of the PINN solution by comparing the reconstructed velocity models (obtained by plugging the predicted PDE solutions \(T\) into equation 6) to the output of the decoder \(\hat{V}\).
### Numerical Results
The goal here is to test the ability of the latent implementation to generalize on test PDE parameter examples not used in the training of latentPINN. To do so, we consider three sets of PDE parameters (phase velocity model distributions): 1) A set drawn from Gaussian random fields, 2) a set drawn from a Mechanical MNIST distribution, and finally, 3) a set related to examples of layering inside the Earth.
#### 4.3.1 Gaussian Random Fields
In the first scenario, we use Gaussian random fields (GRFs) to generate the set of PDE parameters for learning the latent space and training the latentPINN. The isotropic 2D spatial Gaussian random field is given by
\[\mathbf{x}=\mathcal{F}^{-1}\left(C\cdot n^{2}\cdot\sqrt{2}\cdot\tau^{\frac{1}{2}(2\alpha-2)}\cdot\left(4\pi^{2}\left(k_{x}^{2}+k_{z}^{2}\right)+\tau^{2}\right)^{-\alpha/2}\right),\quad\mathbf{x}\in\mathbb{R}^{2},\quad C\sim\mathcal{N}(0,\mathrm{I}), \tag{7}\]
where an inverse Fourier transform (\(\mathcal{F}^{-1}\)) is performed on a correlation function parameterized by the pixel dimension \(n\), spatial wavenumbers \(k_{x},k_{z}\in[0,\frac{n}{2}]\), and the parameters \(\tau\) and \(\alpha\) controlling the variance through the exponent (\(\sigma=\tau^{\frac{1}{2}(2\alpha-n)}\), where \(\sigma\) is the intended variance). Specifically, we
generated 50k image samples (\(\sim\mathcal{N}(0,\text{I})\)) of dimension 128x128 with \(\tau=7\), \(\alpha=2.5\), and a periodic boundary condition. Out of the 50k three-channel image samples, 39k were used for the training set, 1k for validation, and the rest for testing. These near-zero-mean images, ranging between -1 and 1, are then converted into velocity fields by first adding a constant of 2 km/s and then multiplying by a factor of 2; the resulting velocity range is thus [2,6] km/s. Figure 2(a) shows samples out of the 50k generated velocity fields.
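A minimal NumPy sketch of this sampler, following equation 7, is shown below; the normalisation to \([-1,1]\) before the velocity mapping is our assumption:

```python
import numpy as np

def grf_velocity(n=128, alpha=2.5, tau=7.0, seed=0):
    """Sample one isotropic 2D GRF (equation 7) and map it to velocity."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
    kx, kz = np.meshgrid(k, k, indexing="ij")
    # Power-law spectrum; tau and alpha control variance and smoothness.
    coef = (np.sqrt(2.0) * tau ** (0.5 * (2 * alpha - 2))
            * (4 * np.pi ** 2 * (kx ** 2 + kz ** 2) + tau ** 2) ** (-alpha / 2))
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.real(np.fft.ifft2(coef * noise)) * n ** 2
    field /= np.abs(field).max()                  # ~[-1, 1] (our normalisation)
    return 2.0 * (field + 2.0)                    # [2, 6] km/s, as in the text
```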
For this PDE parameter distribution, we train an autoencoder to learn the corresponding latent representation for 30 epochs. Though slightly smoothed out, the well-reconstructed images from the test set in Figure 2(b) reflect that it has learned a compressed latent representation. Out of the 39k training samples, 100 are used during the second stage, the main PINN training. Shown in the left-most column of each plot in Figures 3 and 4 are velocities (out of the 100 samples) used during the PINN training, while the rest correspond to unseen (testing) velocities. Going from left to right in each row, we order the testing samples by increasing point-wise distance in the latent space. With such an order, we expect to have more dissimilar images with
Figure 2: Reconstruction analysis on three different datasets; Gaussian random fields (a and b), Mechanical MNIST examples (c and d), and realistic Earth models (e and f). The left column includes the original samples drawn from the testing set (a,c,e), and the right column includes the reconstructed samples (b,d,f).
Figure 3: Predicted PDE solutions (traveltimes, solid black lines), compared with conventional solvers (dashed yellow lines) from the three datasets; a) Gaussian random fields, b) Mechanical MNIST, and c) realistic Earth models. The leftmost column of each plot corresponds to solutions from a seen training PINN sample (velocity model), while the rest of the columns correspond to samples from the unseen testing set.
increasing point-wise distance. We can see in Figures 4(b) and 3(a) that the trained PINNs generalize well to new velocities. It is noteworthy that the solutions for the velocities from the test set (of the Gaussian random field set) are obtained without any additional training, unlike for vanilla PINNs (Figure 5(a)).
#### 4.3.2 Mechanical MNIST
Next, we consider a slightly more complex dataset in the form of solutions to the Cahn-Hilliard equation (Cahn and Hilliard, 1958). This fourth-order parabolic PDE describes the phase-separation dynamics of a two-phase mixture. In our case, we utilize this dataset to understand the behavior of our approach in the face of a sparser (in pixel values) data distribution. We consider the use of a smoothed version of the Mechanical MNIST dataset (Kobeissi and Lejeune, 2022). We consider training, validation, and testing sets of sizes 15784, 1000, and 9420, respectively. The three-channel
Figure 4: Accuracy analysis on three different datasets from the reconstructed PDE parameters (using equation 6). The left column plots show the original samples drawn (a,c,e), and the right column plots include the reconstructed samples (b,d,f). The left most samples from each plot correspond to velocity models from the training set; the rest of the columns include results for unseen PDE parameters (from the testing set).
128x128 images are converted into velocity fields using the same addition and multiplication constants as in the previous case (\(=2(x+2)\) km/s, where \(x\) is the Mechanical MNIST pixel value, between -1 and 1). Figure 2(c) illustrates samples drawn from the training set.
Using the same training configuration and autoencoder model as in the previous GRF case, we obtain after training well-reconstructed image samples from the testing set in Figure 2(d). We then assess the learned latent vectors via the main PINN training. For this training, we utilize a similar configuration as in the previous case with a slight modification: considering the complexity of this set (more variance), we double the number of hidden layers (from 12 to 24). We train the network for 10k epochs with a batch size of 163 points. We test the accuracy of the predicted PDE solutions on test samples given by the four rightmost columns in Figures 4(c) (input) and 4(d) (reconstruction). We utilize the trained encoder to provide latent vectors for the training of the PINN, and we use the decoder to provide the velocity model used in the PDE loss function. There is no clear distinction in efficiency between randomly initialized and transfer-learning training (Figure 5(b)). This might be related to the inherent limitations of the eikonal equation in dealing with such a complex PDE parameter set (velocity models). As we are dealing with sharper boundaries between the two phases in the Mechanical MNIST dataset, the PINN training becomes slightly more challenging than for the previous, smoother spatial Gaussian random fields. Nonetheless, our trained latentPINN still provides new PDE solutions given new PDE parameters without additional PINN training.
#### 4.3.3 Realistic Earth Models
Finally, we consider a PDE parameter distribution for applications in traveltime modeling of the Earth's subsurface in the form of realistic synthetic subsurface velocity models (Figure 2(e)). We compile patches of images from several 2D velocity images (Billette and Brandsberg-Dahl, 2005; Martin et al., 2006) and 3D velocity cubes (Aminzadeh et al., 1997; Naranjo et al., 2011), resulting in 70,522 256x256 three-channel images. The dataset entails a wide range of geological settings, from highly faulted models to anomalous geological bodies (e.g., salts and low-velocity layers). We utilize a 90-5-5 percentage split to generate the training-validation-testing datasets. Figures 2(e) and 4(e) illustrate samples drawn from the datasets.
Figure 5: Loss curves comparison between our approach and several training mechanisms. The top row (a to c) compares the required additional epochs to train for a new PDE parameter between the trained latentPINN (blue stars) and conventional PINN training from randomly initialized weights (blue lines) and performing transfer learning (green lines). The bottom row (d to f) details the training cost to train our latentPINN model for 100 randomly chosen velocity models (PDE parameters) from the training set used previously in the autoencoder training.
For the first stage of the latentPINN training, we modify the encoder-decoder network to perform down- and upsampling of the higher-resolution images. Specifically, each convolutional block now consists of a 2D transposed convolution layer of stride size 4 followed by a GELU layer and a 2D convolution of stride size 1. We utilize two of these blocks in each of the encoder and decoder networks (reversing the order within the encoder convolution block). The same optimizer, latent vector size, and learning rate scheduler (as in the previous two cases) are used for the training. We train this model for 45 epochs with a batch size of 256 samples. The well-reconstructed images shown in Figure 2(f) demonstrate that the compressed latent vectors represent the data distribution well. The quality of these vectors is further attested by the PINN training results shown in Figure 4(f). For this training, we utilize almost identical training dynamics (as in the Mechanical MNIST case) with a slight change in the initial learning rate: we start the training with a 3e-4 learning rate. Our trained latentPINN manages to generalize to the new testing samples (the four rightmost columns in Figures 4(e) and 4(f)). The subtle changes (from left to right) can be captured by the trained PINN without additional training. Figure 5(c) further details the saving in training costs that would otherwise be needed in conventional approaches. Here, we only have to contend with the overhead cost of training the latentPINN.
### Functional Data Representations
One notable feature of our proposed latentPINN framework is that it readily extends to sampling functions over a continuous domain (e.g., PDE parameters), a task termed _functional data_ representation. In such a task, we aim to perform infinite-dimensional sampling of the functional data accommodated by a neural network function. Such a representation is an important feature for a wide range of PDE applications. For example, being able to perform this sampling benefits applications related, but not limited, to imaging the Earth's interior (Yang et al., 2021), weather forecasting (Pathak et al., 2022), and fluid flow simulation (Wen et al., 2022).
To perform this sampling, we extend the trained autoencoder with a diffusion model. Specifically, we utilize the recent latent diffusion model in which the diffusion (generative modeling) process is performed over the learned latent space. We utilize a diffusion model with 1000 diffusion steps, a linear noise scheduler with initial and final constants of 0.0015 and 0.0195, respectively, attention resolution of {8,16}, an initial learning rate of 2e-6, a channel multiplier of size {1,2,3,4}, a channel head of size 32, and a batch size of 5 image samples. The training process requires 27 epochs for the Gaussian spatial random fields, 32 epochs for the Mechanical MNIST dataset, and 289 epochs for the realistic Earth models. Figure 6 showcases the unconditional samples from the trained latent diffusion model.
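For reference, the quoted noise schedule corresponds to the standard DDPM forward process applied on the latent vectors; everything in the sketch below beyond the quoted constants is generic diffusion machinery rather than the exact training code:

```python
import torch

T = 1000
# Linear beta schedule with the constants quoted above.
betas = torch.linspace(0.0015, 0.0195, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(z0, t, noise):
    # Forward diffusion on latents: z_t = sqrt(abar_t) z_0 + sqrt(1-abar_t) eps.
    ab = alphas_bar[t].view(-1, 1)
    return ab.sqrt() * z0 + (1.0 - ab).sqrt() * noise
```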
## 5 Discussion
Incorporating neural networks in solving PDEs is an emerging field in the scientific machine learning community. So far, two broad approaches have emerged, differing in whether the NN facilitates the
Figure 6: Samples from the latent diffusion model for the functional data representation task. Shown from left to right are the resulting samples for the different datasets; a) Gaussian random fields, b) Mechanical MNIST, and c) realistic Earth model.
learning process to replace either the PDE operator (neural operators) or a solution to a particular PDE (PINNs). The latter offers more flexibility and can be trained completely independently of conventional solvers. Thus, it offers features that motivate us to use it to replace the less flexible, mesh-oriented, conventional solvers. However, to realize this replacement, we need to improve the training efficiency of PINNs to at least rival numerical solvers. This is needed because, unlike for neural operators, the training of PINNs is part of the inference, as PINNs are often trained for a fixed set of PDE parameters; solutions for new PDE parameters then require additional training. In other words, PINNs are trained _deterministically_ for a specific parametric PDE. To bridge this gap, orthogonal to previous approaches for speeding up the training process, latent representation learning is introduced by augmenting the conventional coordinate input with learned latent vectors representing certain PDE parameters. With this additional information, we encourage the neural network to use its interpolation capabilities (some of which are shown in Taufik et al. (2022)) in its lower-dimensional manifold representation of the solutions to generate the corresponding solution. In other words, when such input latent vectors are sufficiently learned to compress and express the distribution of the PDE parameters, the trained PINNs will possess _generative_ features. By doing so, though we incur an increase in training cost compared to the usual PINN training, this increase does not scale one-to-one with the number of representative PDE parameters needed to train our latentPINN. For example, as depicted in Figure 5, the training cost for a hundred PDE parameters increases the training cost of the vanilla PINN (blue lines in Figure 5) by a factor of 13 at best (the GRF case) and 33 at worst (the Mechanical MNIST case). This increase is, however, justifiable, as once the latentPINN is trained, a new PDE solution can be predicted instantly for a sample PDE parameter from the trained distribution (\(\sim\) 40k velocity fields) without additional NN training.
Our latentPINN implementation relies on the assumption that the PDE parameter distribution is well represented by its compressed latent representations. The main challenge in achieving such representations has been to harmonize the reconstruction of training samples with generalization to unseen samples. One significant limitation might arise when a new unseen PDE parameter lies far away from the training distribution. Our approach, like most neural operator models, will then either require re-training (from the first stage) or resort to transfer learning to calculate the new PDE solution. In such scenarios, the calculated PDE solutions can be viewed as an _ansatz_ for further PINN training. With the recent development of latent diffusion models used in this study, we partially resolve these trade-offs by using an autoencoder to learn the compressed representation and diffusion models to sufficiently represent the posterior distribution of the PDE parameters. Specifically, a Kullback-Leibler divergence regularization is introduced for better generalization. Different means of regularization can also be used to achieve generalization during the latent compression (Esser et al., 2021), which is worth investigating in future work.
As a byproduct of such a training scheme, not only does our framework enable PINNs to learn over a distribution of PDE parameters, its trained autoencoder can also be utilized for downstream tasks. One such task is functional data representation. In this task, instead of facilitating surrogate maps for the PDE solution, the learned NN facilitates a functional representation for infinite-dimensional vector spaces, dubbed _functional data_. Using the trained autoencoder as its backbone, a latent diffusion model can be deployed to perform generative modeling on such functional data. Thus, we obtain much more realistic samples compared to the previous approach (Seidman et al., 2023). Finally, in this study, we only consider unconditional sampling with such a model; further progress can include classes of the PDE parameters (e.g., geologic conditions for the Earth models or pattern classes for the Mechanical MNIST dataset) to perform conditional sampling.
## 6 Conclusion
We proposed using learned compressed latent representations in physics-informed neural network training. More specifically, we utilize a latent diffusion model to learn compressed representations of the PDE parameters. These compressed representations are used as input samples, along with samples representing the coordinates of the solution space, to train a PINN to provide functional solutions of a PDE for a range of PDE parameters. Though the proposed framework should apply to a more general class of parametric PDEs, we tested our algorithm on a first-order nonlinear PDE with three different datasets. With the additional latent vector input, the trained physics-informed neural networks are able to learn to generate PDE solutions as a function of the PDE parameters. Thus,
this approach retains the flexibility and accuracy features of the functional representation of PINN solutions while gaining the generalization abilities of a neural operator.
## 7 Acknowledgement
The authors thank King Abdullah University of Science and Technology (KAUST) for supporting this research and the Seismic Wave Analysis group for the supportive and encouraging environment. This work utilized the resources of the Supercomputing Laboratory at KAUST in Thuwal, Saudi Arabia.
|
2304.04051 | Generating a Graph Colouring Heuristic with Deep Q-Learning and Graph
Neural Networks | The graph colouring problem consists of assigning labels, or colours, to the
vertices of a graph such that no two adjacent vertices share the same colour.
In this work we investigate whether deep reinforcement learning can be used to
discover a competitive construction heuristic for graph colouring. Our proposed
approach, ReLCol, uses deep Q-learning together with a graph neural network for
feature extraction, and employs a novel way of parameterising the graph that
results in improved performance. Using standard benchmark graphs with varied
topologies, we empirically evaluate the benefits and limitations of the
heuristic learned by ReLCol relative to existing construction algorithms, and
demonstrate that reinforcement learning is a promising direction for further
research on the graph colouring problem. | George Watkins, Giovanni Montana, Juergen Branke | 2023-04-08T15:41:01Z | http://arxiv.org/abs/2304.04051v1 | # Generating a Graph Colouring Heuristic with Deep Q-Learning and Graph Neural Networks
###### Abstract
The graph colouring problem consists of assigning labels, or colours, to the vertices of a graph such that no two adjacent vertices share the same colour. In this work we investigate whether deep reinforcement learning can be used to discover a competitive construction heuristic for graph colouring. Our proposed approach, ReLCol, uses deep Q-learning together with a graph neural network for feature extraction, and employs a novel way of parameterising the graph that results in improved performance. Using standard benchmark graphs with varied topologies, we empirically evaluate the benefits and limitations of the heuristic learned by ReLCol relative to existing construction algorithms, and demonstrate that reinforcement learning is a promising direction for further research on the graph colouring problem.
Keywords: Graph Colouring · Deep Reinforcement Learning · Graph Neural Networks
## 1 Introduction
The Graph Colouring Problem (GCP) is among the most well-known and widely studied problems in graph theory [12]. Given a graph \(G\), a solution to GCP is an assignment of colours to vertices such that adjacent vertices have different colours; the objective is to find an assignment that uses the minimum number of colours. This value is called the _chromatic number_ of \(G\), and denoted \(\chi(G)\). GCP is one of the most important and relevant problems in discrete mathematics, with wide-ranging applications from trivial tasks like sudoku through to vital logistical challenges like scheduling and frequency assignment [1]. Given that GCP has been proven to be NP-Complete for general graphs [15], no method currently exists that can optimally colour any graph in polynomial time. Indeed it is hard to find even approximate solutions to GCP efficiently [28] and currently no algorithm with reasonable performance guarantees exists [22].
Many existing methods for GCP fall into the category of _construction heuristics_, which build a solution incrementally. Designing an effective construction heuristic is challenging and time-consuming and thus there has been a lot of interest in ways to generate heuristics _automatically_. This has previously been very successful in, for example, job shop scheduling [8]. Among the simplest
construction methods for GCP are _greedy algorithms_[25], in which vertices are selected one by one and assigned the 'lowest' permissible colour based on some pre-defined ordering of colours.
In this work we investigate the use of reinforcement learning (RL) to learn a greedy construction heuristic for GCP by framing the selection of vertices as a sequential decision-making problem. Our proposed algorithm, ReLCol, uses deep Q-learning (DQN) [30] together with a graph neural network (GNN) [33, 5] to learn a policy that selects the vertices for our greedy algorithm. Using existing benchmark graphs, we compare the performance of the ReLCol heuristic against several existing greedy algorithms, notably Largest First, Smallest Last and DSATUR. Our results indicate that the solutions generated by our heuristic are competitive with, and in some cases better than, these methods. As part of ReLCol, we also present an alternative way of parameterising the graph within the GNN, and show that our approach significantly improves performance compared to the standard representation.
## 2 Related Work
#### 2.0.1 Graph Colouring
Methods for GCP, as with other combinatorial optimisation (CO) problems, can be separated into exact solvers and heuristic methods. Exact solvers must process an exponentially large number of solutions to guarantee optimality; as such, they quickly become computationally intractable as the size of the problem grows [26]. Indeed exact algorithms are generally not able to solve GCP in reasonable time when the number of vertices exceeds 100 [31].
When assurances of optimality are not required, heuristic methods offer a compromise between good-quality solutions and reasonable computation time. Heuristics may in some cases produce optimal solutions, but offer no guarantees for general graphs. Considering their simplicity, greedy algorithms are very effective: even ordering the vertices at random can yield a good solution. And crucially, for every graph there exists a vertex sequence such that greedily colouring the vertices in that order will yield an optimal colouring [25].
Largest-First (LF), Smallest-Last (SL) and DSATUR [9] are the three most popular such algorithms [20], among which DSATUR has become the de facto standard for GCP [32]. As such, we have chosen these three heuristics as the basis for our comparisons. Both LF and SL are _static_ methods, meaning the vertex order they yield is fixed at the outset. LF chooses the vertices in decreasing order by degree; SL also uses degrees, but selects the vertex \(v\) with smallest degree to go last, and then repeats this process with the vertex \(v\) (and all its incident edges) removed. Conversely, DSATUR is a _dynamic_ algorithm: at a given moment the choice of vertex depends on the previously coloured vertices. DSATUR selects the vertex with maximum _saturation_, where saturation is the number of distinct colours assigned to its neighbours. Similarly, the Recursive Largest First algorithm [23] is dynamic. At each step it finds a maximal independent set and assigns the same colour to the constituent vertices. The coloured vertices are then removed from the graph and the process repeats.
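A compact sketch of DSATUR makes the dynamic behaviour concrete; the tie-breaking by degree below is a common convention rather than something specified here:

```python
def dsatur(adj):
    # adj maps each vertex to the set of its neighbours.
    colour = {}
    while len(colour) < len(adj):
        def sat(v):  # saturation: number of distinct colours adjacent to v
            return len({colour[u] for u in adj[v] if u in colour})
        v = max((u for u in adj if u not in colour),
                key=lambda u: (sat(u), len(adj[u])))
        taken = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in taken:            # lowest colour unused by neighbours
            c += 1
        colour[v] = c
    return colour
```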
_Improvement algorithms_ take a different approach: given a (possibly invalid) colour assignment, these methods use local search to make small adjustments in an effort to improve the colouring, either by reducing the number of colours used or eliminating conflicts between adjacent vertices. Examples include TabuCol [17], simulated annealing [2] and evolutionary algorithms [14].
#### 2.0.2 Machine Learning methods for CO problems
Given how difficult it is to solve CO problems exactly, and the reliance on heuristics for computationally tractable methods, machine learning appears to be a natural candidate for addressing problems like GCP. Indeed there are many examples of methods for CO problems that use RL [29, 4, 19] or other machine learning techniques [35, 6].
For GCP, supervised learning has been used to predict the chromatic number of a graph [24] but this is dependent on having a labelled training dataset. Given the computational challenges inherent in finding exact solutions, this imposes limitations on the graphs that can be used for training. Conversely, RL does not require a labelled dataset for training. In [41], RL is used to support the local search component of a hybrid method for the related \(k\)-GCP problem by learning the probabilities with which each vertex should be assigned to each colour. While iteratively solving \(k\)-GCP for decreasing values of \(k\) is a valid (and reasonably common) method for solving GCP [27, 14], it is inefficient.
On the other hand, [18] addresses GCP directly with a method inspired by the success of AlphaGo Zero [34]. In contrast to greedy algorithms, this approach uses a pre-determined vertex order and learns the mechanism for deciding the _colours_. During training they use a computationally demanding Monte Carlo Tree Search using 300 GPUs; due to the computational overhead and lack of available code we were unable to include this algorithm in our study.
Our proposed algorithm is most closely related to [16], in which the authors present a greedy construction heuristic that uses RL with an attention mechanism [37] to select the vertices. There are, however, several key differences between the two methods. Their approach uses the REINFORCE algorithm [40] whereas we choose DQN [30] because the action space is discrete; they incorporate spatial and temporal locality biases; and finally, we use a novel state parameterisation, which we show improves the performance of our algorithm.
## 3 Problem Definition
A _\(k\)-colouring_ of a graph \(G=(V,E)\) is a partition of the vertices \(V\) into \(k\) disjoint subsets such that, for any edge \((u,v)\in E\), the vertices \(u\) and \(v\) are in different subsets. The subsets are typically referred to as _colours_. GCP then consists of identifying, for a given graph \(G\), the minimum number of colours for which a \(k\)-colouring exists and the corresponding colour assignment. This number is known as the _chromatic number_ of \(G\), denoted \(\chi(G)\).
Given any graph, a greedy construction heuristic determines the order in which vertices are to be coloured, sequentially assigning to them the lowest permissible colour according to some pre-defined ordering of colours. In this
work we address the problem of automatically deriving a greedy construction heuristic that colours general graphs using as few colours as possible.
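For concreteness, the greedy assignment scheme that such heuristics share can be sketched as follows, with the vertex order supplied externally:

```python
def greedy_colouring(adj, order):
    # adj maps each vertex to its neighbours; `order` fixes the sequence.
    colour = {}
    for v in order:
        taken = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in taken:            # lowest permissible colour
            c += 1
        colour[v] = c
    return colour
```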
## 4 Preliminaries
#### 4.0.1 Markov Decision Processes
A Markov Decision Process (MDP) is a discrete-time stochastic process for modelling the decisions taken by an agent in an environment. An MDP is specified by the tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) represent the state and action spaces; \(\mathcal{P}\) describes the environment's transition dynamics; \(\mathcal{R}\) is the reward function; and \(\gamma\) is the discount factor. The goal of reinforcement learning is to learn a decision policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) that maximises the expected sum of discounted rewards, \(\mathbb{E}\left[\sum_{t=1}^{\infty}\gamma^{t}R_{t}\right]\).
#### 4.0.2 Deep Q-Learning
Q-learning [38] is a model-free RL algorithm that learns \(Q^{*}(s,a)\), the value of taking action \(a\) in a state \(s\) and subsequently behaving optimally. Known as the _optimal action value function_, \(Q^{*}(s,a)\) is defined as
\[Q^{*}(s,a)=\mathbb{E}\left[\sum_{i=1}^{\infty}\gamma^{i}R_{t+i}\ \middle|\ S_{t}=s,A_{t}=a\right] \tag{1}\]
where \(S_{t},A_{t}\) and \(R_{t}\) are random variables representing respectively the state, action and reward at timestep \(t\). _Deep_ Q-learning (DQN) [30] employs a Q-network parameterised by weights \(\theta\) to approximate \(Q^{*}(s,a)\). Actions are chosen greedily with respect to their values with probability \(1-\epsilon\), and a random action is taken otherwise to facilitate exploration. Transitions \((s,a,r,s^{\prime})\) - respectively the state, action, reward and next state - are added to a buffer and the Q-network is trained by randomly sampling transitions, backpropagating the loss
\[L(\theta)=\left(\left[r+\gamma\max_{a^{\prime}}Q_{\hat{\theta}}(s^{\prime},a^{ \prime})\right]-Q_{\theta}(s,a)\right)^{2} \tag{2}\]
and updating the weights using stochastic gradient descent. Here \(Q_{\theta}(s,a)\) and \(Q_{\hat{\theta}}(s,a)\) are estimates of the value of state-action pair \((s,a)\) using the Q-network and a _target network_ respectively. The target network is a copy of the Q-network, with weights that are updated via periodic soft updates, \(\hat{\theta}\leftarrow\tau\theta+(1-\tau)\hat{\theta}\). Using the target network in the loss helps to stabilise learning [30].
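A sketch of this update in PyTorch is shown below; for simplicity it assumes a fixed-size action space represented as batched tensors, whereas in ReLCol the maximisation runs over the values of the currently un-coloured vertices:

```python
import torch

def dqn_loss(q_net, target_net, batch, gamma=1.0):
    # batch: states, actions, rewards, next states, and terminal flags.
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():            # target uses the slowly updated copy
        target = r + gamma * target_net(s_next).max(dim=1).values * (1 - done)
    return ((target - q_sa) ** 2).mean()

def soft_update(q_net, target_net, tau=0.001):
    # theta_hat <- tau * theta + (1 - tau) * theta_hat
    for p, p_hat in zip(q_net.parameters(), target_net.parameters()):
        p_hat.data.mul_(1.0 - tau).add_(tau * p.data)
```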
#### 4.0.3 Graph Neural Networks
Graph neural networks (GNNs) [33, 5] support learning over graph-structured data. GNNs consist of blocks; the most general GNN block takes a graph \(G\) with vertex-, edge- and graph-level features, and outputs a new graph \(G^{\prime}\) with the same topology as \(G\) but with the features replaced by vertex-, edge- and graph-level embeddings [5]. The embeddings are generated via a message-passing and aggregation mechanism whereby information flows between pairs of neighbouring vertices. Stacking multiple GNN blocks allows for more complex dependencies to be captured. The steps within a single GNN block are demonstrated in Figure 1; in our method we do not use graph-level features so for simplicity these have been omitted.
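A minimal implementation of one such block, following the steps in Figure 1 with sum aggregation, is given below; the hidden sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class GNNBlock(nn.Module):
    def __init__(self, v_dim, e_dim, hidden=64):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(e_dim + 2 * v_dim, hidden),
                                      nn.ReLU())
        self.vertex_mlp = nn.Sequential(nn.Linear(v_dim + hidden, hidden),
                                        nn.ReLU())

    def forward(self, v_feat, e_feat, edges):
        # edges: (E, 2) long tensor of (source, destination) vertex indices.
        src, dst = edges[:, 0], edges[:, 1]
        # Step ii: embed each edge from its own and its endpoints' features.
        e_emb = self.edge_mlp(torch.cat([e_feat, v_feat[src], v_feat[dst]],
                                        dim=-1))
        # Step iii: aggregate incoming edge embeddings at each vertex.
        agg = torch.zeros(v_feat.size(0), e_emb.size(1), device=v_feat.device)
        agg.index_add_(0, dst, e_emb)
        # Step iv: embed each vertex from its features and its aggregation.
        return self.vertex_mlp(torch.cat([v_feat, agg], dim=-1)), e_emb
```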
## 5 Methodology
### Graph colouring as a Markov Decision Process
#### 5.1.1 States
A state \(s\in\mathcal{S}\) for graph \(G=(V,E)\) is a vertex partition of \(V\) into subsets \(P_{i}\), \(i\in\{-1,0,1,2,...\}\). For \(i\neq-1\), the partition \(P_{i}\) contains the vertices currently assigned colour \(i\), and \(P_{-1}\) represents the set of currently un-coloured vertices. States in which \(P_{-1}=\emptyset\) are terminal. Our method for parameterising the state, which results in improved performance, is described in Section 5.2.
#### 5.1.2 Actions
An action \(a\in\mathcal{A}\) is an un-coloured vertex (i.e. \(a\in P_{-1}\)) indicating the next vertex to be coloured. The complete mechanism by which ReLCol chooses actions is described in Section 5.4.
#### 5.1.3 Transition function
Given an action \(a\), the transition function \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\to\mathcal{S}\) updates the state \(s\) of the environment to \(s^{\prime}\) by assigning the lowest permissible colour to vertex \(a\). Note that choosing colours in this way does not preclude finding an optimal colouring [25] as every graph admits a sequence that will yield an optimal colouring. The transition function \(\mathcal{P}\) is deterministic: given a state \(s\) and an action \(a\), there is no uncertainty in the next state \(s^{\prime}\).
#### 5.1.4 Reward function
For GCP, the reward function should encourage the use of fewer colours. As such our reward function for the transition \((s,s^{\prime})\) is defined as
\[\mathcal{R}(s,s^{\prime})=-\left(C(s^{\prime})-C(s)\right)\]
where \(C(s)\) indicates the number of colours used in state \(s\).
Figure 1: Demonstration of how a GNN block generates edge and vertex embeddings. Blue indicates the element that is updated in that step. _i)_ The input graph with edge and vertex features; _ii)_ For each edge \(e\), concatenate the features of \(e\) with the features of the vertices it connects, and pass the resulting vector through a small neural network to generate the edge embedding \(emb_{e}\); _iii)_ For each vertex \(v\), aggregate the embeddings of the incident edges using an elementwise operation like _sum_ or _max_ to generate the edge aggregation \(agg_{v}\); _iv)_ For each vertex \(v\), concatenate the features of \(v\) with the associated edge aggregation \(agg_{v}\), and pass the resulting vector through a small neural network to generate the vertex embedding \(emb_{v}\). Blocks can be stacked by repeating this process with the previous block’s edge and vertex embeddings used as the features.
#### 5.1.5 Discount factor
GCP is an episodic task, with an episode corresponding to colouring a single graph \(G\). Given that each episode is guaranteed to terminate after \(n\) steps, where \(n\) is the number of vertices in \(G\), we set \(\gamma=1\). Using \(\gamma<1\) would bias the heuristic towards deferring the introduction of new colours, which may be undesirable.
### Parameterising the state
Recall that for the graph \(G=(V,E)\), a state \(s\in\mathcal{S}\) is a partition of \(V\) into subsets \(P_{i}\), \(i\in\{-1,0,1,2,...\}\). We represent the state using a _state graph_\(G_{s}=(V,E_{s},F_{s}^{v},F_{s}^{e})\): respectively the vertices and edges of \(G_{s}\), together with the associated vertex and edge features.
#### 5.2.1 State graph vertices
Note that the vertices in \(G_{s}\) are the same as the vertices in the original graph \(G\). Then, given a state \(s\), the feature \(f_{s}^{v}\) of vertex \(v\) is a 2-tuple containing: i) A vertex name \(\in\{0,1,2,...,n-1\}\) and ii) The current vertex colour \(c_{v}\in\{-1,0,1,2,...\}\) (where \(c_{v}=-1\) if and only if \(v\) has not yet been assigned a colour).
#### 5.2.2 State graph edges
In the standard GNN implementation, messages are only passed between vertices that are joined by an edge. In our implementation we choose to represent the state as a complete graph on \(V\) to allow information to flow between all pairs of vertices. We use a binary edge feature \(f_{s}^{e}\) to indicate whether the corresponding edge was in the original graph \(G\):
\[f_{s}^{e}=\begin{cases}-1&\text{if }e=(v_{i},v_{j})\in E\\ 0&\text{otherwise}\end{cases}\]
Our state parameterisation, which is the input to the Q-network, allows messages to be passed between all pairs of vertices, including those that are not connected; in Section 6.4 we show that this representation results in improved performance.
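A sketch of how this complete state graph could be assembled, using plain PyTorch tensors (the helper name is ours):

```python
import itertools
import torch

def build_state_graph(n, original_edges, colours):
    # Vertex features: (name, current colour); colour -1 means un-coloured.
    v_feat = torch.tensor([[v, colours[v]] for v in range(n)],
                          dtype=torch.float)
    # Complete graph on V: an edge between every ordered pair of vertices,
    # so messages can also pass between non-adjacent vertices.
    pairs = list(itertools.permutations(range(n), 2))
    edges = torch.tensor(pairs, dtype=torch.long)
    in_E = {frozenset(e) for e in original_edges}
    # Binary edge feature: -1 if the pair is an edge of the original graph.
    e_feat = torch.tensor([[-1.0 if frozenset(p) in in_E else 0.0]
                           for p in pairs])
    return v_feat, edges, e_feat
```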
### Q-network architecture
Our Q-network is composed of several stacked GNN blocks followed by a feed-forward neural network of fully connected layers with ReLU activations. Within the aggregation steps in the GNN we employ an adaptation of Principal Neighbourhood Aggregation [10] which has been shown to mitigate information loss. The GNN takes the state graph \(G_{s}\) as input, and returns a set of vertex embeddings. For each vertex \(v\), the corresponding embedding is passed through the fully connected layers to obtain the value for the action of choosing \(v\) next.
### Selecting actions
In general actions are selected using an \(\epsilon\)-greedy policy with respect to the vertices' values. However using the Q-network is (relatively) computationally expensive. As such, where it has no negative effect - in terms of the ultimate number of colours used - we employ alternative mechanisms to select vertices.
#### 5.4.1 First Vertex rule
The first vertex to be coloured is selected at random. By Proposition 1, this does not prevent an optimal colouring being found.
Proposition 1: _An optimal colouring remains possible regardless of the first vertex to be coloured._
**Proof.** Let \(G=(V,E)\) be a graph with \(\chi(G)=k^{*}\), and \(\mathcal{P}^{*}=P_{0}^{*},P_{1}^{*},...,P_{k^{*}-1}^{*}\) an optimal colouring of \(G\), where the vertices in \(P_{i}^{*}\) are all assigned colour \(i\). Suppose vertex \(v\) is the first to be selected, and \(j\) is the colour of \(v\) in \(\mathcal{P}^{*}\) (i.e. \(v\in P_{j}^{*}\), where \(0\leq j\leq k^{*}-1\)). Simply swap the labels \(P_{0}^{*}\) and \(P_{j}^{*}\) so that the colour assigned to \(v\) is the 'first' colour. Now, using this new partition, we can use the construction described in [25] to generate an optimal colouring. \(\blacksquare\)
#### 5.4.2 Isolated Vertices rule
We define a vertex to be _isolated_ if all of its neighbours have been coloured. By Proposition 2, we can immediately colour any such vertices without affecting the number of colours required.
Proposition 2: _Immediately colouring isolated vertices has no effect on the number of colours required to colour the graph._
**Proof.** Let \(G=(V,E)\) be a graph with \(\chi(G)=k^{*}\). Suppose also that \(\mathcal{P}=P_{-1},P_{0},P_{1},...,P_{k^{*}-1}\) is a partial colouring of \(G\) (with \(P_{-1}\) the non-empty set of un-coloured vertices). Let \(v\in P_{-1}\) be an un-coloured, isolated vertex (i.e. it has no neighbours in \(P_{-1}\)). No matter when \(v\) is selected, its colour will be the first that is different from all its neighbours. Also, given that \(v\) has no un-coloured neighbours, it has no influence on the colours assigned to subsequent vertices. Therefore \(v\) can be chosen at any moment (including immediately) without affecting the ultimate number of colours used. \(\blacksquare\)
### The ReLCol algorithm
Fig. 2 demonstrates how the state graph is constructed, and how it evolves as the graph is coloured. The full ReLCol algorithm is presented in Algorithm 1.
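A condensed sketch of the colouring episode at the core of Algorithm 1 is given below, combining the First Vertex and Isolated Vertices rules with \(\epsilon\)-greedy selection; the hypothetical `q_values` helper scores the un-coloured vertices, and the replay buffer and Q-network updates are omitted:

```python
import random

def colour_episode(G, q_values, epsilon):
    # G maps each vertex to its neighbours; colours[v] == -1 means un-coloured.
    colours = {v: -1 for v in G}
    order = []
    while any(c == -1 for c in colours.values()):
        uncoloured = [v for v, c in colours.items() if c == -1]
        isolated = [v for v in uncoloured
                    if all(colours[u] != -1 for u in G[v])]
        if not order:                        # First Vertex rule
            v = random.choice(uncoloured)
        elif isolated:                       # Isolated Vertices rule
            v = isolated[0]
        elif random.random() < epsilon:      # epsilon-greedy exploration
            v = random.choice(uncoloured)
        else:                                # greedy w.r.t. Q-values
            vals = q_values(colours)         # one value per un-coloured vertex
            v = max(uncoloured, key=lambda u: vals[u])
        taken = {colours[u] for u in G[v] if colours[u] != -1}
        c = 0
        while c in taken:                    # lowest permissible colour
            c += 1
        colours[v] = c
        order.append(v)
    return colours, order
```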
#### 6.0.1 Architecture and hyperparameters
Our Q-network consists of 5 GNN blocks and 3 fully connected layers with weights initialised at random. Our GNN blocks use only edge and vertex features; we experimented with including global features but found no evidence of performance improvement. The vertex and edge embeddings, as well as the hidden layers in all fully connected neural networks, have 64 dimensions. We use the Adam optimiser [21] with learning rate 0.001 and batch size 64, \(\tau=0.001\), and an \(\epsilon\)-greedy policy for exploration, where \(\epsilon\) decays exponentially from 0.9 to 0.01 through 25000 episodes of training.
#### 6.0.2 Training data
We have constructed a dataset of 1000 training graphs of size \(n\in[15,50]\), composed of 7 different graph types: Leighton graphs [23], Queen graphs [13], Erdos-Renyi graphs [11], Watts-Strogatz graphs [39], Barabasi-Albert graphs [3], Gaussian Random Partition graphs [7] and graphs generated by our own method which constructs graphs with known upper bound on the chromatic number. Each training graph was constructed by choosing its type and size uniformly at random. Where the generating process of the chosen graph type has its own parameters these too were chosen at random to ensure as much diversity as possible amongst the training graphs. Our datasets, together with our code, are available on GitHub1.
Footnote 1: [https://github.com/gpdwatkins/graph_colouring_with_RL](https://github.com/gpdwatkins/graph_colouring_with_RL)
### Comparison with existing algorithms
We first compare ReLCol to existing construction algorithms, including Largest First, Smallest Last, DSATUR and Random (which selects the vertices in a random order). We also compare to the similar RL-based method presented in
Figure 2: Example of graph parameterisation and colouring rules, where \(v_{t}=j\) indicates the vertex with name \(j\) is selected at step _t. i)_ Initial parameterisation of the original graph; _ii)_ First vertex \(v_{0}=0\) is selected at random and assigned colour 0; _iii)_ Vertex \(v_{1}=4\) can also be assigned colour 0; _iv)_ Vertex \(v_{2}=3\) cannot be assigned colour 0 so takes colour 1; _v)_ Vertex \(v_{3}=2\) cannot be assigned colour 0 or 1 so takes colour 2, leaving vertex 1 isolated; vertex 1 cannot be assigned colour 0 so takes colour 1.
Gianinazzi et al. [16]. Using their implementation we generate 12 heuristics with different random seeds and report the average result. Note that in their paper the authors present both a deterministic and a stochastic heuristic, with the stochastic version generated by taking a softmax over the action weights. At test time their stochastic heuristic is run 100 times and the best colouring is returned. Given that with enough attempts even a random algorithm will eventually find an optimal solution, we consider heuristics that return a colouring in a single pass to be more interesting. As such we consider only the deterministic versions of the Gianinazzi et al. algorithm and ReLCol.
Each heuristic is applied to the benchmark graphs used in [24] and [16], which represent a subset of the graphs specified in the _COLOR02: Graph Colouring and its Generalizations_ series2. For these graphs the chromatic number is known; as such we report the _excess_ number of colours used by an algorithm (i.e. 0 would mean the algorithm has found an optimal colouring for a graph). The results are summarised in Table 1.
Footnote 2: [https://mat.tepper.cmu.edu/COLOR02/](https://mat.tepper.cmu.edu/COLOR02/)
On average over all the graphs DSATUR was the best performing algorithm, using 1.2 excess colours, closely followed by our heuristic with 1.35 excess colours. The other tested algorithms perform significantly worse on these graphs, even slightly worse than ordering the vertices at random. The test set contains a mix of easier graphs - all algorithms manage to find the chromatic number for huck - as well as harder ones - even the best algorithm uses four excess colours on queen13_13. DSATUR and ReLCol each outperform all other methods on 4 of the graphs.
### A class of graphs on which ReLCol outperforms DSATUR
Although the previous results suggest that ReLCol does not outperform DSATUR on general graphs, there do exist classes of graphs on which DSATUR is known to perform poorly. One such class is presented in [36]; these graphs, which we refer to as _Spinrad graphs_, are constructed as follows:
1. Fix the number of vertices \(n\) such that \(n\pmod{7}=3\), and let \(m=\frac{n+4}{7}\).
2. Partition the vertices into \(5\) disjoint sets as follows: \[A=\{a_{1},a_{2},\cdots,a_{m-2}\}\quad B=\{b_{1},b_{2},\cdots,b_{m-1}\}\quad C=\{c_{2},c_{3},\cdots,c_{m}\}\] \[B^{\prime}=\{b^{\prime}_{1},b^{\prime}_{2},\cdots,b^{\prime}_{2m}\}\quad C^{\prime}=\{c^{\prime}_{1},c^{\prime}_{2},\cdots,c^{\prime}_{2m}\}\]
3. Add the following sets of edges: \[E^{B}_{A}=\{(a_{i},b_{j})\colon i\neq j\}\quad E^{C}_{A}=\{(a_{i},c_{j})\colon i<j\}\quad E^{C}_{B}=\{(b_{i-1},c_{i})\colon 2<i<m\}\] Plus: * \(\forall b\in B\), add edges to vertices in \(B^{\prime}\) such that the degree of \(b\) is \(2m\). * \(\forall c\in C\), add edges to vertices in \(C^{\prime}\) such that the degree of \(c\) is \(2m\).
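This construction is mechanical enough to sketch directly; the wiring of the padding edges to \(B^{\prime}\) and \(C^{\prime}\) below is one arbitrary choice satisfying the degree condition:

```python
import itertools

def spinrad_graph(m):
    """Build a Spinrad graph following steps 1-3 above (n = 7m - 4)."""
    adj = {v: set() for v in
           [f"a{i}" for i in range(1, m - 1)] +
           [f"b{i}" for i in range(1, m)] +
           [f"c{i}" for i in range(2, m + 1)] +
           [f"b'{i}" for i in range(1, 2 * m + 1)] +
           [f"c'{i}" for i in range(1, 2 * m + 1)]}

    def add(u, v):
        adj[u].add(v)
        adj[v].add(u)

    for i, j in itertools.product(range(1, m - 1), range(1, m)):
        if i != j:
            add(f"a{i}", f"b{j}")                 # E_A^B
    for i in range(1, m - 1):
        for j in range(i + 1, m + 1):
            add(f"a{i}", f"c{j}")                 # E_A^C (i < j)
    for i in range(3, m):
        add(f"b{i - 1}", f"c{i}")                 # E_B^C (2 < i < m)
    # Pad the degrees of B and C up to 2m using B' and C'.
    for group, pads in (("b", "b'"), ("c", "c'")):
        for u in (v for v in adj if v.startswith(group) and "'" not in v):
            for k in range(1, 2 * m + 1):
                if len(adj[u]) >= 2 * m:
                    break
                add(u, f"{pads}{k}")
    return adj
```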
| **Graph** | \(n\) | \(\chi\) | **Random** | **LF** | **SL** | **DSATUR** | **Gianinazzi et al.** | **ReLCol** |
| :--- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| queen5_5 | 25 | 5 | 2.3±0.1 | 2 | 3 | **0** | 2.1±0.3 | 0.2±0.11 |
| queen6_6 | 36 | 7 | 2.4±0.06 | 2 | 4 | 2 | 3.2±0.16 | **1±0** |
| myciel5 | 47 | 6 | 0.1±0.03 | 0 | 0 | 0 | 0.1±0.08 | 0±0 |
| queen7_7 | 49 | 7 | 4±0.06 | 5 | 3 | 4 | 3.3±0.32 | **2.2±0.16** |
| queen8_8 | 64 | 9 | 3.4±0.07 | 4 | 5 | 3 | 3.3±0.24 | **2.1±0.14** |
| 1-Insertions_4 | 67 | 4 | 1.2±0.04 | 1 | 1 | 1 | 1±0 | 1±0 |
| huck | 74 | 11 | 0±0 | 0 | 0 | 0 | 0±0 | 0±0 |
| jean | 80 | 10 | 0.3±0.05 | 0 | 0 | 0 | 0±0 | 0.3±0.12 |
| queen9_9 | 81 | 10 | 3.9±0.07 | 5 | 5 | 3 | 5±0 | **2.7±0.25** |
| david | 87 | 11 | 0.7±0.07 | 0 | 0 | 0 | 0.4±0.14 | 1.2±0.33 |
| mug88_1 | 88 | 4 | 0.1±0.02 | 0 | 0 | 0 | 0±0 | 0±0 |
| myciel6 | 95 | 7 | 0.3±0.05 | 0 | 0 | 0 | 0.8±0.17 | 0±0 |
| queen8_12 | 96 | 12 | 3.3±0.06 | 3 | 3 | **2** | 4±0 | 2.6±0.18 |
| games120 | 120 | 9 | 0±0.01 | 0 | 0 | 0 | 0±0 | 0±0 |
| queen11_11 | 121 | 11 | 5.9±0.07 | 6 | 6 | **4** | 6±0 | 5.4±0.25 |
| anna | 138 | 11 | 0.2±0.04 | 0 | 0 | 0 | 0±0 | 0.4±0.18 |
| 2-Insertions_4 | 149 | 4 | 1.5±0.05 | 1 | 1 | 1 | 1±0 | 1±0 |
| queen13_13 | 169 | 13 | 6.5±0.07 | 10 | 9 | **4** | 8±0 | 6.3±0.24 |
| myciel7 | 191 | 8 | 0.6±0.06 | 0 | 0 | 0 | 0.3±0.14 | 0.2±0.11 |
| homer | 561 | 13 | 1±0.07 | 0 | 0 | 0 | 0±0 | 0.6±0.22 |
| Average | | | 1.9±0.01 | 1.95 | 2 | 1.2 | 1.92±0.05 | 1.35±0.05 |

Table 1: Comparison of ReLCol with other construction algorithms on graphs from the COLOR02 benchmark dataset. Values indicate how many more colours are required than the chromatic number, \(\chi\). For each graph, Random is run 100 times and the average and standard error are reported. For Gianinazzi et al. and ReLCol, the 12 heuristics are run and the average and standard error are reported. A bold number indicates that an algorithm has found the unique best colouring amongst the algorithms.
An example of such a graph with \(m=4\) is shown in Fig. 3. Note that some of the vertices in \(B^{\prime}\) and \(C^{\prime}\) may be disconnected; they exist simply to ensure that the vertices in \(B\) and \(C\) all have degree \(2m\).
The vertices can be partitioned into 3 disjoint sets \(A\cup B^{\prime}\cup C^{\prime}\), \(B\) and \(C\). Given that there are no edges between pairs of vertices in the same set, the chromatic number of the graph is 3. However the DSATUR algorithm assigns the same colour to vertices \(a_{i}\), \(b_{i}\) and \(c_{i}\), meaning it uses \(m\) colours for the whole graph. A proof of this is provided in [36].
Fig. 4 shows that ReLCol vastly outperforms DSATUR on Spinrad graphs, in most cases identifying an optimal colouring using the minimum number of colours. This indicates that despite their similar performance on general graphs, ReLCol has learned a heuristic that is selecting vertices differently to DSATUR.
Figure 4: ReLCol outperforms DSATUR on Spinrad graphs. Error bars show the maximum and minimum number of colours used by the 12 ReLCol-generated heuristics.
Figure 3: Example of a Spinrad graph on 24 vertices, generated with \(m=4\).
### Scalability of ReLCol
While we have shown that ReLCol is competitive with existing construction heuristics, and can outperform them on certain graph classes, our results suggest that ReLCol's ability to colour general graphs effectively may degrade when the test graphs are significantly larger than those used for training.
This can be observed in Fig. 5, which compares the performance of DSATUR, ReLCol, and Random on graphs of particular sizes generated using the same process as our training dataset. The degradation in performance could be a result of the nature of the training dataset, whose constituent graphs have no more than 50 vertices: for graphs of this size DSATUR and ReLCol seem to achieve comparable results, and much better than Random, but the performance of ReLCol moves away from DSATUR towards Random as the graphs grow in size.
### Representing the state as a complete graph
To demonstrate the benefit of the proposed complete graph representation, we compare ReLCol with a version that preserves the topology of the original graph, meaning that within the GNN, messages are only passed between pairs of adjacent vertices. Fig. 6 compares the number of colours used by each version when applied to a validation dataset periodically during training. The validation dataset is composed of 100 graphs generated by the same mechanism as the training dataset. The complete graph representation clearly leads to faster learning and significantly better final performance.
Figure 5: As the graph size increases, the performance of ReLCol moves from being similar to DSATUR towards Random. This suggests that there are limitations to how well ReLCol generalises to graphs larger than those seen during training.
## 7 Conclusions
We have proposed ReLCol, a reinforcement learning algorithm based on graph neural networks that is able to learn a greedy construction heuristic for GCP. The ReLCol heuristic is competitive with DSATUR, a leading greedy algorithm from the literature, and better than several other comparable methods. We have demonstrated that part of this success is due to a novel (to the best of our knowledge) complete graph representation of the graph within the GNN. Since our complete graph representation seems to perform much better than the standard GNN representation, we intend to investigate its effect in further RL tasks with graph-structured data. We also plan to incorporate techniques for generalisability from the machine learning literature to improve the performance of the ReLCol heuristic on graphs much larger than the training set.
An advantage of automatically generated heuristics is that they can be tuned to specific classes of problem instances by amending the training data, so exploring the potential of ReLCol to learn an algorithm tailored to specific graph types would be an interesting direction. Finally, given that the ReLCol heuristic appears to work quite differently from DSATUR, further analysis of how it selects vertices may yield insights into previously unknown methods for GCP.
## Acknowledgements
G. Watkins acknowledges support from EPSRC under grant EP/L015374/1.
G. Montana acknowledges support from EPSRC under grant EP/V024868/1.
We thank L. Gianinazzi for sharing the code for the method presented in [16].
Figure 6: Our complete graph representation results in faster learning and better final performance compared to the standard GNN representation. |
2304.01897 | InfluencerRank: Discovering Effective Influencers via Graph
Convolutional Attentive Recurrent Neural Networks | As influencers play considerable roles in social media marketing, companies
increase the budget for influencer marketing. Hiring effective influencers is
crucial in social influencer marketing, but it is challenging to find the right
influencers among hundreds of millions of social media users. In this paper, we
propose InfluencerRank that ranks influencers by their effectiveness based on
their posting behaviors and social relations over time. To represent the
posting behaviors and social relations, the graph convolutional neural networks
are applied to model influencers with heterogeneous networks during different
historical periods. By learning the network structure with the embedded node
features, InfluencerRank can derive informative representations for influencers
at each period. An attentive recurrent neural network finally distinguishes
highly effective influencers from other influencers by capturing the knowledge
of the dynamics of influencer representations over time. Extensive experiments
have been conducted on an Instagram dataset that consists of 18,397 influencers
with their 2,952,075 posts published within 12 months. The experimental results
demonstrate that InfluencerRank outperforms existing baseline methods. An
in-depth analysis further reveals that all of our proposed features and model
components are beneficial to discover effective influencers. | Seungbae Kim, Jyun-Yu Jiang, Jinyoung Han, Wei Wang | 2023-04-04T15:48:08Z | http://arxiv.org/abs/2304.01897v2 | InfluencerRank: Discovering Effective Influencers via Graph Convolutional Attentive Recurrent Neural Networks
###### Abstract
As influencers play considerable roles in social media marketing, companies increase the budget for influencer marketing. Hiring effective influencers is crucial in social influencer marketing, but it is challenging to find the right influencers among hundreds of millions of social media users. In this paper, we propose _InfluencerRank_ that ranks influencers by their effectiveness based on their posting behaviors and social relations over time. To represent the posting behaviors and social relations, the graph convolutional neural networks are applied to model influencers with heterogeneous networks during different historical periods. By learning the network structure with the embedded node features, _InfluencerRank_ can derive informative representations for influencers at each period. An attentive recurrent neural network finally distinguishes highly effective influencers from other influencers by capturing the knowledge of the dynamics of influencer representations over time. Extensive experiments have been conducted on an Instagram dataset that consists of 18,397 influencers with their 2,952,075 posts published within 12 months. The experimental results demonstrate that _InfluencerRank_ outperforms existing baseline methods. An in-depth analysis further reveals that all of our proposed features and model components are beneficial to discover effective influencers.
1 Department of Computer Science and Engineering, University of South Florida
2 Department of Computer Science, University of California, Los Angeles
3 Department of Applied Artificial Intelligence, Sungkyunkwan University
[email protected], [email protected], [email protected], [email protected], [email protected]
## Introduction
Influencers are known as individuals who influence a large number of people on social media. This, in turn, has attracted great attention from marketers, since influencers and their huge fan bases can be considered as marketing channels and audiences, respectively [16, 17]. More recently, companies have started hiring influencers to advertise products for targeted audiences and expand brand awareness.
Due to the rapid growth of social media and influencer marketing, discovering effective influencers on social media has become increasingly important [18, 19, 20]. For measuring user influence on social media, well-known metrics, such as the numbers of followers, retweets, and mentions, have been widely applied [1, 17]. In addition, information propagation [14, 21, 22], social connections [13], network centrality [12], transparency [15], and multi-relational networks [16] have been used to identify influencers on social media. Among the various measures, the _effectiveness of influence_[18], often measured by the engagement rate [19, 17, 18, 16], has been considered crucial in identifying effective influencers, especially in the marketing domain. The engagement rate can be calculated as the ratio of the average number of likes to the number of followers, which essentially shows how much audiences engage with the corresponding influencer.
To discover the effective influencers (i.e., influencers with high engagement rates), previous work used posting behaviors of influencers or characteristics of their posts. For example, Romero et al. (2011), Liu et al. (2015), and Feng et al. (2018) utilized the social networks among influencers; Li, Lai, and Chen (2011) analyzed post contents to derive statistical features in identifying influencers. However, none of these studies jointly and comprehensively modeled posting behaviors, post characteristics, and social networking behaviors,
Figure 1: The number of likes on posts published by two influencers across the time. Although two influencers have similar numbers of followers, the average numbers of likes (i.e., dotted lines) are significantly different. Additionally, the number of likes dynamically changes over time.
which may result in a biased or partial representation of influencer effectiveness. For example, as shown in Figure 1, two influencers have similar numbers of followers, hence they may be considered as having similar effectiveness, but their actual engagement rates are shown to be significantly different. Although some methods applied the PageRank algorithm on influencer-content graphs Silva et al. (2013) and independently derived the features of influencers and posts Liu et al. (2015), the PageRank algorithm can be biased toward certain types of nodes Brezinski and Redivo-Zaglia (2006), and independently derived features ignore the relations between influencers and posts.
To address this issue, we propose to use a heterogeneous network to model the effectiveness of influencers with their posting behavior, social networking behavior, and post characteristics together. In addition, considering historical behavioral patterns can be further beneficial to discover effective influencers since the posting behavior of an influencer can change dynamically over time. For instance, as shown in Figure 1, although an influencer does not receive many likes in the most recent time period, he/she may receive a large number of likes in the future if he/she used to attract great attention in the past. Moreover, analyzing time-varying behavior patterns can provide more evidence on the robustness of an influencer. For example, the unstable performance (or effectiveness) of an influencer over time may not be desirable even if his/her performance in the most recent time period is satisfactory. Hence, taking such time-varying behavior patterns into account for discovering effective influencers is essential. However, most of the prior studies only focused on the most recent information without considering the historical patterns of influencers.
In this paper, we propose _InfluencerRank_, a learning framework that discovers effective influencers in social media by learning historical behavioral patterns of influencers. For comprehensively representing the effectiveness of an influencer, we build a heterogeneous information network that consists of influencers, hashtags, user tags, and image objects used by influencers for each historical time period Yang et al. (2020); Zhang et al. (2019). To learn the complex posting behaviors, social networking, and post characteristics of each influencer, we apply graph convolutional networks (GCNs) Kipf and Welling (2016) with well-designed influencer features, thereby deriving the influencer representation at a certain period. Based on the influencer representations over different historical time periods, an attentive recurrent neural network is proposed to learn the sequential and temporal behaviors to derive an ultimate representation. Finally, a learning-to-rank framework ranks a list of influencers to discover the ones who are more effective than others.
We summarize our contributions as follows:
* To the best of our knowledge, this is the first attempt to rank influencers with their effectiveness by learning their historical behavioral patterns in social media marketing. We believe our model can be used for influencer recommendations that help companies to recruit a set of effective influencers to boost the advertising effect in social media.
* The _InfluencerRank_ uses the graph convolutional networks over general social media features to learn the posting behavior of the influencers as well as the characteristics of their posts, thus it can be applied to any social media to discover effective influencers. Besides, recurrent neural networks are also applied to model sequential and historical behaviors of influencers over time. We conduct experiments on a real-world dataset collected from Instagram Kim et al. (2020), one of the most popular social media for influencer marketing Nanji (2017). The results demonstrate that _InfluencerRank_ outperforms other existing methods for identifying effective influencers.
* Our analysis further reveals that the image object nodes have more impact on discovering effective influencers than other types of nodes since they can densely connect influencers thereby removing noises in the network. We also find that user reactions and visual perception of images are important features to find effective influencers.
* We evaluate our proposed model over groups of influencers with different numbers of followers, and highlight that _InfluencerRank_ shows effective and robust ranking performances across various groups of influencers.
## Related Work
### Influence Prediction in Social Networks
To find influencers in social networks, most studies rely on social media features to measure the influence. For example, the number of followers, posts, reposts, and mentions are well-known metrics to measure the influence of a user Bakshy et al. (2011); Subbian and Melville (2011). Based on the measures, Bakshy et al. (2011) use the regression tree model and Subbian and Melville (2011) aggregate rank results to rank influencers, respectively. Romero et al. (2011) propose the passivity of nodes to measure how likely the information is propagated in the social networks, and then apply the PageRank to rank the users. Liu et al. (2015) consider the time domain over the user trust network in the proposed framework to classify influencers into one of three categories: emerging influencers, holding influencers, and vanishing influencers. In addition to social network features, some studies propose to use machine learning with statistical features. Li et al. (2011) extract network-based, content-based, and user activeness-based statistical features, e.g., the number of followers and the length of posts, to predict the influence of users. Segev et al. (2018) use simple statistics of posts and users, e.g., the number of likes, comments, followers, and posts, to measure the user influence using a regression model. Some previous works, on the other hand, exploit graphical information. Zhang et al. (2015) exploit the social influence locality to predict retweet behaviors. Qiu et al. (2018) utilize mini-batches of sub-graphs and apply the attention mechanism to predict the influence of users on social networks. Chen et al. (2019) propose recurrent convolutional networks to consider the temporal effect on information cascade prediction. However, most previous works fail to consider temporal dynamics in the social relationships and characteristics of users.
### Graph Convolutional Recurrent Networks
Graph convolutional networks (GCNs) Kipf and Welling (2016) are a neural network architecture for graph-structured data. GCNs deploy spectral convolutional structures with localized first-order approximations so that the knowledge of both node features and graph structures can be leveraged. However, while real-world data that can be modeled as graphs dynamically changes over time, temporal information cannot be easily captured by GCNs. To learn the temporal dynamics of structural graphs, previous studies suggest combining GCNs and recurrent neural networks (RNNs). Seo et al. (2018) propose models that (i) stack up graphs to make RNN inputs and (ii) consider convolutions in RNNs, which can learn a sequence of structural information. They find that each model outperforms the other depending on applications such as video prediction and natural language modeling. Pareja et al. (2020) propose another approach to capture graph dynamics. Instead of using a sequence of graph embeddings as inputs of an RNN, they first use an RNN to acquire the knowledge of network parameter dynamics. This approach can benefit in a case where a node dynamically appears and disappears. This paper proposes to apply attentive recurrent networks over the temporal node representations by taking a heterogeneous network that consists of influencers, image objects, hashtags, and user tags over time. Our model can effectively learn temporal graph representations by estimating the importance of hidden states for certain time periods.
## Problem Statement
In this section, we formally define the effectiveness metric of an influencer and then formulate the problem of discovering effective influencers.
**Definition 1**.: _Engagement rate is a widely-used metric in influencer marketing that shows how much audiences actively engage with an influencer De Veirman, Cauberghe, and Hudders (2017); Kim et al. (2021); Kim and Han (2020); Lou and Yuan (2019); Comcovich (2018). Given an influencer \(u\), the engagement rate of the influencer at time \(t\) is calculated as follows:_
\[E_{u}^{t}=\frac{l_{u}^{t}}{f_{u}} \tag{1}\]
_where \(f_{u}\) is the number of followers who follow the influencer \(u\) and \(l_{u}^{t}\) is the average number of likes on content posted by the influencer \(u\) at timestamp \(t\)._
Based on the definition of influencer effectiveness, we introduce the influencer ranking problem. Let \(U\) be the set of influencers. For each timestamp \(t\), we suppose that an influencer \(u\) has published a set of posts \(P_{u}^{t}\). Given the set of influencers \(U\) and their posts published until time \(k\), \(\{P_{u}^{t}\mid 1\leq t\leq k\}\), the goal of this work is to discover influencers with high engagement rates at time \(k\) by ranking all influencers \(u\in U\) so that \(E_{u_{i}}^{k}\) is greater than \(E_{u_{j}}^{k}\) if the influencer \(u_{i}\) is ranked higher than the influencer \(u_{j}\).
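Definition 1 and the ranking objective translate directly into code. The following is a minimal Python sketch with our own variable names (not part of the model itself):

```python
def engagement_rate(avg_likes_t, followers):
    """E_u^t = average likes at time t divided by follower count (Eq. 1)."""
    return avg_likes_t / followers

def rank_influencers(influencers, avg_likes, followers, t):
    """Return influencers sorted by descending engagement rate at time t.
    avg_likes[u][t] and followers[u] are assumed to be precomputed."""
    return sorted(influencers,
                  key=lambda u: engagement_rate(avg_likes[u][t], followers[u]),
                  reverse=True)
```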
## Influencer Ranking Model Framework
In this section, we propose InfluencerRank that learns the temporal dynamics of the engagement rates of influencers to automatically discover highly effective influencers. Figure 2 shows the overall framework of the proposed InfluencerRank. The framework takes a series of influencer social networks as input, where each network is composed of influencers and different entities, including but not limited to image objects, hashtags, and other users in social media. The graph convolutional networks (GCNs) are then applied to the input social networks to derive appropriate node representations that capture social relationships and posting characteristics of influencers at a certain time. The GCN-encoded representations across different times are then fed into a recurrent neural network to learn from the sequence of the node representations. The attention mechanism is then applied to the whole sequence of representations to finally derive the effectiveness scores of candidate influencers and rank them for discovering effective influencers.
### Heterogeneous Information Networks
To represent the dynamics of the engagement rates over a sequence of time periods, we build \(k\) heterogeneous networks \(\mathbb{G}=\{G_{1},G_{2},\cdots,G_{k}\}\) based on the influencers and other relevant entities. Hence, \(G_{t}\) can further characterize the relationships of influencers and their posting behaviors at time \(t\).
\begin{table}
\begin{tabular}{|c|p{142.3pt}|} \hline Notation & Description \\ \hline \hline \(E_{u}^{t}\) & the engagement rate of an influencer \(u\) at time \(t\). \\ \(l_{u}^{t}\) & the average number of engagements on contents posted by the influencer \(u\) at time \(t\). \\ \(f_{u}^{t}\) & the number of followers for an influencer \(u\) at time \(t\). \\ \hline \(U\) & the set of influencers. \\ \(P_{u}^{t}\) & the posts published by the influencer \(u\) at time \(t\). \\ \(G_{t}\) & the heterogeneous network for time \(t\) with the node features \(X_{t}\) and the adjacency matrix \(A_{t}\). \\ \hline \(\mathbf{\hat{A}_{t}}\) & Normalized adjacency matrix transformed from \(\mathbf{A_{t}}\). \\ \(d\) & the number of dimensions for embedded node features. \\ \(\mathbf{D}\) & the diagonal degree matrix of \(\mathbf{A_{t}}\). \\ \(r\) & the number of hidden dimensions in GCNs. \\ \(\mathbf{F^{(i)}}\) & the outputs of the \(i\)-th GCN layer. \\ \(\mathbf{W^{(i)}}\) & the weight matrix between \(\mathbf{F^{(i)}}\) and \(\mathbf{F^{(i+1)}}\). \\ \(\mathbf{R_{t}}\) & the GCN-encoded representation for time \(t\). \\ \hline \(\mathbf{H_{t}}\) & the hidden states in the RNN for time \(t\). \\ \(\mathbf{S}\) & the list of hidden states in the RNN over time. \\ \(\tau_{t}\) & the importance weight for \(\mathbf{H_{t}}\). \\ \(\mathcal{F}_{a}(\cdot)\) & the fully-connected layer for deriving \(\tau_{t}\). \\ \(\alpha_{t}\) & the normalized importance weight for \(\mathbf{H_{t}}\). \\ \(\mathbf{c_{u}}\) & the final representation of the influencer \(u\). \\ \hline \(\mathbf{\hat{y}_{u}}\) & the predicted engagement score for the influencer \(u\). \\ \(\mathcal{F}(\cdot)\) & the fully-connected layers for inferring \(\mathbf{\hat{y}_{u}}\). \\ \hline \end{tabular}
\end{table}
Table 1: Summary of notations and their descriptions.
#### Heterogeneous Nodes and Embedded Features
We build a heterogeneous network \(G_{t}\) for time \(t\) with four different types of nodes, including influencers, hashtags, image objects, and other users in social media. Given an influencer \(u\), we extract all of the hashtags \(\{h_{i}\}_{i=1}^{a}\in H\) and mentioned users (i.e., user tags) \(\{v_{j}\}_{j=1}^{b}\in V\) from posts \(P_{u}^{t}\), where \(a\) and \(b\) indicate the number of extracted hashtags and mentioned users, respectively. Note that the mentioned users can be either influencers, brands, or other normal users. In addition, the categories of objects shown in the posted images \(\{o_{k}\}_{k=1}^{c}\in O\) are also considered as nodes. Since each type of node has unique features, we denote the node features of influencers, mentioned users, hashtags, and object categories in images as \(\mathbf{X_{t}^{U}}\), \(\mathbf{X_{t}^{V}}\), \(\mathbf{X_{t}^{H}}\), and \(\mathbf{X_{t}^{O}}\), respectively. We then represent embedded features of each node as \(\mathbf{X_{t}}=[\mathbf{X_{t}^{U}};\mathbf{X_{t}^{V}};\mathbf{X_{t}^{H}};\mathbf{X_{t}^{O}}]\in \mathbb{R}^{N\times d}\), where \(N\) is the total number of all four types of nodes and \(d\) is the number of embedded node features.
#### Edge Construction and Adjacency Matrix
The edges in the heterogeneous network indicate the interactions between entities behind nodes. For example, if an influencer mentioned the hashtag #makeup and posted an image of cosmetic products, the influencer node will be connected to the node of the #makeup hashtag and the node of the cosmetic image object. Given a timestamp \(t\), we make a sparse adjacency matrix \(\mathbf{A_{t}}\in\mathbb{R}^{N\times N}\), where \(A_{ij}^{t}=1\) indicates a connection between the \(i\)-th and \(j\)-th nodes.
Finally, a set of \(k\) heterogeneous networks \(\mathbb{G}\) with the sets of node features and adjacency matrices can be constructed as follows:
\[\mathbb{G}=\{G_{1},G_{2},\cdots,G_{k}\},\]
where \(G_{t}=\mathbf{(X_{t},A_{t})}\) indicates both the node embedded features \(\mathbf{X_{t}}\) and the heterogeneous network structure \(A_{t}\) at time \(t\).
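A minimal sketch of the adjacency construction, assuming SciPy sparse matrices and pre-assigned node indices (our assumptions, not the authors' released code):

```python
import numpy as np
from scipy.sparse import coo_matrix

def build_adjacency(num_nodes, interactions):
    """Build the symmetric, binary sparse adjacency A_t from (i, j) pairs,
    e.g. (influencer_idx, hashtag_idx) for each observed interaction."""
    rows, cols = zip(*interactions)
    data = np.ones(len(interactions))
    a = coo_matrix((data, (rows, cols)), shape=(num_nodes, num_nodes))
    return ((a + a.T) > 0).astype(np.float32)  # undirected, 0/1 entries
```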
### Graph Convolutional Networks
For the heterogeneous network \(G_{t}\) of each time \(t\), our proposed InfluencerRank applies Graph Convolutional Networks (GCNs) [10] to generate node representations over time. GCNs first generate a normalized adjacency matrix \(\mathbf{\hat{A_{t}}}\) by transforming the adjacency matrix \(\mathbf{A_{t}}\) with the diagonal degree matrix \(D\) as \(\mathbf{\hat{A_{t}}}=\mathbf{D^{-\frac{1}{2}}}A_{t}\mathbf{D^{-\frac{1}{2}}}\). GCNs then stack multiple GCN layers where each layer takes outputs of the previous layer and performs nonlinear transformation to propagate information through different layers. The \(i\)-th layer in GCNs then outputs \(\mathbf{F^{(i)}}\in\mathbb{R}^{N\times r}\) as follows:
\[\mathbf{F^{(i)}}=\sigma\left(\mathbf{\hat{A_{t}}}\mathbf{F^{(i-1)}}\mathbf{W^{(i-1)}}\right),\]
where \(r\) is the number of hidden dimensions in GCNs, \(\mathbf{F^{(i-1)}}\) is the outputs of the previous layer, \(\mathbf{W^{(i-1)}}\) is a matrix of trainable weights, and \(\sigma(\cdot)\) is a nonlinear activation function. We use \(\mathbf{X_{t}}\) for \(\mathbf{F^{(0)}}\) as the input of the first GCN layer. The final output of the GCNs \(\mathbf{R_{t}}\) at time \(t\) can be represented as follows:
\[\mathbf{R_{t}}=\left[\mathbf{F^{(1)}},\mathbf{F^{(2)}},\dots,\mathbf{F^{(e)}}\right],\]
where \(e\) is the number of layers in GCNs.
Finally, we can obtain a sequence of GCN-encoded node representations, \([\mathbf{R_{1}},\dots,\mathbf{R_{k}}]\), to implicitly represent the knowledge about influencers over time.
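For illustration, the layer stack above can be sketched in PyTorch as follows (our sketch, not the authors' code; the ReLU activation and random weight initialisation are assumptions):

```python
import torch

def normalize_adj(a):
    """A_hat = D^{-1/2} A D^{-1/2}, as in the text."""
    deg = a.sum(dim=1)
    d_inv_sqrt = torch.where(deg > 0, deg.pow(-0.5), torch.zeros_like(deg))
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class GCNStack(torch.nn.Module):
    """e GCN layers; returns the concatenation R_t = [F^(1), ..., F^(e)]."""
    def __init__(self, d, r, e):
        super().__init__()
        dims = [d] + [r] * e
        self.weights = torch.nn.ParameterList(
            torch.nn.Parameter(torch.randn(dims[i], dims[i + 1]) * 0.01)
            for i in range(e))

    def forward(self, x, a_hat):
        outputs, f = [], x
        for w in self.weights:
            f = torch.relu(a_hat @ f @ w)  # F^(i) = sigma(A_hat F^(i-1) W^(i-1))
            outputs.append(f)
        return torch.cat(outputs, dim=1)   # shape [N, r * e]
```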
### Attentive Recurrent Neural Networks
#### Learning Graph Dynamics
Based on the sequence of GCN-encoded node representations, \([\mathbf{R_{1}},\dots,\mathbf{R_{k}}]\), InfluencerRank applies Recurrent Neural Networks (RNNs) to the model framework. More specifically, we employ Gated Recurrent Units (GRUs) [13], which use an update gate and a reset gate inside the unit to carry information flow over many time periods, to capture long-term temporal dependencies from the heterogeneous networks. Each GRU takes hidden states from the previous unit and the GCN representations as input and then outputs hidden states of the current time. More formally, the hidden states at time \(t\), \(H_{t}\), are computed as follows:
\[H_{t}=(1-z_{t})H_{t-1}+z_{t}\tilde{H_{t}} \tag{2}\]
Figure 2: The overall framework of the proposed _InfluencerRank_.
where \(z_{t}\) is an update gate at time \(t\) and \(\tilde{H_{t}}\) is the candidate state at time \(t\). The candidate state is updated as follows:
\[\tilde{H_{t}}=tanh(W\cdot\left[r_{t}\odot H_{t-1},R_{t}\right]) \tag{3}\]
where \(r_{t}\) is a reset gate at time \(t\), \(\odot\) is an element-wise multiplication, and \(R_{t}\) is the GCN representations at time \(t\). Finally, InfluencerRank obtains the whole states of GRUs as follows:
\[\mathbf{S}=\left[\mathbf{H_{1}},\mathbf{H_{2}},\ldots,\mathbf{H_{k}}\right].\]
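The GRU pass over the stacked representations can be written in a few lines. A minimal sketch, assuming PyTorch and that the node dimension serves as the GRU batch dimension (our assumptions):

```python
import torch

def encode_over_time(reps, hidden_dim):
    """Run a GRU over the GCN outputs.
    reps: [k, num_nodes, rep_dim] -- R_1..R_k stacked along the time axis.
    Returns S = [H_1, ..., H_k] with shape [k, num_nodes, hidden_dim]."""
    gru = torch.nn.GRU(input_size=reps.size(-1), hidden_size=hidden_dim)
    states, _ = gru(reps)  # states[t] corresponds to H_t in Eq. (2)
    return states
```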
#### Attention over Time
To acquire the final influencer representations, InfluencerRank applies the attention mechanism [1] to the whole state embeddings derived from GRUs, \(\mathbf{S}\). The attention mechanism allows InfluencerRank to learn the dynamics of the engagement rates by taking into account the importance of time periods.
For each timestep \(t\), InfluencerRank estimates the importance weight of the corresponding state embedding by applying a projection as:
\[\tau_{t}=\tanh\left(\mathcal{F}_{a}\left(\mathbf{H_{t}}\right)\right) \tag{4}\]
where \(\mathcal{F}_{a}(\cdot)\) is a fully-connected layer; \(tanh(\cdot)\) is the activation function. We then compute the weights of each timestep by using a softmax function as:
\[\alpha_{t}=\frac{\exp(\tau_{t})}{\sum_{i=1}^{k}\exp(\tau_{i})} \tag{5}\]
Finally, InfluencerRank derives the ultimate representation of candidate influencers by using the weighted sum as follows:
\[\mathbf{c}=\sum_{i=1}^{k}\alpha_{i}\cdot\mathbf{H_{i}} \tag{6}\]
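Equations (4)-(6) amount to a small attention module. A minimal PyTorch sketch (ours; the single-output linear projection for \(\mathcal{F}_{a}\) is an assumption):

```python
import torch

class TemporalAttention(torch.nn.Module):
    """Score each H_t (Eq. 4), softmax over time (Eq. 5), weighted sum (Eq. 6)."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = torch.nn.Linear(hidden_dim, 1)   # plays the role of F_a

    def forward(self, states):                # states: [k, num_nodes, hidden]
        tau = torch.tanh(self.proj(states))   # [k, num_nodes, 1]
        alpha = torch.softmax(tau, dim=0)     # normalise over the time axis
        return (alpha * states).sum(dim=0)    # c: [num_nodes, hidden]
```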
### Engagement Score Estimation
For an influencer \(u\), InfluencerRank takes the corresponding ultimate representation \(\mathbf{c_{u}}\) as the input and then predicts an engagement score \(\hat{y_{u}}\) that is proportional to the engagement rate \(E_{u}^{k}\) as follows:
\[\mathbf{\hat{y_{u}}}=\mathcal{F}_{c}\left(\text{ReLU}\left(\mathcal{F}_{b}\left( \mathbf{c_{u}}\right)\right)\right) \tag{7}\]
where a non-linear transformation is carried out in a fully-connected layer \(\mathcal{F}_{b}(\cdot)\) with the ReLU activation function and the engagement rate is estimated in another fully-connected layer \(\mathcal{F}_{c}(\cdot)\).
#### List-wise Ranking and Optimization
InfluencerRank treats the task as a ranking problem and optimizes the ranking performance with a list-wise learning-to-rank framework [13]. Suppose \(Z\) is the set of features for influencers to be ranked; \(Y\) is the space of all possible rankings. During training, we sample \(m\) labeled influencers from the whole training space as an i.i.d. candidate ranked list \(S=\left\{\left(Z_{i},\mathbf{y_{i}}\right)\right\}_{i=1}^{m}\sim P_{ZY}\), where \(P_{ZY}\) is the unknown target joint probability distribution of \(Z\) and \(Y\). Therefore, the corresponding loss \(\mathcal{L}_{S}\) can be considered as:
\[\mathcal{L}_{S}(\mathbf{\hat{y}})=\frac{1}{m}\sum_{i=1}^{m}\mathbf{l}(\mathbf{\hat{y}}( \mathbf{Z_{i}}),\mathbf{y_{i}}) \tag{8}\]
where \(\mathbf{l}(\mathbf{\hat{y}}(\mathbf{Z_{i}}),\mathbf{y})\) is the 0-1 loss between \(\mathbf{\hat{y}}(\mathbf{Z_{i}})\) and the rank in \(\mathbf{y}\); \(\mathbf{y_{i}}\) denotes the ground-truth ranking such that
\[\mathbf{l}(\mathbf{\hat{y}}(\mathbf{Z_{i}}),\mathbf{y})=\left\{\begin{array}{ll}1&,\;\text{if}\;\mathbf{\hat{y}}(\mathbf{Z_{i}})\neq\mathbf{y}\\ 0&,\;\text{if}\;\mathbf{\hat{y}}(\mathbf{Z_{i}})=\mathbf{y}\end{array}\right. \tag{9}\]
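A minimal sketch of the scoring head (Eq. 7) together with a differentiable list-wise surrogate is given below. The ListNet-style top-one cross-entropy surrogate for the 0-1 loss in Eqs. (8)-(9) is our assumption; the paper does not spell out the optimization surrogate:

```python
import torch

class ScoreHead(torch.nn.Module):
    """Two fully-connected layers (Eq. 7) mapping c_u to a scalar score."""
    def __init__(self, hidden_dim, mid_dim):
        super().__init__()
        self.fb = torch.nn.Linear(hidden_dim, mid_dim)
        self.fc = torch.nn.Linear(mid_dim, 1)

    def forward(self, c):
        return self.fc(torch.relu(self.fb(c))).squeeze(-1)

def listnet_loss(pred_scores, true_scores):
    """Cross-entropy between top-one probability distributions of predicted
    and ground-truth scores (our assumed surrogate for the 0-1 ranking loss)."""
    p_true = torch.softmax(true_scores, dim=-1)
    log_p_pred = torch.log_softmax(pred_scores, dim=-1)
    return -(p_true * log_p_pred).sum(dim=-1).mean()
```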
### Node Features
In this subsection, we describe node features in the heterogeneous network. To understand the relationship between the engagement rate of an influencer and the characteristics of the corresponding influencer, we introduce six types of node features, including node type, profile, image, text, posting, and reaction features. Note that most of the features are only applicable to influencer nodes, while the remaining nodes (e.g., hashtags, image objects) hold zeros for the inapplicable features. For the feature engineering, we deploy the average, median, minimum, and maximum values for the features that need to be aggregated with statistics (a small aggregation sketch follows the list below).
Here, we briefly introduce the six categories of features used in this paper as follows.
* **Node type features.** The one-hot coded feature that indicates one of the four node types, including influencers, other users, hashtags, and image objects.
* **Profile features.** For each influencer node, we exploit the numbers of followers, followees, and posts which are the most commonly used metrics to measure user influence in social networks [10]. Additionally, we consider a category of influencers from eight influencer categories defined in the previous study [13].
* **Image features.** The previous study [14] showed that the characteristics of images on social media posts affect its popularity. In addition to the image objects which are considered as nodes in the heterogeneous network, we add the attributes of visual perception of the images to understand how influencers create images. We compute the brightness, colorfulness [12], and color temperature of the posted images based on their RGB values.
* **Text features.** To understand how textual usage of influencers affects the engagement rate, we retrieve various text features. More specifically, we use the numbers of hashtags, user tags, and emojis that are widely used functions on social media, and the length of captions which can represent how much detailed information is in the caption [15]. Moreover, we also calculate the sentiment scores of captions to learn how positive or negative emotions are carried through the captions by using VADER [14].
* **Posting features.** The features in this category can provide information about how influencers use social media from various aspects. We first exploit the portion of the number of posts in one of the ten post categories [13] to the total number of posts to understand the posting behavior of influencers. In addition to the post category rate feature, we also examine the portion of the number of advertising posts to the total posts published by an influencer; posting too many paid advertisements
can show negative impacts on the popularity Evans et al. (2017); Yang et al. (2019). We also consider the feedback rate and posting interval, which are the measures of the interaction with their followers and activeness, respectively. The feedback rate is calculated as the ratio of the number of posts that contain the influencers' responses to the user comments to the number of total posts. The posting interval is the average time gap between posts that are in chronological order.
* **Reaction features.** We use the user comments to generate the user reaction feature. Specifically, we compute the sentiment scores of comments that are written by audiences of the influencers' posts. Note that we do not consider the number of likes and comments as node features since it can directly imply the engagement rates of influencers.
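As referenced above, per-post quantities (e.g., caption length, image brightness, comment sentiment) are summarised into four statistics per influencer. A minimal sketch (ours):

```python
import numpy as np

def aggregate(values):
    """Summarise a per-post feature into the four statistics used as node
    features: average, median, minimum, and maximum."""
    v = np.asarray(values, dtype=float)
    return {"mean": v.mean(), "median": np.median(v),
            "min": v.min(), "max": v.max()}
```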
## Experiments
In this section, we conduct experiments to evaluate the performance of InfluencerRank compared with other baseline methods. We also analyze the experimental results to understand the importance of each feature to find influencers with high engagement rates.
### Experimental Dataset
#### Dataset Construction
To evaluate the proposed _InfluencerRank_, we use the Instagram influencer dataset Kim et al. (2020). The dataset includes profiles of influencers and their posts, including both images and all meta-data. We only keep the posts that were published between January 1st, 2017 and December 31st, 2017 to build temporal influencer networks. As a result, the dataset consists of 18,397 influencers and 2,952,075 posts. For the experiments, we split the dataset into the training dataset, which contains posts from January to November, and the testing dataset, which contains posts published in December.
#### Heterogeneous Network Construction
To build the temporal heterogeneous networks, we first divide the whole dataset into 12 subsets by one-month intervals. Note that we conduct experiments to analyze ranking performances across different temporal window sizes, thereby determining the proper time interval. We then extract all hashtags and user tags from the post captions and detect objects from the images. As a consequence, 1,151,082 unique hashtag nodes, 532,468 other user nodes, and 1,000 image object nodes are found across the networks and connected to the corresponding influencer nodes. To further reduce noise in the dataset, we remove every auxiliary node (i.e., hashtags, other users, and image objects) with only a single edge, while edges with normalized frequencies less than 0.01 are also discarded. After the pruning process, 18,397 influencers, 20,744 other users, 67,695 hashtags, and 996 image objects are in the networks (i.e., 107,832 nodes), and a total of 15,090,225 edges remain across the networks.
### Experimental Settings
#### Evaluation Metrics
Based on the definition, we first compute the engagement rates for all influencers across all timesteps as the ground truths. Note that the average engagement rate is 0.038 and the median engagement rate is 0.029. We utilize two metrics to evaluate the performance of ranking influencers (a small computation sketch follows the two definitions below).
* _Normalized Discounted Cumulative Gain_ (NDCG) Jarvelin and Kekalainen (2017): First, we divide all of the influencers into six groups with different thresholds on the engagement rates and relevance levels from 0 to 5. Table 2 further shows the statistics of influencers in the dataset across different relevance levels and criteria for the engagement rates. We then treat the relevance levels as ground truths to evaluate the ranking performance with the metric of NDCG.
* _Rank-Biased Precision_ (RBP) Moffat and Zobel (2008): To avoid losing valuable information while converting the engagement rates to the six relevance levels, we directly use the engagement rates with the metric of RBP. We set the probability \(p\) as 0.95 to measure rank quality.
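Both metrics can be computed directly from a predicted ordering. A minimal Python sketch (our illustration, not the authors' evaluation code):

```python
import numpy as np

def ndcg_at_k(relevance, k):
    """relevance: relevance levels in predicted rank order (best first)."""
    rel = np.asarray(relevance, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = (rel * discounts).sum()
    ideal = np.sort(np.asarray(relevance, dtype=float))[::-1][:k]
    idcg = (ideal * discounts[:ideal.size]).sum()
    return dcg / idcg if idcg > 0 else 0.0

def rbp(engagement_rates, p=0.95):
    """Rank-biased precision, using engagement rates in predicted order
    directly as the per-rank utility, as described above."""
    r = np.asarray(engagement_rates, dtype=float)
    return (1 - p) * (r * p ** np.arange(r.size)).sum()
```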
* _Ranking baselines_: We consider two ranking models, _ListNet_ (LN) [10] and _LambdaMART_ (LM) [22]. The ranking baseline methods use the same features as our proposed model.
* _Model baselines_: The baseline methods in this category implement graph neural network (GNN) based learning models. Note that the applied features of _InfluencerRank_ and the graph baselines are identical so that we can fairly evaluate the model novelty and capability for the ranking task. We implement GCRN [10], DeepInf [11], CasCN [1], and EGCN [1].
### Experimental Results
Table 3 shows RBP, NDCG scores, and training time of InfluencerRank and the nine baseline methods for discovering influencers with high engagement rates. All three methods in the user baselines, which exploit the social media features, obtain low ranking results. This suggests that only considering social media features is insufficient to discover effective influencers. The ranking baseline methods, on the other hand, show better ranking performance compared to the user baseline methods since they use our proposed features. This demonstrates that our proposed features are very useful to capture the characteristics of influencers, thereby discovering effective influencers. Next, most of the graph baseline methods outperform the user baselines and ranking baselines. More specifically, among the graph baseline methods, GCRN [10] shows limited ranking performance improvement since it only resorts to temporal-spatial structures of graphs without taking into account the node features. DeepInf [11] demonstrates better ranking performance than GCRN by exploiting the graph convolutional networks that take advantage of the network structures of different entities, while the features in different aspects provide sufficient knowledge to describe both influencers and other entities in the graph. Both CasCN [1] and EGCN [1] further improve performance by applying recurrent neural networks to adjacency matrices of temporal graphs. This suggests that learning the dynamics of graph structures with node features over time is beneficial in discovering effective influencers.
Finally, our proposed approach, InfluencerRank, outperforms all of the baseline methods. This is because our model derives informative influencer representations over time by using the graph convolutional networks and the attentive neural network, and effectively learns the dynamics of influencer characteristics and engagement rates. The results also show that InfluencerRank is able to learn the latent influencer representations in a reasonable amount of training time compared to the other baseline methods. CasCN and EGCN, on the other hand, have significantly longer training time than InfluencerRank. This is probably because the proposed framework successfully learns the importance of hidden states in the RNNs by applying attention while other baseline methods combine RNNs with GCNs without taking the importance of each temporal graph into account.
## Analysis and Discussions
In this section, we conduct six analyses to understand the importance of (i) the temporal window size, (ii) the temporal information, (iii) the model components, (iv) the type of RNNs, (v) the heterogeneous networks, and (vi) the input features. We then evaluate the performance of InfluencerRank on various sets of influencers which are grouped by the size of audiences.
### Analysis on Temporal Window Size
We first investigate the effect of different temporal window sizes for heterogeneous network construction. To that end, we split the training dataset, which contains posts over an 11-month period, into sub-datasets using five different temporal window sizes: 1 week, 2 weeks, 1 month, 2 months, and 3 months. Note that we use the same testing dataset across the five different window sizes for consistent performance comparison. The RBP and NDCG@200 scores of InfluencerRank over the different temporal window sizes are shown in Figure 3. We find that the model trained with the networks divided by 1-month intervals shows the best ranking performance, whereas the model trained with the 1-week temporal window has the lowest ranking scores. InfluencerRank loses 5.7% performance on NDCG when the model is trained with the 1-week window size compared to the model trained with the 1-month window size. This suggests that the heterogeneous networks of the models, which are trained with temporal window sizes shorter than 1 month, have insufficient information to learn the dynamics of engagement
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{_RBP_} & \multicolumn{6}{c|}{_NDCG@K_} & Time \\ \cline{3-8} & & 1 & 10 & 50 & 100 & 200 & (sec) \\ \hline \hline UP & 0.025 & 0.800 & 0.436 & 0.413 & 0.406 & 0.368 & 347 \\ PP & 0.028 & 1.000 & 0.519 & 0.465 & 0.442 & 0.425 & **295** \\ UA & 0.024 & 0.800 & 0.518 & 0.494 & 0.438 & 0.436 & 330 \\ \hline LN & 0.026 & 1.000 & 0.610 & 0.511 & 0.465 & 0.441 & 481 \\ LM & 0.031 & 1.000 & 0.648 & 0.546 & 0.493 & 0.477 & 563 \\ \hline _GCRN_ & 0.028 & 1.000 & 0.629 & 0.557 & 0.513 & 0.467 & 612 \\ _DeepInf_ & 0.031 & 1.000 & 0.697 & 0.567 & 0.549 & 0.512 & 525 \\ _CasCN_ & 0.033 & 1.000 & 0.751 & 0.645 & 0.572 & 0.543 & 1109 \\ _EGCN_ & 0.038 & 1.000 & 0.812 & 0.679 & 0.616 & 0.577 & 1483 \\ \hline _InfluencerRank_ & **0.043** & **1.000** & **0.864** & **0.720** & **0.661** & **0.614** & 648 \\ \hline \end{tabular}
\end{table}
Table 3: _RBP_, _NDCG@K_ scores, and training time of the proposed InfluencerRank and the nine baseline methods.
rates. We also observe that the ranking performance gradually decreases as we use longer temporal window sizes. This implies that a model trained with a large window size fails to take into account the variance of engagements, since it uses the average like counts of all posts in each sub-dataset. The analysis results also demonstrate that learning the temporal dynamics of the engagement rates is very important to find effective influencers.
### Analysis on Temporal Information
We next evaluate the ranking performance of the proposed model by varying the number of temporal input networks used for training. Figure 4 shows RBP and NDCG@200 scores of InfluencerRank over the number of temporal graphs. Note that the model uses the most recent temporal graphs. For example, a model trained with two temporal graphs learns the two networks from October and November in our dataset. We observe that the performance significantly drops when InfluencerRank obtains insufficient historical information. InfluencerRank loses 15% performance on NDCG if the model uses only one graph compared to the model that considers all temporal graphs. The result confirms that only considering the most recent network degrades the performance since the engagement rates of influencers vary over time. We also find that as the number of temporal networks increases, the marginal performance gain gradually diminishes.
### Analysis on Model Components
The proposed model consists of three major components: the graph convolutional networks, the recurrent neural networks, and the attention network. We conduct an ablation study by excluding each component from the model framework to understand the importance of the model components. Figure 5 shows the performance losses in NDCG@200 scores over the three model components. We find that the model which excludes the RNN component has a significant performance loss compared to the full model. This suggests that disregarding sequential temporal information leads to performance degradation since the engagement rates of an influencer change over time. The model that discards the GCN component also shows a large performance loss. This is because the model fails to learn structural information with embedded node features. This demonstrates that learning the social relationships of influencers with other users, tags, and image objects plays an important role in discovering effective influencers. We observe that the attention component has relatively less impact on the performance than the other model components, whilst it still enhances the model by considering the importance of temporal graph embeddings.
### Analysis on Recurrent Neural Networks
In the proposed InfluencerRank framework, we employ gated recurrent units (GRUs) [13] for the recurrent neural networks. However, the GRU can be replaced with a long short-term memory (LSTM) [1]. To decide which recurrent architecture to employ, we train InfluencerRank with GRU and LSTM. Figure 6 shows NDCG@200 scores and training times of InfluencerRank with the two RNN architectures on different numbers of temporal graphs. We observe no significant difference in the NDCG scores of models using GRU and LSTM, but the model with GRU tends to have slightly higher scores. The results also show that InfluencerRank with GRU has shorter training times than with LSTM across the different numbers of temporal graphs. More specifically, the time difference gradually increases as the number
Figure 4: Ranking performance on different lengths of timestamps. InfluencerRank achieves higher ranking scores with longer history.
Figure 5: Performance losses after removing each of the model components. The RNNs are the most important component to discover effective influencers.
Figure 3: Ranking performance over different temporal window sizes. InfluencerRank trained with the 1-month window size shows the best performance.
of temporal graphs for training increases. Note that GRU is 1% faster than LSTM when the model only takes one temporal graph and 3.4% faster when the model uses 11 graphs for training. GRU shows better performance than LSTM in our task, probably because GRU has a simpler structure than LSTM and also benefits from the short input sequence length.
### Analysis on Heterogeneous Networks
We study the importance of the proposed heterogeneous network to find effective influencers. To understand the importance of each individual auxiliary node type, we train InfluencerRank on the network without that type of auxiliary node. Table 4 shows the RBP and the NDCG scores of InfluencerRank with different types of networks. The results show that the model trained with all types of nodes achieves higher RBP and NDCG scores than the other models that exclude a type of auxiliary node. This confirms that the graphical structure in InfluencerRank helps improve performance in finding effective influencers. We also observe that the NDCG scores of the model without the image object nodes are lower than those of the models excluding hashtag and other user nodes. Note that excluding the image object nodes drops the performance of NDCG@200 by 4.1%, whereas excluding hashtag and other user nodes only drops the score by 2.8% and 1.9%, respectively. This is probably because each image object node can densely connect a large number of similar influencers together, as it has a greater number of edges than a hashtag node or a user node.
### Analysis on Node Features
The benefit of using GCNs comes from considering network structure information together with node features. To understand the importance of node features, we first evaluate the performance of the model that excludes all node feature categories. The model without the node features significantly drops the ranking quality; the loss in NDCG@200 of the model without node features is 21.99%.
We then investigate the performance of InfluencerRank with variant sets of node features to study the importance of each category of the node features. Figure 7 shows the performance loss in NDCG scores of the models trained with the node features excluding one particular category, relative to the full model, as a leave-one-out analysis. The results reveal that the reaction feature category, which contains the sentiment scores of user comments on the posts, is more important than other categories to identify effective influencers. This indicates that the audience may show distinct reactions to influencers with high engagement rates. The image category, which includes the visual perception of images (e.g., brightness, colorfulness), also has higher loss values than other node feature categories. This suggests that influencers with high engagement rates may have different visual characteristics from other influencers. On the other hand, the text category, including the number of hashtags, user tags, and emojis in a caption, and the sentiment scores of the caption, has the least impact in discovering effective influencers. Although the statistical features that represent textual characteristics of influencers' posts have less impact than other features, InfluencerRank can still improve the ranking performance by incorporating hashtags and user tags into the network structure.
### Influencer Follower Size
In the influencer marketing industry, influencers are often divided into subgroups by the number of followers since it directly refers to the size of potential customers and hiring
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|} \hline \multirow{2}{*}{Node Type} & \multirow{2}{*}{_RBP_} & \multicolumn{6}{c|}{_NDCG@K_} \\ \cline{3-7} & & 1 & 10 & 50 & 100 & 200 \\ \hline \hline All nodes & 0.043 & 1.000 & 0.864 & 0.720 & 0.661 & 0.614 \\ \hline \(\mathbf{U},\mathbf{V},\mathbf{H}\) & 0.039 & 1.000 & 0.848 & 0.704 & 0.648 & 0.589 \\ \hline \(\mathbf{U},\mathbf{H},\mathbf{O}\) & 0.040 & 1.000 & 0.863 & 0.716 & 0.654 & 0.602 \\ \hline \(\mathbf{U},\mathbf{V},\mathbf{O}\) & 0.037 & 1.000 & 0.861 & 0.708 & 0.657 & 0.597 \\ \hline \end{tabular}
\end{table}
Table 4: Rank evaluation on the different network structures. Note that \(\mathbf{U}\), \(\mathbf{V}\), \(\mathbf{H}\), and \(\mathbf{O}\) represent influencer, other mentioned user, hashtag, and image object nodes in the network, respectively.
Figure 6: NDCG scores and training times of InfluencerRank trained with GRU and LSTM on different number of temporal graphs. InfluencerRank with GRU tends to have better ranking performance and shorter training time than the model with LSTM.
Figure 7: Performance losses of NDCG@200 over node feature categories. The image features which represent visual perception and the reaction features which include sentiment scores of user comments have more impact on the effective influencer discovery than other types of features.
cost [3]. For example, companies with a sufficient marketing budget can hire influencers who are followed by millions of people, while small retailers may collaborate with influencers with a small number of followers. Therefore, we evaluate the performance of InfluencerRank over groups of influencers with different sizes of followers. Although there are no standard criteria to classify influencers based on the number of followers, we utilize the following thresholds, which are generally accepted numbers, to divide the influencers into three groups1 (a minimal grouping sketch follows the footnote). Influencers who are followed by less than 20,000 followers are classified as the _Micro influencers_. The _Mid-level influencers_ have followers between 20,000 and 100,000, and _Macro influencers_ have more than 100,000 followers. In our dataset, around 30% of influencers are micro-influencers, 45% are mid-level influencers, and the remaining 25% are macro-influencers. To evaluate the performance under the same conditions, we randomly select multiple sets of 1,000 influencers from each category and run the experiment 10 times.
Footnote 1: [http://www.mattr.co/pros-cons-micro-macro-mid-level-influencers/](http://www.mattr.co/pros-cons-micro-macro-mid-level-influencers/)
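The grouping amounts to simple thresholding on the follower count:

```python
def follower_group(num_followers):
    """Micro (<20k), mid-level (20k-100k), and macro (>100k) influencers."""
    if num_followers < 20_000:
        return "micro"
    if num_followers <= 100_000:
        return "mid-level"
    return "macro"
```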
Figure 8 shows the average NDCG scores of _InfluencerRank_ and four baseline methods, including GCRN [3], DeepInf [14], CasCN [1], and EGCN [1] over the micro, mid-level, and macro-influencers. The results show that the proposed model has robust performance to discover effective influencers in the groups of all ranges of followers compared to the baseline methods. More specifically, DeepInf [14] fails to discover effective micro-influencers. This is probably because DeepInf disregards the temporal information which is critical to find micro-influencers who have relatively large variance on their features and engagement rates over time compared to macro-influencers who are matured. On the other hand, our proposed model can accurately find highly effective micro-influencers since their unique features are captured by sequential learning of temporal information.
## Conclusion
In this paper, we propose a ranking model to discover influencers with high engagement rates by learning temporal dynamics of their posting behaviors. To represent the characteristics of influencers and their posting behaviors at each time period, we build a heterogeneous network that consists of influencers and social media elements as nodes, such as hashtags, user tags, and image objects. Moreover, each node can be associated with context features in six categories, including node type, profile, image, text, posting, and reaction features. Based on the GCN-encoded representations of influencers at each timestamp, our proposed model applies attentive RNNs to model historical behaviors of influencers, thereby accurately ranking influencers by their engagement rate scores. The results of the extensive experiments show that InfluencerRank outperforms existing baseline methods.
### Broader Impact and Ethical Considerations
The utility of our proposed framework is expected to significantly increase given the decision of Instagram, one of the most popular influencer marketing platforms, to consider hiding the number of likes on each post [10, 11] to help address the mental health issues of social media users [12]. Unlike prior work, the number of likes is not used in discovering influencers in _InfluencerRank_; hence, our proposed model can be particularly useful for brands with relatively small business sizes, who may be suffering from the heavy expense of discovering effective influencers among millions of candidates [10] in a situation where the number of likes is hidden from other users. Additionally, our model is also capable of adopting additional node features and node types in the network for further improvements. As a result, we believe our model can be widely exploited in finding highly effective influencers for businesses from small retailers to global brands.
|
2306.09481 | Leveraging Residue Number System for Designing High-Precision Analog
Deep Neural Network Accelerators | Achieving high accuracy, while maintaining good energy efficiency, in analog
DNN accelerators is challenging as high-precision data converters are
expensive. In this paper, we overcome this challenge by using the residue
number system (RNS) to compose high-precision operations from multiple
low-precision operations. This enables us to eliminate the information loss
caused by the limited precision of the ADCs. Our study shows that RNS can
achieve 99% FP32 accuracy for state-of-the-art DNN inference using data
converters with only $6$-bit precision. We propose using redundant RNS to
achieve a fault-tolerant analog accelerator. In addition, we show that RNS can
reduce the energy consumption of the data converters within an analog
accelerator by several orders of magnitude compared to a regular fixed-point
approach. | Cansu Demirkiran, Rashmi Agrawal, Vijay Janapa Reddi, Darius Bunandar, Ajay Joshi | 2023-06-15T20:24:18Z | http://arxiv.org/abs/2306.09481v1 | Leveraging Residue Number System for Designing High-Precision Analog Deep Neural Network Accelerators
###### Abstract
Achieving high accuracy, while maintaining good energy efficiency, in analog DNN accelerators is challenging as high-precision data converters are expensive. In this paper, we overcome this challenge by using the residue number system (RNS) to compose high-precision operations from multiple low-precision operations. This enables us to eliminate the information loss caused by the limited precision of the ADCs. Our study shows that RNS can achieve \(99\%\) FP32 accuracy for state-of-the-art DNN inference using data converters with only \(6\)-bit precision. We propose using redundant RNS to achieve a fault-tolerant analog accelerator. In addition, we show that RNS can reduce the energy consumption of the data converters within an analog accelerator by several orders of magnitude compared to a regular fixed-point approach.
analog design, accelerators, residue number system, deep neural networks
## I Introduction
Deep Neural Networks (DNNs) are commonly used today in a variety of applications including financial, healthcare, and transportation. The pervasive usage of these DNN models, whose sizes are continuously increasing, forces us to use more compute, communication, and memory resources. Unfortunately, with Moore's Law and Dennard Scaling slowing down [1], we can no longer rely on technology scaling. As a result, these ever-growing DNNs come with higher and higher costs in terms of energy, time, money, and environmental impact.
For performing efficient DNN inference, several works have explored analog accelerator designs. These analog designs accelerate general matrix-matrix multiplication (GEMM) operations that make up more than \(90\%\) of the operations in DNN inference. The prior art has explored several analog technologies such as photonic cores [2, 3, 4, 5, 6], resistive arrays [7, 8, 9, 10, 11], switched capacitor arrays [12, 13], PCM [14], STT-RAM [15, 16], etc., to enable highly parallel, fast, and efficient matrix-vector multiplications (MVMs) in the analog domain.
The success of this analog approach is, however, constrained by the limited precision of the digital-to-analog and analog-to-digital data converters (i.e., DACs and ADCs) which typically dominate the energy consumption in analog accelerators [17, 9, 18]. Concretely, during MVM, a dot product between a \(b_{\text{w}}\)-bit signed weight vector and a \(b_{\text{in}}\)-bit signed input vector--each with \(h\) elements--produces an output scalar with \(b_{\text{out}}\) bits of information, where \(b_{\text{out}}=b_{\text{in}}+b_{w}+\log_{2}(h)-1\). Therefore, an ADC with precision greater than \(b_{\text{out}}\) is required to guarantee no loss of information when capturing the output. Unfortunately, the energy consumption of ADCs increases exponentially with the effective number of bits (ENOB) (roughly \(4\times\) increase for each additional output bit) [19]. So for energy-efficient analog accelerator designs, it is typical to use ADCs with a lower precision (\(b_{\text{ADC}}<b_{\text{out}}\)) and only read the most significant bits (MSBs) of the output [17].
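To make the precision requirement concrete, the following sketch computes the lossless ADC bit width implied by the expression above (our helper function, not part of any accelerator toolchain):

```python
import math

def required_adc_bits(b_in, b_w, h):
    """b_out = b_in + b_w + log2(h) - 1 for a signed h-element dot product."""
    return b_in + b_w + math.ceil(math.log2(h)) - 1

# e.g., 8-bit inputs and weights with a 128-element vector:
# required_adc_bits(8, 8, 128) == 22, far beyond energy-efficient ADC precisions.
```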
Rekhi et al. [17] showed that while this limited precision does not cause a significant accuracy drop in small networks with small datasets, it can drastically degrade the accuracy in larger networks. To better understand this, we analyzed (see Fig. 1) how the precision of DACs (i.e., \(b_{\text{in}}\) and \(b_{w}\)) and ADCs (i.e., \(b_{\text{ADC}}\)), and the number of elements within a single vector (i.e., \(h\)) affect the accuracy in two tasks: (1) a two-layer convolutional neural network (CNN) for classifying the MNIST dataset [20]: a simpler task with only 10 classes, and (2) ResNet50 [21], a CNN with 50 layers, for classifying the ImageNet dataset [22]: a more difficult task with 1000 classes. As the vector size \(h\) increases, \(b_{\text{out}}\) increases, therefore, higher precision is needed to maintain the accuracy. At all bit precisions, the accuracy of ResNet50 degrades at smaller values of \(h\) than the two-layer CNN. Essentially, to efficiently execute large DNNs using analog accelerators while maintaining high accuracy, we need a cheaper way to increase the computation precision than simply increasing the bit precision of the data converters.
In this paper, we propose to leverage the residue number system (RNS) to design analog accelerators, to enable the use of low-precision ADCs while performing high-precision arithmetic. We take advantage of the fact that RNS can represent high-precision values using low-precision integer residues for a chosen set of moduli. As RNS is closed under multiplication and addition, a full dot-product can be performed without switching between numeral systems. The modulo operation in the RNS ensures that the bit width of the residues does not grow to be larger than the bit width of the chosen moduli. This keeps the output residues at the same bit precision as the input residues after a dot product. Therefore, the low-precision output residues can be captured by low-precision ADCs without any information loss and eventually be used to generate a high-precision GEMM output.
Fig. 1: Accuracy of a two-layer CNN classifying handwritten digits from the MNIST dataset and ResNet50 classifying images from the ImageNet dataset evaluated in an analog core with varying precision \(b\) and vector sizes \(h\). For all evaluations, \(b\)-bit precision means we set \(b_{\text{in}}=b_{w}=b_{\text{ADC}}=b\).
Our contributions in this work are as follows:
* We present a dataflow for a generic analog RNS-based accelerator.
* We evaluate the effectiveness of the proposed RNS approach by comparing the accuracy of an RNS-based analog hardware against a regular fixed-point-based analog hardware using state-of-the-art MLPerf (datacenters) DNN benchmarks [23].
* To improve the fault tolerance of the analog accelerator, we incorporate the Redundant RNS (RRNS) error-correcting code [24] in the analog RNS-based accelerator, using redundant residues to detect and correct errors at the output of a dot product. We evaluate the fault tolerance of the error-corrected analog accelerator by injecting noise and investigating its effects on the accuracy of the MLPerf benchmarks.
* We investigate the energy efficiency advantage of using our RNS approach over the regular fixed-point number system in analog accelerators. Our results show that, thanks to the use of low-precision data converters enabled by our RNS-based approach, the energy consumed by data converters (which dominate the energy consumption in analog accelerators) can be reduced by multiple orders of magnitude compared to the same-precision regular analog approach.
## II Related Work
Prior works have investigated several analog technologies such as photonic cores [2, 3, 4, 5, 6], resistive arrays [7, 8, 9, 10, 11], switched capacitor arrays [12, 13], PCM cells [14], STT-RAM [15, 16], etc., for DNN acceleration. Many of these analog DNN accelerator works [2, 3, 7] have mainly reported accuracy results only for easy tasks such as classification of small datasets (e.g., MNIST, CIFAR) that work well with very low precision. A few other works [5, 6] used larger and more recent networks in their evaluation but solely focused on hardware performance and architectural design without reporting accuracy with the assumption that 8-bit DACs and ADCs provide adequate precision for high-accuracy DNN inference. However, as shown in Figure 1, this assumption may not be valid for all DNNs and hardware configurations. _In our work, we focus on state-of-the-art networks instead of small tasks that are practically obsolete._
RNS has been used in the analog domain for building optical adders and multipliers for reducing the optical critical path [25] and increasing efficiency in DNNs [4]. However, this requires a number of optical devices that increases quadratically with the modulus value--degrading their efficiency for large moduli. _In our work, the number of analog devices is independent of the size of the moduli._
There also exist digital DNN accelerators [26, 27] that use RNS for energy-efficient computation. These works show that it is possible to stay in the RNS domain (without reverting back to binary or decimal) for the entire inference. However, this approach requires overflow detection and scaling and also uses approximations for non-linear operations. Most state-of-the-art DNNs today comprise a wide variety of non-linear operations. Using approximations and fixed-point-only arithmetic for these operations can severely degrade the accuracy. Therefore, _in our work, we use RNS only for MVM operations and switch back to floating-point arithmetic for non-linear operations._
Bit-partitioned arithmetic [28] is another way to build high-precision results from low-precision operations and eliminate the need for costly high-precision ADCs. Shafiee et al. [9] use an iterative process where 16-bit inputs are represented as 16 1-bit values to eliminate the DACs and reduce the required precision of ADCs. However, their iterative approach takes more than one cycle to compute the dot products. In addition, both the RNS and the bit-partitioning approaches are susceptible to noise as small errors in the residues/partitions grow larger during output reconstruction. Thus, error correction methods are required for such designs. _So in our work, we provide a complete fault-tolerant dataflow that uses redundant RNS to detect and correct errors that are introduced by the noisy behavior of analog designs._
## III RNS-based Analog Core
### _RNS Basics_
The RNS represents an integer as a set of smaller integer residues. These residues are calculated by performing a modulo operation on the said integer using a selected set of \(n\)_co-prime_ moduli. Let \(A\) be an integer. \(A\) can be represented in the RNS with \(n\) residues as \(\{a_{1},\ldots,a_{n}\}\) for a set of moduli \(\mathcal{M}=\{m_{1},\ldots,m_{n}\}\) where \(a_{i}=|A|_{m_{i}}\equiv A\mod m_{i}\) for \(i\in\{1\ldots n\}\)1. \(A\) can be uniquely reconstructed using the Chinese Remainder Theorem (CRT):
Footnote 1: Hereinafter, we refer to the integer \(A\) as the _standard representation_, while we refer to the integers \(\{a_{1},\ldots,a_{n}\}\) simply as the residues.
\[A=\Big{|}\sum_{i=1}^{n}a_{i}M_{i}T_{i}\Big{|}_{M}, \tag{1}\]
if all the moduli are relatively prime and \(A\) is within the range \([0,M)\) where \(M=\prod_{i}m_{i}\). Here, \(M_{i}=M/m_{i}\) and \(T_{i}\) is the multiplicative
Fig. 2: Dataflow for the RNS-based analog GEMM operation for a moduli set \(m=\{m_{1},\ldots,m_{n}\}\). The \(n\)\(h\times h\) analog MVM units are represented as generic blocks. The MAC units can be resistive memory (RRAM) arrays, photonic GEMM cores, etc. The dataflow is agnostic to the technology.
inverse of \(M_{i}\), i.e., \(|M_{i}T_{i}|_{m_{i}}\equiv 1\). The RNS is closed under addition and multiplication operations, thus enabling dot products and GEMM operations in the RNS space.
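As a concrete illustration, below is a minimal Python sketch of the forward conversion and the CRT reconstruction of Eq. (1). The moduli set \(\{63, 62, 61\}\) is an arbitrary pairwise co-prime example chosen for this sketch, not one of the sets from Table I.

```python
from math import prod

def to_rns(A, moduli):
    # Forward conversion: residues a_i = |A|_{m_i}
    return [A % m for m in moduli]

def crt(residues, moduli):
    # Reconstruction via the Chinese Remainder Theorem, Eq. (1):
    # A = | sum_i a_i * M_i * T_i |_M, with M_i = M / m_i and T_i the
    # multiplicative inverse of M_i modulo m_i.
    M = prod(moduli)
    total = 0
    for a_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        T_i = pow(M_i, -1, m_i)   # multiplicative inverse: |M_i * T_i|_{m_i} = 1
        total += a_i * M_i * T_i
    return total % M

moduli = [63, 62, 61]             # pairwise co-prime, each fits in 6 bits
A = 123456                        # any A in [0, M) with M = 63*62*61 = 238266
assert crt(to_rns(A, moduli), moduli) == A
```

Because `pow(M_i, -1, m_i)` computes the modular inverse directly, the round trip holds for any \(A\) in \([0, M)\).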
### _DNN Inference Using RNS_
A DNN consists of a sequence of \(L\) layers. The input \(\vec{X}\) to the \(\ell\)-th layer of a DNN inference is the output generated by the previous \((\ell-1)\)-th layer:
\[\vec{X}_{k}^{(\ell)}=f^{(\ell-1)}\left(\sum_{j}W_{jk}^{(\ell-1)}\vec{X}_{j}^{ (\ell-1)}\right), \tag{2}\]
where \(W\cdot\vec{X}\) is an MVM and \(f(\cdot)\) is an element-wise nonlinear function.
In both digital and analog designs, the precision of MVM is limited by the precision of the inputs and weights, as well as the amount of information kept at the output. In digital accelerators that use low precision, when performing an MVM, costly multiplications are performed with low-precision datatypes (e.g., INT8) and accumulations are performed with high-precision datatypes (e.g., INT32/FP32), as accumulation is cheaper. The final result of the MVM can then be stored as a low-precision number. In contrast, in analog accelerators, the inputs and weights are both multiplied and accumulated with low-precision datatypes. This is because the fixed size of the analog compute array typically does not let the whole GEMM operation (a full DNN layer) be performed at once: each MVM produces a partial output vector which needs to be captured by the ADCs before being stored in a digital memory unit (i.e., SRAM or DRAM) and accumulated with the rest of the partial output vectors. Here, the ADC precision directly determines how precisely we can capture these partial outputs. While using low-precision ADCs leads to higher information loss than in digital hardware, which can severely impact the accuracy of the DNN, using a high-precision ADC is not feasible as it is power-hungry. However, this extra loss of information due to using low-precision ADCs can be eliminated by using RNS. Using RNS, Eq. (2) can be rewritten as:
\[\vec{X}_{k}^{(\ell)}=f^{(\ell-1)}\Bigg{(}\text{CRT}\bigg{(}\Big{|}\sum_{j} \big{|}W_{jk}^{(\ell-1)}\big{|}_{\mathcal{M}}\big{|}\vec{X}_{j}^{(\ell-1)} \big{|}_{\mathcal{M}}\Big{|}_{\mathcal{M}}\bigg{)}\Bigg{)}. \tag{3}\]
Here \(\mathcal{M}=\{m_{1},\ldots,m_{n}\}\) represents the set of moduli. The moduli must be chosen to ensure that the outputs of the MVM operations are smaller than \(M=\prod_{i}m_{i}\), which means we need
\[\log_{2}M\geq b_{\text{out}}=b_{\text{in}}+b_{w}+\log_{2}(h)-1, \tag{4}\]
for a dot-product between two vectors with \(h\)-elements. This constraint prevents overflow in the computation. Operations for different residues can be performed independently and in parallel without any carry propagation.
Fig. 2 shows the dataflow for the RNS-based analog hardware when performing MVM as part of the DNN inference. Here, \(\overrightarrow{X}_{\text{HP}}\) represents a high-precision (e.g., FP32) \(h\times 1\) input vector and \(W_{\text{HP}}\) represents a high-precision \(h\times h\) weight matrix.2 The input vector and the weight matrix have to be mapped to integers to work with RNS. To reduce the quantization effects, we scale both inputs and weights before quantization. We scale \(\overrightarrow{X}_{\text{HP}}\) by dividing each element of the vector by \(s_{\text{in}}=\max(|\overrightarrow{X}_{\text{HP}}|)\). For the \(h\times h\) weight matrix, we scale each row separately. The scaling factors for the \(h\) rows form a vector \(\vec{s}_{w}=[s_{w}[1],s_{w}[2],\ldots,s_{w}[h]]=[\max(|W_{\text{HP}}[1]|),\max(|W_{\text{HP}}[2]|),\ldots,\max(|W_{\text{HP}}[h]|)]\). In total, there are \(h+1\) floating-point scaling factors: \(h\) for the weight matrix and one for the input vector. These scaled elements are then quantized and mapped to (symmetric) signed integers between \([-(2^{b-1}-1),2^{b-1}-1]\) with \(b=b_{\text{in}}\) and \(b=b_{w}\) to obtain the low-precision input vector (\(\overrightarrow{X}_{\text{LP}}\)) and weight matrix (\(W_{\text{LP}}\)), respectively. We then perform the modulo operation on each element of \(\overrightarrow{X}_{\text{LP}}\) and \(W_{\text{LP}}\) with respect to each of the \(n\) moduli. This gives us \(n\) pairs of \(\overrightarrow{X}_{\text{LP},i}\) and \(W_{\text{LP},i}\) containing the corresponding residues as elements, where \(i\in\{1,\ldots,n\}\).
Footnote 2: For inputs and weights with dimensions larger than \(h\), one can use standard tiling methods.
In the RNS-based analog core, we have a dedicated analog MVM unit for each modulus. Each analog MVM unit has a set of DACs for converting the associated input vector and weight matrix into the analog domain. An analog MVM is performed (i.e., \(h\) dot products in parallel) between the input vector and the weight matrix, followed by a modulo operation on each output residue vector in the analog domain. Thanks to the modulo operation, the output residues are reduced back to \([0,m_{i})\) range. These output residues are captured
Fig. 3: Distribution of the error in the dot product performed with regular fixed-point analog core and RNS-based analog core. The error is the difference between the dot product calculated in FP32 (FP32 being the ground truth), and dot product calculated in fixed-point and RNS. Both fixed-point and RNS-based analog cores use the same input, weight, DAC, and ADC precision which varies from 4 to 8 in the plot. In the fixed-point core, only the 4-8 MSBs of the output are kept. The RNS-based core uses moduli within the same number of bits of range (see Table I). The results are shown for randomly generated 10,000 vector pairs for each case.
by ADCs. An ENOB of \(\lceil\log_{2}m_{i}\rceil\) (instead of the larger \(b_{\text{out}}\)-bits) is adequate for both DACs and ADCs to perform input and output conversions without any information loss.
The output residues are then converted back to the standard representation in the digital domain via CRT (Eq. (1)) to generate the signed integer output vector, i.e., \(\overrightarrow{Y}_{\text{SI}}\). The elements of the output vector are then scaled back up using the scaling factors calculated before the MVM operation, i.e., \(Y[k]=Y_{\text{SI}}[k]\cdot s_{\text{in}}\cdot s_{w}[k]\). The non-linear function \(f\) (e.g., ReLU, sigmoid, etc.) is performed digitally on the MVM output using floating-point datatypes (e.g., FP32).
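The sketch below strings this whole dataflow together for one MVM, continuing from the previous snippets (it reuses `crt`, `prod`, `rng`, and `moduli` from above). Two details are implementation assumptions not spelled out in the text: recovering signed integers from the CRT output by subtracting \(M\) for values above \(M/2\), and folding the quantization step size into the final rescaling.

```python
def quantize(v, b):
    # Symmetric quantization to signed integers in [-(2**(b-1)-1), 2**(b-1)-1];
    # v is assumed to be pre-scaled into [-1, 1].
    return np.rint(v * (2**(b - 1) - 1)).astype(np.int64)

def rns_mvm(W_hp, x_hp, moduli, b=8):
    M = prod(moduli)
    s_in = np.abs(x_hp).max()                  # one input scaling factor
    s_w = np.abs(W_hp).max(axis=1)             # one scaling factor per row
    x_lp = quantize(x_hp / s_in, b)
    W_lp = quantize(W_hp / s_w[:, None], b)
    y = np.empty(W_hp.shape[0])
    for k in range(W_hp.shape[0]):
        y_si = crt([int(np.dot(W_lp[k] % m, x_lp % m) % m) for m in moduli],
                   moduli)
        y_si = y_si - M if y_si > M // 2 else y_si       # back to signed ints
        y[k] = y_si * s_in * s_w[k] / (2**(b - 1) - 1)**2  # undo both scalings
    return y

W_hp, x_hp = rng.normal(size=(8, 128)), rng.normal(size=128)
print(np.abs(rns_mvm(W_hp, x_hp, moduli) - W_hp @ x_hp).max())  # tiny error
```

The remaining error is purely the quantization error of the \(b\)-bit inputs and weights; no output bits are dropped.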
### _Precision in the RNS-based Analog Core_
To obtain a desired output precision, we need to find a set of moduli that meets Eq. (4). The RNS range (\(M\)) depends on both the number of moduli (\(n\)) and the values of these moduli. The number of moduli determines the number of operations that we need to perform in the RNS-based core along with the required number of MVM units. Similarly, the bit width of these moduli determines the required precision of the data converters. Both of these in turn determine the efficiency of the RNS-based analog core and should be chosen carefully to find a balance between the energy efficiency and \(M\).
Table I shows the comparison of our RNS-based analog core with example moduli sets and a regular fixed-point analog core3. For the RNS-based core, we picked ADC and DAC precision to be the same as the precision of the input vector and the weight matrix, i.e., \(b=b_{\text{in}}=b_{w}=b_{\text{ADC}}=b_{\text{DAC}}=\lceil\log_{2}m_{i}\rceil\). The moduli sets shown in Table I are created by using the minimum number of moduli that guarantees Eq. (4) for \(h=128\) while keeping the moduli under the chosen bit width \(b\)4. In this case, for \(n\) moduli, \(M\) is \(\approx n\cdot b\) bits. It should be noted that the values of \(h\) and \(\mathcal{M}\) are representative values and our methods can be generalized for any \(h\) and \(\mathcal{M}\). In a regular fixed-point analog core, similar to the RNS-based core, we set \(b=b_{\text{in}}=b_{w}=b_{\text{DAC}}=b_{\text{ADC}}\). However, the bit-precision required to represent the output produced by the dot product (\(b_{\text{out}}\)) is much larger than \(b_{\text{ADC}}\). Therefore, \(b_{\text{out}}-b_{\text{ADC}}\) bits of information (from LSB and up) are lost after every MVM operation.
Footnote 3: Here both cores use fixed-point numbers to represent data. We use the term 'regular fixed-point core' to refer to a typical analog core that uses fixed-point numbers for computation and does not use RNS, whereas the RNS-based core uses integers as residues and performs computations in the RNS space.
Footnote 4: We choose \(h\) to be 128 considering the common layer sizes in the evaluated MLPerf benchmarks. The chosen \(h\) provides high throughput with a high utilization of the GEMM core.
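The moduli-set construction described above can be sketched as a simple greedy search that satisfies Eq. (4); this is an illustrative heuristic for this text, not necessarily the exact procedure used to build Table I.

```python
from math import gcd, log2, prod

def pick_moduli(b, b_in, b_w, h):
    # Greedily collect the largest pairwise co-prime moduli below 2**b until
    # log2(M) >= b_out = b_in + b_w + log2(h) - 1 (Eq. (4)).
    b_out = b_in + b_w + log2(h) - 1
    moduli = []
    for cand in range(2**b - 1, 1, -1):
        if all(gcd(cand, m) == 1 for m in moduli):
            moduli.append(cand)
            if log2(prod(moduli)) >= b_out:
                return moduli
    raise ValueError("no co-prime set below 2**b reaches the required range")

# e.g., 6-bit moduli for h = 128 with 6-bit inputs and weights
print(pick_moduli(b=6, b_in=6, b_w=6, h=128))   # [63, 62, 61, 59]
```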
Fig. 3 reports the absolute errors (FP32 being the ground truth) observed when performing dot-products (sub-operations in MVMs) with the RNS-based and fixed-point analog cores. Both cores use the configurations described in Table I for the example vector size \(h=128\). In the regular fixed-point core, information loss due to the dropped bits causes a 9-15\(\times\) larger error compared to the RNS-based dot product with the same input and weight bit precision.
We next compare the performance of an RNS-based analog core against a fixed-point analog core for different MLPerf benchmarks as shown in Fig. 4. We report accuracy normalized to the accuracy of a FP32 hardware. Our results show that the RNS approach significantly ameliorates the accuracy drop caused by the low-precision ADCs in the fixed-point analog core for all the benchmark networks. We observe that it is possible to achieve \(\geq\)99\(\%\) of FP32 accuracy (this cut-off is defined in the MLPerf benchmark [23]) for all MLPerf networks when using residues with as low as \(6\) bits.
## IV Redundant RNS for Fault Tolerance
Any analog compute core is sensitive to noise. Furthermore, in the case of RNS, even small errors in the residues result in a large error in the corresponding integer the residues represent. We use the Redundant Residue Number System (RRNS) [29] to perform error detection and correction to improve the fault tolerance of the RNS-based analog core. RRNS uses a total of \(n\) moduli: \(k\) non-redundant and \(n-k\) redundant. An RRNS(\(n,k\)) code can detect up to \(n-k+1\) errors and can correct up to \(\lfloor\frac{n-k}{2}\rfloor\) errors. In particular, the error in the codeword (i.e., the \(n\) residues representing an integer in the RRNS space) can be one of the following cases:
* **Case 1:** No error or correctable error (no more than \(\lfloor\frac{n-k}{2}\rfloor\) residues have errors),
* **Case 2:** Detectable but not correctable error (more than \(\lfloor\frac{n-k}{2}\rfloor\) residues have errors, and the erroneous codeword does not overlap with another codeword in the RRNS space),
* **Case 3:** Undetectable error (more than \(n-k+1\) residues have errors, the erroneous codeword overlaps with another codeword in the RRNS space).
By using RRNS, errors can be detected by using a voting mechanism wherein we divide the total \(n\) output residues into \(C_{k}^{n}\) groups with \(k\) residues per group. One simple way of voting is to convert the residues in each group back to the standard representation via CRT to generate an output vector for each group5. Then, we compare the results of the \(C_{k}^{n}\) groups. If more than \(50\%\) of the groups have the same result in the standard representation, then the generated codeword is correct. This corresponds to **Case 1**. In contrast, not having a majority indicates that the generated codeword is erroneous and cannot be corrected. This corresponds to **Case 2**. In this case, the detected errors
Fig. 4: Accuracy of regular fixed-point and RNS-based cores on MLPerf datacenter benchmarks. The accuracy numbers are normalized to the FP32 accuracy. Both \(b\)-bit fixed-point cores and \(b\)-bit RNS-based cores use \(b\)-bit input and weight DACs and \(b\)-bit output ADCs. The RNS-based cores use \(b\)-bit moduli to represent numbers (see Table I). All experiments use FP32 as the intermediate datatype for non-GEMM operations.
can be eliminated by repeating the dot product and voting steps, and recalculating the result. In **Case 3**, there are more than \(n-k+1\) erroneous residues and the erroneous codeword generated by the majority of the groups overlaps with another codeword. As a result, the voting mechanism incorrectly determines that the generated codeword is the correct codeword, i.e., the errors in the residues go undetected.
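A toy version of this voting scheme, reusing the `to_rns` and `crt` helpers from Section III, is sketched below. The RRNS(5, 2) configuration, the moduli, and the injected single-residue error are illustrative assumptions; with one erroneous residue, 6 of the \(C_{2}^{5}=10\) groups still agree on the correct value, so the majority vote recovers it.

```python
from itertools import combinations
from collections import Counter
from math import comb

def rrns_vote(residues, moduli, k):
    # Decode every C(n, k) group of k residues via CRT and take a vote:
    # agreement of more than 50% of the groups is Case 1 (correct or
    # corrected); no majority is Case 2 (detected error, recompute).
    decoded = Counter()
    for idx in combinations(range(len(moduli)), k):
        decoded[crt([residues[i] for i in idx], [moduli[i] for i in idx])] += 1
    value, votes = decoded.most_common(1)[0]
    return value if votes > comb(len(moduli), k) / 2 else None

moduli = [63, 62, 61, 59, 53]        # RRNS(5, 2): 2 non-redundant + 3 redundant
code = to_rns(2500, moduli)          # 2500 < 53 * 59, the smallest pair range
code[1] = (code[1] + 5) % moduli[1]  # inject a single residue error
assert rrns_vote(code, moduli, k=2) == 2500
```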
To understand this better, for a single codeword and a single attempt, assume \(p_{c}\) is the probability that there are no errors or the errors are correctable (**Case 1**), \(p_{d}\) is the probability that errors are detectable but not correctable (**Case 2**), and \(p_{u}\) is the probability that the errors are undetectable (**Case 3**). In this case, \(p_{c}+p_{d}+p_{u}=1\). The error probabilities \(p_{c},p_{d}\) and \(p_{u}\) for a probability of error in a single residue \((p)\) can be computed using equations formulated by James et al. [24] and Peng et al. [29].6 For \(R\) repeated attempts of performing a dot product to correct the error, the probability of having an erroneous output codeword (\(p_{\text{err}}\)) is
Footnote 6: The detailed equations are not shown in this paper due to space constraints.
\[p_{\text{err}}(R)=1-p_{c}\sum_{k=0}^{R-1}(p_{d})^{k}. \tag{5}\]
As we increase the number of attempts, the output error probability decreases and converges to: \(\lim_{R\rightarrow\infty}p_{\text{err}}(R)=p_{u}/(p_{u}+p_{c})\).
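A few lines of Python make the convergence behavior of Eq. (5) visible; the probabilities below are arbitrary illustrative values, not measured ones.

```python
def p_err(R, p_c, p_d):
    # Eq. (5): the output is wrong only if the first R attempts never land in
    # Case 1; each retry is triggered by a detectable error (Case 2).
    return 1 - p_c * sum(p_d**k for k in range(R))

p_c, p_d, p_u = 0.90, 0.08, 0.02       # illustrative; p_c + p_d + p_u = 1
for R in (1, 2, 5, 100):
    print(R, p_err(R, p_c, p_d))
# converges to p_u / (p_u + p_c) = 0.0217... as R grows
```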
The values of \(p_{\text{err}}\) for different numbers of redundant moduli \((n-k)\), numbers of attempts \((R)\), and moduli sets (with different number of bits) are plotted in Fig. 5. Broadly, as the probability of a single residue error \(p\) increases, the output error probability tends to \(1\). For a given number of attempts, increasing bit-precision and the number of redundant moduli decreases \(p_{\text{err}}\). For a fixed number of redundant moduli and a fixed number of bits per moduli, \(p_{\text{err}}\) decreases as the number of attempts increases.
We investigated the impact of the noise on the accuracy of the MLPerf benchmarks when using RRNS. We observe similar behavior in different networks, thus we only show the accuracy results for ResNet50 and BERT-large in Figure 6. Broadly, adding extra moduli and increasing the number of attempts decrease \(p_{\text{err}}\) and help maintain the accuracy for higher \(p\) values.
ResNet50 requires \(\sim\)\(3.9\) GigaMAC operations (GOp) for one inference on a single input image. For a \(128\times 128\) MVM unit, the total number of MVM outputs is \(\sim\)\(29.4\)M. For all output values to be correct, \(p_{\text{err}}\leq 1/29.4\)M = \(3.4\times 10^{-8}\). This \(p_{\text{err}}\) value needs to be \(\leq 1/358.6\)M = \(2.8\times 10^{-9}\) for BERT-large. However, from Fig. 6, we observe that the DNNs are resilient to noise and the tolerable \(p_{\text{err}}\) is much higher than the calculated numbers. The accuracy of ResNet50 only starts degrading (below \(99\%\) FP32) when \(p_{\text{err}}\approx 4.9\times 10^{-5}\) (\(\sim\)\(10^{3}\times\) higher than the estimated value) on average amongst the experiments shown in Figure 6. This cut-off probability is \(p_{\text{err}}\)\(\approx\)\(4\times 10^{-4}\) for BERT-large (on average \(\sim\)\(10^{5}\times\) higher than the estimated value).
Fig. 5: Output error probability (\(p_{\text{err}}\)) for varying number of error correction attempts and number of redundant moduli \((n-k)\) used.
Fig. 6: ResNet-50 (top row) and BERT-large (bottom row) accuracy for varying number of attempts and number of redundant moduli used.
## V Energy Efficiency of RNS-Based Core
In this section, we show that an RNS-based analog core is more energy efficient than its fixed-point counterpart _for the same precision_. High-precision data converters--particularly ADCs--dominate the power consumption in analog DNN accelerators [17, 18]. Our RNS approach alleviates this high power consumption of ADCs without compromising accuracy. To quantify our findings, we estimate the energy consumption of a DAC per conversion as:
\[E_{\text{DAC}}=\text{ENOB}^{2}C_{u}V_{\text{DD}}^{2}, \tag{6}\]
where \(C_{u}=0.5\) fF is a typical unit capacitance and \(V_{\text{DD}}\) = 1V is the supply voltage [19]. The energy consumption of an ADC per conversion is estimated as:
\[E_{\text{ADC}}=k_{1}\text{ENOB}+k_{2}4^{\text{ENOB}}, \tag{7}\]
where \(k_{1}{\approx}100\) fJ and \(k_{2}{\approx}1\) aJ [19, 31]. The exponential term (i.e., \(k_{2}4^{\text{ENOB}}\)) dominates at large ENOB (after \({\sim}10\)-bits).
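Plugging these cost models into a short script reproduces the scale of the gap; the 6-bit, four-moduli RNS configuration below is an assumed example, compared against a fixed-point core whose ADCs keep all \(b_{\text{out}}\) output bits.

```python
from math import log2

def E_dac(enob, C_u=0.5e-15, V_dd=1.0):
    return enob**2 * C_u * V_dd**2        # Eq. (6), joules per conversion

def E_adc(enob, k1=100e-15, k2=1e-18):
    return k1 * enob + k2 * 4**enob       # Eq. (7), joules per conversion

b, n, h = 6, 4, 128                       # assumed RNS configuration
b_out = b + b + int(log2(h)) - 1          # Eq. (4): 18 output bits
print(E_adc(b_out) / (n * E_adc(b)))      # fixed-point / RNS ADC energy, ~2.8e4
```

Even after multiplying by \(n\) for the parallel residue channels, the exponential term in Eq. (7) keeps the RNS-based ADCs orders of magnitude cheaper, consistent with the range reported below.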
Fig. 7 shows total \(E_{\text{DAC}}\) and \(E_{\text{ADC}}\) for both the RNS-based and the regular fixed-point analog hardware configurations that were shown previously in Table I. Here, different from Table I, \(b_{\text{ADC}}=b_{\text{out}}\) for the regular fixed-point core to achieve the same precision as the RNS approach. To achieve the same MVM throughput in both cores, the RNS-based core with \(n\) moduli must use \(n\) MVM units--leading to \(n\) DACs and \(n\) ADCs. From Figure 7, we observe that ADCs have approximately three orders of magnitude higher energy consumption compared to DACs with the same ENOB. The energy of an ADC is exponentially dependent on ENOB, and so even with \(n\) ADCs in the RNS-based core, its energy consumption is still \(168{\times}\) to \(6.8\)M\({\times}\) lower than the regular fixed-point core.
Moreover, besides data converters, RNS reduces the required signal-to-noise ratio (SNR) in the analog MVM units. The energy consumption of the analog MVM unit depends on the SNR for the analog signals, and this SNR increases exponentially with the desired compute precision. Thus, RNS brings additional savings by allowing the MVM units to work with lower SNR in the analog domain.
In the case of using RRNS, the number of DACs, ADCs, and analog MVM units will increase linearly with each added redundant modulus. Although this increases \(n\) and the energy consumption of data converters linearly, considering the high energy efficiency gains (up to six orders of magnitude) over the fixed-point cores, this extra cost for extra moduli is tolerable. Here, the number of redundant moduli and the number of attempts required to achieve fault-tolerant computing depend on the noise distribution in the chosen technology and the DNN model.
Different from the regular fixed-point hardware, the use of RNS requires forward and reverse conversion between the RNS and the standard representation, and a modulo operation in the analog domain. The forward conversion is simply a modulo operation whereas the reverse conversion is performed via CRT (see Eq. (1)). We synthesized the RTL design of these conversion circuits using the ASAP 7nm technology library [32]. The modulo operations are optimized using Barrett reduction [33]. These converters consume \({\leq}0.1\) pJ per conversion (forward and reverse in total), which is negligible. Analog modulo can be performed in different ways depending on the analog technology used. One can use ring oscillators [34] to perform the modulo operation electrically. This approach uses an odd number \(m\) of inverters to oscillate the signal. The location of the falling/rising edge winds back after each \(m\) cycles as the signal oscillates. By keeping track of the start and end locations of the signal over a time interval proportional to the value \(x\), the modulo of \(x\) against \(m\) can be obtained. As a set of inverters is trivial circuitry, the ring oscillator does not add much to the energy consumption. Alternatively, one can perform modulo optically by using the phase in the optical domain. Addition with phases is effectively a modular addition against \(2\pi\) as the phase values wind back at every \(2\pi\). The phase of a signal at the output of a row of \(h\) phase shifters will be \(|\sum_{l=1}^{h}\phi_{l}|_{2\pi}\). Multiplying the values with \(2\pi/m\) before the modular addition enables one to perform modulo against an arbitrary \(m\) instead of only \(2\pi\). The modulo operation can thus be performed along with accumulation without any additional cost.
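Returning to the digital conversion circuits mentioned above, a small sketch of the Barrett-reduced modulo may help: this is the textbook single-correction variant, shown only to illustrate why the operation is cheap in RTL (one multiply, one shift, and at most one subtraction, with no division).

```python
def barrett_mod(x, m, b=64):
    # Barrett reduction: replace x % m with a multiply, a shift, and at most
    # one correction step -- division-free, so it maps well to simple RTL.
    mu = (1 << b) // m            # precomputed once per modulus
    q = (x * mu) >> b             # estimate of x // m
    r = x - q * m                 # r is in [0, 2m), so one fix-up suffices
    return r - m if r >= m else r

assert all(barrett_mod(x, 61) == x % 61 for x in range(10_000))
```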
## VI Conclusion
In this paper, we present a generic dataflow framework for an RNS-based analog accelerator that is agnostic of the analog technology. We show that an RNS-based analog core provides \(99\%\) FP32 accuracy for state-of-the-art DNNs by using 6-bit arithmetic with RNS. We provide an error detection and correction method using RRNS that improves the fault-tolerance of the analog core against noise. We also show that our RNS-based analog hardware can reduce the data conversion energy by multiple orders of magnitude compared to a fixed-point analog hardware at the same precision. While we provide a generic perspective, for a desired analog technology, one can explore the various trade-offs discussed in this paper to optimize the accelerator micro-architecture for further energy efficiency. Overall, we believe that RNS is a promising numeral system for the development of the next-generation energy-efficient fault-tolerant analog accelerators.
|
2303.17061 | A Tensor-based Convolutional Neural Network for Small Dataset
Classification | Inspired by the ConvNets with structured hidden representations, we propose a
Tensor-based Neural Network, TCNN. Different from ConvNets, TCNNs are composed
of structured neurons rather than scalar neurons, and the basic operation is
neuron tensor transformation. Unlike other structured ConvNets, where the
part-whole relationships are modeled explicitly, the relationships are learned
implicitly in TCNNs. Also, the structured neurons in TCNNs are high-rank
tensors rather than vectors or matrices. We compare TCNNs with current popular
ConvNets, including ResNets, MobileNets, EfficientNets, RegNets, etc., on
CIFAR10, CIFAR100, and Tiny ImageNet. The experiment shows that TCNNs have
higher efficiency in terms of parameters. TCNNs also show higher robustness
against white-box adversarial attacks on MNIST compared to ConvNets. | Zhenhua Chen, David Crandall | 2023-03-29T23:23:01Z | http://arxiv.org/abs/2303.17061v1 | # A Tensor-based Convolutional Neural Network for Small Dataset Classification
###### Abstract
Inspired by the ConvNets with structured hidden representations, we propose a Tensor-based Neural Network, TCNN. Different from ConvNets, TCNNs are composed of structured neurons rather than 'scalar' neurons, and the basic operation is neuron tensor transformation. Unlike other structured ConvNets, where the part-whole relationships are modeled explicitly, the relationships are learned implicitly in TCNNs. Also, the structured neurons in TCNNs are high-rank tensors rather than vectors or matrices. We compare TCNNs with current popular ConvNets, including ResNets, MobileNets, EfficientNets, RegNets, etc., on CIFAR10, CIFAR100, and Tiny ImageNet. The experiment shows that TCNNs have higher efficiency in terms of parameters. TCNNs also show higher robustness against white-box adversarial attacks on MNIST compared to ConvNets.
## 1 Introduction
ConvNets with structured hidden representations, like Transformers [31] and CapsNets [12], have shown huge potential in varied computer vision and NLP tasks. Inspired by them, we propose a Tensor-based Convolutional Neural Network, TCNN. Different from Transformers or CapsNets, which focus on structured representations, TCNN focuses on structured neurons. Using structured neurons, rather than 'scalar' neurons, is the key idea that separates TCNNs from ConvNets. The structured neurons are the basic unit across the whole network, rather than special structures as in Transformers and CapsNets. What's more, the structured neurons in TCNNs are high-rank tensors, which can help save parameters compared to neuron vectors or neuron matrices.
Also, unlike CapsNets [12], STN [15], or Transformers [31], the information of position, size, or hue in TCNNs is not encoded explicitly. Instead, the information is learned implicitly during training in TCNNs. At the same time, TCNNs still keep the benefit of structured neurons, namely, the neurons within one tensor behave as a whole and can thus better model the spatial relationships compared to ConvNets.
Both ConvNets and TCNNs have convolution-like operations. ConvNets are based on the linear combinations of scalars while TCNNs are based on the linear combinations of structured tensors. In other words, **we replace each neuron in ConvNets with a high-rank neuron tensor**, as shown in Figure 3. For ConvNets, we apply linear combinations across different layers directly since only scalars are involved, as Figure 3 (a) shows. For TCNNs, the hidden representation tensors from a lower layer to a higher layer may have different shapes and dimensions. We have to apply tensor transformations first since only tensors with exactly the same shape can be linearly combined. As Figure 3 (b), (c), (d) show, we can transform each input tensor to have the same tensor shape (as Figure 3 (b) shows), a compressed tensor shape (as Figure 3 (c) shows), or an amplified tensor shape (as Figure 3 (d) shows). Here we define 'same', 'compressed', and 'amplified' based on whether the output tensors have the same, lower, and higher ranks compared
Figure 1: Model Size versus CIFAR10 Accuracy. Our TCNNs outperform other ConvNets in terms of efficiency. In particular, tcnn0 achieves the second-best performance by using far fewer parameters. Please check Table 2 for details.
to the input tensors. In conclusion, the basic operation of ConvNets is a linear combination of scalars while the basic operation of TCNNs is a linear combination of tensors.
In this paper, we focus on two aspects of TCNNs: parameter-wise efficiency and adversarial robustness. In particular, we build several versions of TCNNs with different numbers of parameters, then compare them with several ConvNets (including ResNets [9], MobileNets [27], EfficientNets [1], and their variants, etc.) on three small datasets (CIFAR10, CIFAR100, Tiny ImageNet). Also, given the similarity between TCNNs and CapsNets, we design several simplified TCNNs and compare them with several CapsNets variants on MNIST. Finally, we evaluate the adversarial robustness of TCNNs by comparing them with several ConvNets with different numbers of parameters and epsilon thresholds.
## 2 Related Work
### Efficiency of ConvNets
The efficiency of neural networks can be evaluated in multiple dimensions, including the number of parameters, FLOPS, memory consumed, inference time, etc. The number of parameters used is one of the key factors. The reason is that ConvNets have been well known for their overparameterization, which limits their application in resource-limited scenarios, like mobile devices. There are two ways to soften this issue: one is compressing neural networks [6, 11, 16, 34]; the other is designing superior neural network structures, like SqueezeNets [13], MobileNets [27], ResNets [9], EfficientNets [1], ShuffleNets [36], etc. One can further reduce the model size by fitting particular tasks or devices [2, 35, 29]. Although we also emphasize the efficiency of TCNNs, we do not do any structure searching or compression; we only show the efficiency of TCNNs by comparing them with several efficient ConvNets.
### Structured hidden representations
TCNNs are composed of structured neurons that can also be considered structured hidden representations. Similarly, CapsNets [12] and Transformers [31] also fall into this category. In particular, the capsules in CapsNets [12] can be encoded by a pose matrix and an activation value. Then the encoded information is further sent to the higher capsule layer via the routing procedures. The routing procedures provide a probability vector to determine where to send the capsules in the lower layer. The probability vector is calculated based on the capsule similarities between adjacent capsule layers. Transformers, on the other hand, encode representations into key, query, and value triplets. The key and query are vectors dealing with different parts of the input, and later the attention distribution will be calculated to find the relations between different parts of the input and the corresponding representations. We can see that both the matrix (vector) capsules and the key/query vectors in Transformers encode representations in a structured way, as Figure 2 shows. TCNNs are different from CapsNets and Transformers in three ways:
* The capsules (which encode a pose matrix and a presence probability) and attention heads (which encode position information) are considered special structures in ConvNets, while TCNNs treat structured neuron tensors as the general unit.
* Different from the capsule or attention transformations, in which matrix multiplications are applied, TCNNs adopt tensor products.
* TCNNs do not use similarity-based routing or distribution procedures.
#### 2.2.1 Transformers
Transformers typically use a multi-headed attention mechanism. The assumption is that each attention head has a separate projection of the representations, and multi-head attention can thus take advantage of multiple representation subspaces. The representations are composed of \((key,value,query)\) triplets. In particular, each triplet contains three matrices \((K,Q,V)\). A linear transformation is applied between representations in adjacent layers, as Equation 1 shows,
Figure 2: The similarity between CapsNets and Transformers. CapsNets adopt routing procedures to determine the connection strength between adjacent layers while transformers employ self-attention to determine the information flow from different input parts to the representations in the next layer. The attention weights in transformers are similar to the coupling coefficients in CapsNets.
\[att_{i}\left(K_{i},Q_{i},V_{i}\right)=softmax\left(\frac{Q_{i}K_{i}^{T}}{\sqrt{d_{i}}}\right)V_{i} \tag{1}\]
Where \(d_{i}\) is the length of \(K_{i}\). When the attention heads are stacked and transformed linearly, we get the values of multi-head attention, as Equation 2 shows,
\[multi\_att\left(K,Q,V\right)=[att_{0},att_{1},\dots att_{n}]W \tag{2}\]
Where \(W\) is the linear transformation matrix after the attention heads are stacked on top of each other. Fundamentally, the representations of a higher layer are weighted combinations of the representations in the lower layer. The weights are calculated based on the similarities between queries in the higher layer and keys in the lower layer.
#### 2.2.2 CapsNets
CapsNets [26] organize neurons as capsules to mimic the biological neural systems. Different from normal neural networks, which adopt neurons as basic units, CapsNets use a group of neurons as capsules. A typical CapsNet is composed of several convolutional layers, a final fully-connected capsule layer with a routing procedure, and a loss function. One key design of CapsNets is the routing procedure which can combine lower-level features with higher-level features to better model hierarchical relationships.
These two key designs (grouping neurons into capsules and the routing procedure) make CapsNets more efficient for encoding intrinsic spatial relationships among features (parts or a whole) than ConvNets. For example, the CapsNet with dynamic routing [26] can separate overlapping digits accurately, while the CapsNet with EM routing [12] achieves a lower error rate on smallNORB [19]. In contrast, ConvNets are usually overparameterized. As shown in [20, 21, 33, 30, 28], their compressed/pruned neural networks have much smaller sizes with hardly any accuracy drop. As a result, CapsNets usually need far fewer parameters to reach the same accuracy.
Although CapsNets have shown high efficiency in terms of accuracy on varied datasets, there are several bottlenecks. One is the heuristic routing procedure. The other is the non-capsule layers that degrade the efficiency of CapsNets. In TCNNs, we preserve the idea of packaging the
Figure 3: The comparison between ConvNets (a) and TCNNs (b, c, d). (a) shows a normal convolution that maps a matrix to a scalar. In (b), (c), and (d), tensor-based convolutions map input tensors to output tensors of the same, lower, or higher order.
neurons and further generalize the capsules to high-order tensors across all the layers in a neural network.
### Adversarial Robustness of ConvNets
ConvNets have been found vulnerable to adversarial attacks [8]. Multiple tricks have been developed to counteract the vulnerabilities, including adversarial training [7], sparse approximation [25, 3], robust neural structures [12], etc. Adversarial training explores adversarial attack techniques to generate adversarial samples and then put them back into the training cycle. Sparse approximation reframes each layer of a neural network as a sparse coding problem so that the new model with sparsified parameters becomes more robust. However, adversarial training needs extra training time, and sparse approximation may result in performance degradation. Exploring robust neural network structures might be a better choice. CapsNets [12] is one of the examples. We find that TCNNs not only generalize better than normal ConvNets but also show better robustness against white-box adversarial attacks.
## 3 Tensor-based Convolutional Neural Networks
### Neuron Tensor Transformation
Neuron tensors are the basic units of TCNNs across the whole network. From a lower layer to a higher layer, we use neuron tensor transformations. For example, we can transform an input tensor \(\textbf{U}\in\mathbb{R}^{1\times 2\times 3\times 4}\) to an output tensor \(\textbf{V}\in\mathbb{R}^{1\times 2\times 7\times 8}\) via a tensor product operation with \(\textbf{W}\in\mathbb{R}^{4\times 3\times 7\times 8}\). This step plays a similar role to the head projection plus attention distribution in Transformers, or the capsule transformation plus routing in CapsNets, but is more general.
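In NumPy terms, this transformation can be written as a single `einsum`. The exact axis pairing is not specified in the text, so the contraction pattern below (the last two axes of **U** against the first two axes of **W**) is an assumption chosen to reproduce the stated shapes.

```python
import numpy as np

U = np.random.randn(1, 2, 3, 4)   # input neuron tensor from the example above
W = np.random.randn(4, 3, 7, 8)   # weight tensor from the example above
# Contract the last two axes of U against the first two axes of W:
V = np.einsum('abcd,dcef->abef', U, W)
print(V.shape)  # (1, 2, 7, 8)
```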
We consider the neuron tensor transformation as a general form of convolution. The basic operation of ConvNets is a linear combination of scalars, as Figure 3 (a) shows. In comparison, the basic operation of TCNNs is the tensor transformation which is a tensor product mathematically. As Figure 3 (b), (c), (d) show, we can preserve, compress or enlarge the tensors' dimensions from one layer to the next layer. After the tensor transformation, we can apply a linear combination of tensors to acquire the tensors in the output. Each tensor in TCNNs plays the same role as a scalar in ConvNets.
The direct benefit we get from tensor transformation is saving parameters, which is similar to capsule transformations in CapsNets. For example, the capsules in [26] have dimensionality \(1152\times 10\times 8\times 16\), which can convert each 8-dimensional tensor in the lower layer into a 16-dimensional tensor in the higher layer (\(32\times 6\times 6=1152\) is the input number and 10 is the output number). We need a total of \(1152\times 10\times 8\times 16=1474560\) parameters. If we package each input/output vector into \(4\times 2\) and \(4\times 4\) matrices, we need only \(1152\times 10\times 2\times 4=92160\) parameters. This is the policy adopted by [12], in which 16-dimensional tensors are converted into new 16-dimensional tensors by using \(4\times 4\) tensors. In this way, the total number of parameters is reduced by a factor of 16. The required parameters can be further reduced by using higher-rank tensors. To simplify the work of designing a TCNN, we adopt the tensor transformation of Figure 3 (b) across all layers, namely the tensors' rank does not change across layers.
\[\textbf{V}_{i}=\textbf{U}_{i}\bigotimes\textbf{W}_{i} \tag{3}\]
For example, for all the intermediate layers, we can use the basic units of input (\(\textbf{U}\in\mathbb{R}^{4\times 4\times 4\times 4}\)), output (\(\textbf{V}\in\mathbb{R}^{4\times 4\times 4\times 4}\)), and neuron tensors (\(\textbf{W}\in\mathbb{R}^{4\times 4\times 4\times 4}\)); they are all rank-4 tensors.
One exception is the first layer. After all, we do not have rank-4 tensors in the pixel space. For the first layer, we enlarge the tensor dimension in the way of Figure 3 (d). For example, we can transform \(\textbf{U}^{in}\in\mathbb{R}^{3\times 1}\) to \(\textbf{V}\in\mathbb{R}^{4\times 4\times 4\times 4}\) by using neuron tensors \(\textbf{W}^{in}\in\mathbb{R}^{1\times 3\times 4\times 4\times 4\times 4}\).
\[\textbf{V}_{i}=\textbf{U}_{i}^{in}\bigotimes\textbf{W}_{i}^{in} \tag{4}\]
Another exception is the final layer. We need to compress the tensors of the last layer in the way of Figure 3 (c), from \(\textbf{U}\in\mathbb{R}^{4\times 4\times 4\times 4}\) to \(\textbf{V}^{f}\in\mathbb{R}^{1\times 1\times 1\times 1}\), to fit a loss function. The neuron tensor is still \(\textbf{W}\in\mathbb{R}^{4\times 4\times 4\times 4}\), with all of its dimensions contracted.
\[\textbf{V}_{i}^{f}=\textbf{U}_{i}\bigotimes\textbf{W}_{i} \tag{5}\]
At the final layer of TCNNs, we set the number of feature maps the same as the number of classes, and the tensors are compressed to scalars. Thus the final output of TCNNs becomes the same as the final feature maps of ConvNets. With this design, TCNNs can share the same loss functions as ConvNets. Tensor transformation is the key difference between TCNNs and ConvNets. Other than that, TCNNs are quite similar to ConvNets. For example, TCNNs also share the same concepts of kernels, strides, pads, feature maps, loss functions, etc.
### Linear Combinations of Tensors
After the tensor transformation step, TCNNs apply linear combinations, which can be defined as,
\[\textbf{V}=\sum_{i=1}^{n}\textbf{V}_{i} \tag{6}\]
Where \(n=k\times k\times m\), \(k\) is the kernel size, and \(m\) is the number of input channels. In comparison, the basic operation of ConvNets is a linear combination of scalars, \(V=\sum_{i=1}^{n}V_{i}\).
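Continuing the NumPy sketch from Section 3.1, one output position of a tensor-based convolution combines Eq. (3) and Eq. (6): every neuron tensor in the \(k\times k\times m\) receptive field is transformed by its own weight tensor, and the transformed tensors are summed. The contraction pattern is the same assumed pairing as before.

```python
k, m, d = 3, 2, 4                          # kernel size, input channels, dim
U = np.random.randn(k, k, m, d, d, d, d)   # neuron tensors in one receptive field
W = np.random.randn(k, k, m, d, d, d, d)   # one weight tensor per input tensor
V = np.zeros((d, d, d, d))
for i in np.ndindex(k, k, m):
    # Eq. (3): transform each input tensor; Eq. (6): sum the results
    V += np.einsum('abcd,dcef->abef', U[i], W[i])
print(V.shape)  # (4, 4, 4, 4) -- one structured "neuron" of the output map
```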
### Loss
One challenge of TCNNs is designing a loss function compatible with tensors as the input. We want to make TCNNs compatible with the loss functions of ConvNets. Thus we compress the tensors to scalars and set the number of feature maps the same as the number of classes at the final layer to feed a cross-entropy loss function.
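A minimal sketch of this final-layer design, under the same assumptions as the earlier NumPy snippets: one feature map per class, each rank-4 tensor fully contracted to a scalar logit that feeds a standard cross-entropy.

```python
num_classes, d = 10, 4
U = np.random.randn(num_classes, d, d, d, d)     # final-layer input tensors
W_f = np.random.randn(num_classes, d, d, d, d)   # one weight tensor per class
logits = np.einsum('kabcd,kabcd->k', U, W_f)     # Eq. (5): full contraction
p = np.exp(logits - logits.max()); p /= p.sum()  # softmax over class channels
loss = -np.log(p[3])                             # cross-entropy, label 3 assumed
```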
### Residual TCNNs
To further improve the performance of TCNNs, we follow the idea of ResNets [9] and propose residual structures for TCNNs, as Figure 4 shows. Similar to the classic ResNets [9], we build triple (as Figure 4 left shows) or quadruple layer skips (as Figure 4 right shows) and make sure the feature maps between the skip connections have the same shape, including the tensor dimensions.
## 4 Parameter-wise Efficiency
Data augmentation acts as a regularizer, which is helpful to reduce the overfitting caused by the overparameterization of ConvNets. To reduce the potential impact of the regularizers and to test purely the efficiency of different network structures, we use only the original data and input size; neither data augmentation techniques (resizing, cropping, flipping, etc.) nor prior knowledge (normalization) is used.
We choose several network structures, including ResNets [9], EfficientNets [1], ShuffleNets [36], MobileNets [27], ConvNeXt [22], ResNeXt [32], RegNet [24], etc., as the benchmarks. These networks focus either on efficiency, performance, or both. We apply these models and TCNNs on CIFAR10, CIFAR100, and Tiny ImageNet for comparison.
For all the test cases (a test case is a combination of a network structure and a dataset), we use the same meta parameters and stop training once the loss in the validation set no longer decreases and report the accuracy in the validation set. For each network structure, we change the final layer's output numbers to adapt to different class numbers. The neuron tensors in TCNNs are initialized with [10].
### Cifar10
The CIFAR-10 dataset consists of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The TCNN we use is shown in Table 1. We use 0.001 as the learning rate and 64 as the batch size. We use the cross-entropy loss and the Adam optimizer. The accuracy is reported once the validation loss stops decreasing. The result is shown in Table 2 and Figure 1. We can see that our model, tcnn0, achieves the second-best performance by using fewer parameters.
### Cifar100
CIFAR100 has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class. We use the same meta parameters as in section 4.1. The TCNN structure is shown in Table 3. For the benchmark models, we change their final layers' output to 100. As Table 2 and Figure 5 show, our model, tcnn1, achieves the best performance by using far fewer parameters than most models.
### Tiny ImageNet
Tiny ImageNet [18] is a subset of the ImageNet dataset [5], which contains 100,000 images of 200 classes (500 for each class) downsized to 64x64. We use the same meta parameters as in section 4.1 and section 4.2. The
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline _\#Layers_ & _Neural Tensors_ & _\#Channel_ & _Output_ \\ \hline
0 & **Block\#1** & 1 & \(15\times 15\times\textbf{V}\) \\
1 & **Block\#1** & 1 & \(15\times 15\times\textbf{V}\) \\
2 & **Block\#1** & 1 & \(7\times 7\times\textbf{V}\) \\
3 & **Block\#1** & 1 & \(7\times 7\times\textbf{V}\) \\
4 & **Block\#1** & 1 & \(3\times 3\times\textbf{V}\) \\
5 & **Block\#1** & 1 & \(3\times 3\times\textbf{V}\) \\
6 & \(3\times 3\times\textbf{W}^{f}\) & 10 & \(1\times 1\times\textbf{V}^{f}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The structure of **tcnn0**. The neuron tensor of _Layer#0_ is \(\textbf{W}^{in}\in\mathbb{R}^{1\times 3\times 6\times 6\times 6\times 6}\), which transforms each input tensor \(\textbf{U}^{in}\in\mathbb{R}^{3\times 1}\) to an output tensor \(\textbf{V}\in\mathbb{R}^{6\times 6\times 6\times 6}\). _Layer#0_ is followed by a BatchNorm layer and a PReLU layer, and so are the following layers. In the final layer, the output tensors are compressed to scalars (from \(\textbf{U}\in\mathbb{R}^{6\times 6\times 6\times 6}\) to \(\textbf{V}^{f}\in\mathbb{R}^{1\times 1\times 1\times 1}\)). The number of channels of the final layer is 10, namely the class number.
Figure 4: Two TCNN residual structures. Left: the triple skips block, **Block#1**. Right: the quadruple skips block, **Block#2**. \(W\in\mathbb{R}^{k_{1}\times k_{2}\times k_{3}\times k_{4}}\) is the neuron tensor. Each layer is followed by a PReLU [10] layer and a Batch Normalization layer [14].
TCNN structure is shown in Table 4. We can see that our model tcnn2 achieves the second-best performance by using fewer parameters.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline _Datasets_ & _Model_ & _Acc._ & _\#Params_ & _Ratio-to-TCNNs_ & _\#Epochs_ \\ \hline \multirow{9}{*}{CIFAR10 [17]} & shufflenet\_v2\_x0\_5 [36] & 60.5\% & **0.35M** & 0.9x & 9 \\ & **tcnn0** & 75.1\% & 0.39M & 1x & 13 \\ & mobilenet\_v3\_small [27] & 63.2\% & 1.53M & 3.92x & 13 \\ & mobilenet\_v2 [27] & 73.0\% & 2.24M & 5.74x & 17 \\ & regnet\_v400mf [24] & 68.0\% & 3.91M & 10x & 14 \\ & EfficientNet-B0 [1] & 70.0\% & 4.0M & 10.26x & 8 \\ & mobilenet\_v3\_large [27] & 56.7\% & 4.21M & 10.79x & 18 \\ & regnet\_y800mf [24] & 33.5\% & 5.66M & 14.51x & 6 \\ & EfficientNet-B1 [1] & 72.9\% & 6.5M & 16.67x & 14 \\ & EfficientNet-B2 [1] & 75.0\% & 7.72M & 19.79x & 20 \\ & EfficientNet-B3 [1] & **76.8**\% & 10.71M & 27.46x & 38 \\ & resnet18 [9] & 73.9\% & 11.18M & 28.67x & 6 \\ & resnet34 [9] & 74.2\% & 21.29M & 54.59x & 8 \\ & resnext50\_32x4d [32] & 75.0\% & 23M & 58.97x & 19 \\ & convnext\_tiny [22] & 60.4\% & 27.82M & 71.33x & 5 \\ & convnext\_small [22] & 59.8\% & 49.45M & 126.79x & 5 \\ \hline \multirow{9}{*}{CIFAR100 [17]} & shufflenet\_v2\_x0\_5 [36] & 32.3\% & **0.44M** & 0.31x & 14 \\ & **tcnn1** & **46.9\%** & 1.4M & 1x & 6 \\ & mobilenet\_v3\_small [27] & 33.8\% & 1.62M & 1.16x & 28 \\ & mobilenet\_v2 [27] & 38.8\% & 2.35M & 1.68x & 20 \\ & regnet\_v400mf [24] & 31.9\% & 3.95M & 1.82x & 8 \\ & EfficientNet-B0 [1] & 35.0\% & 4.14M & 2.96x & 21 \\ & regnet\_y\_800mf [] & 38.2\% & 5.73M & 4.09x & 10 \\ & EfficientNet-B1 [1] & 39.9\% & 6.64M & 4.74x & 8 \\ & EfficientNet-B2 [1] & 44.3\% & 7.84M & 5.6x & 14 \\ & resnet18 [9] & 42.9\% & 11.23M & 8.02x & 9 \\ & resnet34 [9] & 42.4\% & 21.34M & 15.25x & 7 \\ & resnext50\_32x4d [32] & 36.4\% & 23.18M & 16.56x & 10 \\ & convnext\_tiny [22] & 30.4\% & 27.89M & 19.9x & 5 \\ & convnext\_small [22] & 32.6\% & 49.52M & 35.37x & 8 \\ \hline \multirow{9}{*}{Tiny ImageNet [18]} & shufflenet\_v2\_x0\_5 [36] & 28.6\% & **0.55M** & 0.37x & 13 \\ & **tcnn2** & 34.4\% & 1.48M & 1x & 6 \\ & mobilenet\_v3\_small [27] & 28.7\% & 1.72M & 1.16x & 18 \\ & mobilenet\_v2 [27] & 32.4\% & 2.35M & 1.59x & 16 \\ & regnet\_y\_400mf [24] & 29.3\% & 3.99M & 2.69x & 12 \\ & EfficientNet-B0 [1] & 32.4\% & 4.26M & 2.88x & 13 \\ & mobileNet\_v3\_large [27] & 29.1\% & 4.46M & 3.01x & 15 \\ & regnet\_y\_800mf [24] & **35.2\%** & 5.8M & 3.92x & 8 \\ & EfficientNet-B1 [1] & 32.2\% & 6.77M & 4.57x & 17 \\ & EfficientNet-B2 [1] & 31.3\% & 7.98M & 5.39x & 16 \\ & EfficientNet-B3 [1] & 32.2\% & 11.0M & 7.43x & 20 \\ & resnet18 [9] & 33.0\% & 11.28M & 7.62x & 5 \\ & resnet34 [9] & 34.2\% & 21.39M & 14.45x & 5 \\ & convnext\_tiny [22] & 26.9\% & 27.97M & 18.89x & 9 \\ & convnext\_small [22] & 27.6\% & 49.59M & 33.51x & 9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: TCNNs Performance Results on CIFAR10, CIFAR100, and Tiny ImageNet. The epoch# here is the epoch number when the lowest validation loss is recorded.
### Mnist
Although TCNNs show higher efficiency than several neural networks, a special residual structure is used. To get rid of the potential impact of this structure, we also design a couple of TCNNs without residual structures and compare them with plain ConvNets on MNIST. In particular, we design four TCNNs and four ConvNets. All of them have a width of one and a depth of four. The ConvNets contain two convolutional layers and two fully-connected layers while TCNNs contain four tensor-based convolutional layers. For the ConvNets, we adjust the number of feature maps and the number of output units to manipulate the number of parameters. Similarly, we manipulate the number of parameters by adjusting the number of feature maps as well as the tensor dimension (each neuron tensor \(\mathbf{T}^{in}\in\mathbb{R}^{2\times 2\times 2\times 2}\)). As Figure 7 shows, TCNNs show much higher efficiency compared to plain ConvNets.
Given the similarity between CapsNets and TCNNs, we also compare TCNNs with several CapsNets variants in terms of efficiency. As Table 5 shows, our two versions of TCNNs outperform all the CapsNets variants.
Another possible reason is the tiny input size. Larger input sizes can usually result in higher accuracy [1]. We use the original input size rather than enlarged ones for all the datasets. Without data augmentation and/or larger input sizes, these models can easily overfit. For example, both resnet34 [9] and EfficientNet-b3 [1] can get close to 100% accuracy on the training set of Tiny ImageNet while the accuracy on the validation set stays at around 30%.
The third reason could be that these models are designed to use a large number of parameters to get good performance on large datasets. Thus it is worthwhile to compare TCNNs with the smaller versions of these models. We can see from Table 2 that our tcnn0 achieves much better performance than shufflenet_v2_x0_5 on CIFAR10 with comparable parameters (0.35M versus 0.39M). Similarly, tcnn1 and tcnn2 achieve better performance than mobilenet_v3_small [27] with comparable parameters on CIFAR100 and Tiny ImageNet, respectively.
Another interesting question is whether TCNNs can perform well on large datasets like ImageNet. We believe so. However, we did not try it due to the slow training process. Current deep learning frameworks are heavily optimized for ConvNets; as a result, a neuron tensor transformation has to be translated into a large number of convolutions, which makes the training process very slow.
## 5 Adversarial Robustness
We can see that TCNNs generalize better than ConvNets with fewer parameters. Better generalization may result in better robustness since more unseen adversarial examples can be classified correctly if a model generalizes better. We test both white-box and black-box adversarial attacks for TCNNs and ConvNets with different numbers of parameters.
Figure 8 shows the robustness of both CNNs and TCNNs against white-box adversarial attacks. We can see that TCNNs show higher robustness compared to CNNs with the same number of parameters. For example, the TCNN model with 47K parameters is more robust than the CNN model with 1.2M parameters. We also find that TCNNs are not more robust than ConvNets facing black-box adversarial attacks; this result is consistent with CapsNets [12].
## 6 Conclusions
TCNNs make neuron tensors the general units of a neural network. In comparison, the basic units of ConvNets are scalar neurons. In other words, we replace each neuron in ConvNets with a high-rank neuron tensor. We introduce the neuron tensor transformation mechanism to solve the shape-mismatch issue between adjacent layers. TCNNs show higher efficiency compared to classic ConvNets on three small datasets. TCNNs also show higher robustness compared to ConvNets against white-box adversarial attacks.
Figure 8: Robustness of CNNs and TCNNs with different parameter numbers.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Models & routing & Error rate(\%) & Param \# \\ \hline DCNet++ ([23]) & Dynamic (-) & 0.29 & 13.4M \\ DCNet ([23]) & Dynamic (-) & 0.25 & 11.8M \\ CapsNets ([26]) & Dynamic (1) & \(0.34_{\pm 0.03}\) & 6.8M \\ CapsNets ([26]) & Dynamic (3) & \(0.35_{\pm 0.04}\) & 6.8M \\ Atten-Caps ([4]) & Attention (-) & \(0.64\) & \(\approx\) 5.3M \\ CapsNets ([12]) & EM (3) & \(0.44\) & 320K \\ TCNN & - & \(0.32_{\pm 0.03}\) & 171K \\ TCNN & - & \(0.41_{\pm 0.05}\) & 22.2K \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison between TCNNs and CapsNets variants in terms of error rate on MNIST. The number in each routing type is the number of routing iterations.
Figure 7: TCNNs versus ConvNets on MNIST. |
2302.06245 | Calibrating a Deep Neural Network with Its Predecessors | Confidence calibration - the process to calibrate the output probability
distribution of neural networks - is essential for safety-critical applications
of such networks. Recent works verify the link between mis-calibration and
overfitting. However, early stopping, as a well-known technique to mitigate
overfitting, fails to calibrate networks. In this work, we study the limitations
of early stopping and comprehensively analyze the overfitting problem of a
network considering each individual block. We then propose a novel
regularization method, predecessor combination search (PCS), to improve
calibration by searching a combination of best-fitting block predecessors,
where block predecessors are the corresponding network blocks with weight
parameters from earlier training stages. PCS achieves the state-of-the-art
calibration performance on multiple datasets and architectures. In addition,
PCS improves model robustness under dataset distribution shift. | Linwei Tao, Minjing Dong, Daochang Liu, Changming Sun, Chang Xu | 2023-02-13T10:33:55Z | http://arxiv.org/abs/2302.06245v2 | # Calibrating a Deep Neural Network with Its Predecessors
###### Abstract
Confidence calibration - the process to calibrate the output probability distribution of neural networks - is essential for safety-critical applications of such networks. Recent works verify the link between mis-calibration and overfitting. However, early stopping, as a well-known technique to mitigate overfitting, fails to calibrate networks. In this work, we study the limitations of early stopping and comprehensively analyze the overfitting problem of a network considering each individual block. We then propose a novel regularization method, predecessor combination search (PCS), to improve calibration by searching a combination of best-fitting block predecessors, where block predecessors are the corresponding network blocks with weight parameters from earlier training stages. PCS achieves the state-of-the-art calibration performance on multiple datasets and architectures. In addition, PCS improves model robustness under dataset distribution shift.
## 1 Introduction
Deep neural networks (DNNs) have achieved great success across a variety of domains [3], especially on classification-related tasks such as object detection [12, 13, 14, 15] and image classification [16, 17, 18, 19], reaching prediction accuracy far beyond human beings. However, they still suffer from mis-calibrated predictions in the sense that the prediction probability cannot represent the ground-truth probability associated with the class label, in either an over- or under-confident manner. This may lead to fatal problems when safety-critical downstream tasks such as autonomous driving [1] and medical diagnosis [1] rely heavily on the prediction probability.
The underlying cause for mis-calibrated predictions is associated with the capacity of modern neural networks that makes them vulnerable to overfitting [19]. Mukhoti _et al._[14] show that overfitting in modern neural networks mostly results from the overconfidence on mis-classified samples and they empirically verify the strong connection between the overfitting issue and calibration performance. Given this observation, some regularization techniques such as weight decay [19], label smoothing [10], and data augmentation [10, 1] are introduced to improve model calibration.
Early stopping [10] is another well-known regularization method, which suspends training once the model performance stops improving on a held-out validation dataset. Mukhoti _et al._[14] conduct a series of empirical experiments and demonstrate that early stopping according to multiple criteria fails to yield a well-calibrated model. We mainly attribute this sub-optimal solution to the unitary strategy of conventional early stopping techniques, which treat the entire network as a whole. Specifically, an early stopping technique takes a DNN as a black box without investigating the internal components, i.e., the blocks inside the network. However, the increasing depth of modern DNNs makes the optimization more challenging, which could lead to discrepancies in the convergence speeds of different blocks in DNNs. Thus, any model calibration via a conventional early-stopping technique could be a sub-optimal solution.
In this paper, instead of taking a DNN as a whole, we consider a block in a DNN as the basic unit and explore the overfitting problem in each block. We empirically observe that blocks in a network overfit at different stages during training. Unlike the conventional early stopping approach [10] that stops the training of the whole network at a certain point to form a network predecessor, we propose to stop the training of each block at its own best-fitting block predecessor to improve model calibration. However, the blocks in DNNs are strongly coupled with each other, and independently early stopping an individual block does not ensure an optimal solution. To achieve an effective and adaptive early stopping for each block, we take into consideration all possible block predecessor combinations. Our objective is to discover the predecessor combination (PC) with better calibration performance. We propose a neural architecture search inspired approach, predecessor combination search (PCS), to calibrate DNNs, which performs a differential search of the optimal block predecessor combination through a relaxation of the search space as well as a predecessor evaluation estimator.
Our contribution can be summarized as follows: **(1)** We study the overfitting problem of individual blocks empirically and show that different blocks reach their own overfitting points at different stages of training. **(2)** We propose a novel differential PCS method to search a better-calibrated model together with a sampling strategy to improve searching efficiency. **(3)** PCS achieves state-of-the-art results for both pre- and post-temperature scaling [14] on a variety of datasets and architectures via a large number of experiments. We show that PCS works well on out-of-distribution (OoD) samples by shifting the dataset from CIFAR-10 to SVHN [1] and CIFAR-10-C [1].
## 2 Related Works
Due to the mis-calibration problem in modern neural networks [14] and the significant importance of calibration, many techniques [20, 1] have been proposed in recent years. The current calibration methods could be divided into three categories. The first category modifies the training loss by replacing the conventionally used cross-entropy loss with a mean square error loss [15] or a focal loss [16] or by adding an auxiliary regularization loss such as the MMCE loss [17] and the AvUC loss [18]. Bohdal _et al._[1] propose a differentiable surrogate for expected calibration error that improves the calibration performance directly. The recent work in [17] introduces a differentiable bin membership function and applies it on bin-based metrics such as expected calibration error (ECE) [14] to make it become a differentiable auxiliary calibration loss.
Another category is the post-hoc calibration approaches that improve the calibration performance by modifying the prediction logits. Platt scaling [22] learns parameters to perform a linear transformation on the original prediction logits. Isotonic regression [1] learns piece-wise functions to transform the original prediction logits. Histogram binning [1] obtains calibrated probability estimates from decision trees and naive Bayesian classifiers. Bayesian binning into quantiles (BBQ) [10] is an extension of histogram binning with Bayesian model averaging. Beta calibration [15] is proposed for binary classification and Kull _et al._[16] generalize the beta calibration method from binary classification to multi-classification with Dirichlet distributions. Wenger _et al._[1] employ a non-parametric representation using a latent Gaussian process. Among these methods, temperature scaling is the most popular post-hoc approach, which tunes the temperature parameter of the softmax function that minimizes the negative log likelihood (NLL) and does not change the prediction results. In this work, we present the calibration performance with both before and after temperature scaling.
All other regularization methods that can calibrate networks form the third category. Label smoothing [13] implicitly calibrates networks by artificially softening targets to prevent a model from overfitting to the "hard label". Mixup [12] and AugMix [1] are two popular data augmentation techniques for calibration. The mix step in data augmentation increases the generality of datasets and reduces the influence of hard samples that can easily cause over-confidence problem. Weight decay, which dominated regularization methods for neural networks in the past, is now less often used by modern neural networks. However, it still plays an important role in improving model calibration [14].
## 3 Problem Formulation
Considering a dataset \(\mathcal{D}=\langle(x_{i},y_{i})\rangle_{i=1}^{N}\) with \(N\) samples from a joint distribution \((\mathcal{X},\mathcal{Y})\), the ground-truth class label is \(y_{i}\in\{1,2,...,\mathcal{K}\}\), where \(\mathcal{K}\) denotes the number of classes. The probability for a class \(y_{i}\) on a given input \(x_{i}\) predicted by network \(F\) with model parameters \(\Theta\) is denoted as \(\hat{p}_{i,y_{i}}=F_{\Theta}(y_{i}|x_{i})\). The predicted label \(\hat{y}_{i}\) and the corresponding confidence \(\hat{p}_{i}\) are defined as
\[\hat{y}_{i} =\operatorname{argmax}_{y_{i}\in\{1,2,...,\mathcal{K}\}}\,\hat{p }_{i,y_{i}},\] \[\hat{p}_{i} =\max_{y_{i}\in\{1,2,...,\mathcal{K}\}}\,\hat{p}_{i,y_{i}}. \tag{1}\]
When the model is _perfectly calibrated_, the prediction confidence \(\hat{p}\) is expected to represent the real probability \(p\) for each sample \(x_{i}\) with class label \(y_{i}\). In other words, the model accuracy \(\mathbb{P}(\hat{y}=y|\hat{p}=p)\) is \(p\), for all \(p\in[0,1]\).
ECE is a widely-accepted metric to measure calibration performance. Formally, the ECE is defined as the expected absolute difference between the model's confidence and its accuracy, which can be formulated as
\[\mathrm{ECE}=\mathbb{E}_{\hat{p}}\big{[}\left|\mathbb{P}(\hat{y}=y|\hat{p})- \hat{p}\right|\big{]}. \tag{2}\]
Due to the finite samples in datasets, Guo _et al._[13] estimate ECE by dividing confidence \(p\in[0,1]\) into \(\mathbb{B}\) equal-width bins. \(B_{i}\) denotes the set of samples with confidences within \(\left(\frac{i-1}{\mathbb{B}},\frac{i}{\mathbb{B}}\right]\). Let \(I_{i}\) and \(C_{i}\) denote the accuracy and average confidence of all samples in bin \(B_{i}\) respectively. The accuracy of bin \(B_{i}\) is computed as \(I_{i}=\frac{1}{|B_{i}|}\sum_{j\in B_{i}}\mathbb{1}(\hat{y}_{j}=y_{j})\), where \(\mathbb{1}\) is the indicator function, and \(\hat{y}_{j}\) and \(y_{j}\) are the predicted and ground-truth labels for the \(j^{\mathrm{th}}\) sample. Similarly, the confidence \(C_{i}\) of the \(i^{\mathrm{th}}\) bin is computed as \(C_{i}=\frac{1}{|B_{i}|}\sum_{j\in B_{i}}\hat{p}_{j}\), i.e., \(C_{i}\) is the average confidence of all samples in the bin. Thus, in practice, ECE is formulated as the weighted average of the accuracy-confidence difference over the bins:
\[\mathrm{ECE}=\sum_{i=1}^{\mathbb{B}}\frac{|B_{i}|}{N}\left|I_{i}-C_{i}\right|. \tag{3}\]
Along with ECE, the maximum calibration error (MCE) is proposed to capture the worst-case confidence deviation, defined as the maximum difference between a bin's accuracy and confidence: \(\mathrm{MCE}=\max_{i\in\{1,\ldots,\mathbb{B}\}}\left|I_{i}-C_{i}\right|\).
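As a concrete reference, below is a minimal numpy sketch of the binned ECE and MCE estimators; the function name `ece_mce` and the boolean `correct` array are our own conventions, not part of the original formulation.

```python
import numpy as np

def ece_mce(confidences, correct, n_bins=15):
    """Estimate binned ECE (Eq. 3) and MCE from per-sample confidences
    p_hat and a boolean array marking whether each prediction is correct."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, mce, n = 0.0, 0.0, len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)   # ((i-1)/B, i/B]
        if in_bin.sum() == 0:
            continue
        acc = correct[in_bin].mean()        # I_i
        conf = confidences[in_bin].mean()   # C_i
        gap = abs(acc - conf)
        ece += (in_bin.sum() / n) * gap     # weighted average over bins
        mce = max(mce, gap)                 # worst-case bin deviation
    return ece, mce
```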
Guo _et al._[13] and Mukhoti _et al._[1] state that the mis-calibration problem of networks is strongly related to overfitting on training sets. Early stopping, as a well-known regularization technique, mitigates the overfitting problem by
stopping the whole network at the best-fitting epoch. According to previous observations, early stopping should yield a well-calibrated model. However, the calibration performance of an early-stopped model is often far from satisfactory. Thus, we hypothesize that shallow and deep blocks may suffer from different degrees of overfitting during optimization, and a unified regularization criterion applied to the entire network \(F_{\Theta}\) could lead to a sub-optimal solution for model calibration. Instead, it would be more desirable to resolve the overfitting problems block-wise and explore an adaptive regularization approach to improve the model calibration performance.
### Overfitting in Blocks
Most advanced CNN model architectures are stacks of blocks [10]. Suppose there are \(M\) blocks in the network; the model parameters can be represented as \(\Theta=\{\theta_{i}|i=1,2,\dots,M\}\), where \(\theta_{i}\) denotes the parameters of the \(i^{\text{th}}\) block. \(\Theta^{j}\) and \(\theta^{j}\) are used to identify the network and block parameters at the \(j^{\text{th}}\) training epoch. The degree of fitting of \(F_{\Theta}\) is represented by the trend of the validation NLL loss \(\mathcal{L}_{NLL}(F_{\Theta^{j}}(x),y),j=1,2,...,T_{train}\), where \(T_{train}\) is the total number of training epochs over the training process. Conventional early stopping aims to find the weights at a "sweet point"1 epoch to balance between underfitting and overfitting. However, there is no definition of overfitting for individual network blocks. In order to find the "sweet point" epoch of each block, we need to measure the degree of overfitting of an individual block \(f_{i}\). We formally define the _predecessors_ of block \(f_{i}\) as \(\{f_{i}^{j}|j\in\{1,2,\dots,T_{train}\}\}\), where \(f_{i}^{j}\) denotes the \(i^{\text{th}}\) block with the weights at the \(j^{\text{th}}\) epoch. To measure the overfitting of \(f_{i}\), we treat the other blocks of the network \(\tilde{F}_{i}=\{f_{i^{-}}|i^{-}\in\{1,2,\dots,M\}\setminus\{i\}\}\) as constant mappings. The overfitting degree of block \(f_{i}\) can then be represented by the trend of the validation NLL loss over the training process, \(\mathcal{L}_{NLL}(F(x),y;f_{i}^{j}),j=1,2,...,T_{train}\), with the weights of the other network blocks \(\tilde{F}_{i}\) fixed. Based on this idea, we conduct an empirical study to investigate the overfitting behaviors of individual network blocks. A more formal definition of the overfitting measurement of an individual network block is given in the supplementary.
Footnote 1: A sweet point is defined as the balance point between underfitting and overfitting, which normally refers to the lowest point of the validation loss curve.
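To make this measurement concrete, the following is a minimal PyTorch sketch of the block-wise probe. The checkpoint dictionary `ckpts`, identifying a block by a parameter-name prefix, and the omission of the one-epoch fine-tuning used in our experiments are all simplifying assumptions of the sketch.

```python
import copy
import torch

@torch.no_grad()
def block_overfitting_curve(model, ckpts, block_prefix, base_epoch,
                            val_loader, device):
    """Trace the validation NLL of block f_i over training epochs while
    all other blocks stay fixed at the whole-model 'sweet point'
    (base_epoch). `ckpts[j]` is assumed to map epoch j to a full
    state_dict; a block is identified by a prefix such as 'layer4.'."""
    losses = []
    base = ckpts[base_epoch]
    for j in sorted(ckpts):
        state = copy.deepcopy(base)
        # swap in the epoch-j predecessor of block f_i only
        for name in state:
            if name.startswith(block_prefix):
                state[name] = ckpts[j][name]
        model.load_state_dict(state)
        model.eval()
        nll, n = 0.0, 0
        for x, y in val_loader:
            x, y = x.to(device), y.to(device)
            nll += torch.nn.functional.cross_entropy(
                model(x), y, reduction="sum").item()
            n += y.numel()
        losses.append((j, nll / n))   # one point of the overfitting curve
    return losses
```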
We present the empirical study in Figure 1, demonstrating the different overfitting behaviors of individual network blocks. A ResNet-50 is first trained on CIFAR-10 for \(T_{train}=350\) epochs. Each sub-figure then shows the overfitting behavior of block \(f_{i}\), with every block other than \(f_{i}\) fixed to a certain predecessor. Note that it is non-trivial to properly select the fixed predecessors for all other blocks \(\tilde{F}_{i}\), and ideally these blocks should be at their own "sweet points". We adopt a simplified approach and fix \(\tilde{F}_{i}\) at the "sweet point" of the whole model as an approximation, i.e., \(\tilde{F}_{i}=\{f_{i^{-}}^{\rho^{*}}|i^{-}\in\{1,2,\dots,M\}\setminus\{i\}\}\), where \(\rho^{*}\) is obtained by early stopping the whole model based on either the validation loss or the classification error. Apart from the validation loss, the validation ECE and classification error are also plotted in Figure 1. The proportionally scaled values of the validation loss and ECE are reported for better visualization. In the supplementary, another set of evaluation results is reported, where the other blocks are fixed to the weight parameters of the model at the end of training, i.e., \(\rho^{*}=T_{train}\). From the variation of the validation loss of each block \(f_{i}\) in Figure 1, we make the following intriguing observations on overfitting in blocks.
**1. Deeper convolutional blocks tend to have more severe overfitting problems as training proceeds.** From the first four columns in Figure 1, we observe that all convolutional blocks with the same block index share a similar overfitting pattern under both settings. However, looking across each row, the convolutional blocks show patterns that differ from one another: the deepest convolutional block, i.e., \(f_{4}\), starts rapid overfitting after the first learning rate scheduling point.

Figure 1: **Empirical evidence that different blocks have different overfitting behaviors. A ResNet-50 on CIFAR-10 is first trained for 350 epochs with the cross-entropy loss and learning rate scheduling at epochs 150 and 250. ResNet-50 has 5 blocks, i.e., 4 convolutional blocks and 1 final linear layer. Each sub-figure shows the overfitting of block \(f_{i}\) through the training epochs with \(\tilde{F}_{i}\) fixed at potential “sweet point” predecessors. The top and bottom rows show \(\tilde{F}_{i}\) fixed at different predecessors, namely the “sweet point” epoch of the whole model obtained by early stopping on loss (epoch 151) and on error (epoch 281), respectively. Columns correspond to the different blocks \(f_{i}\) under study. All combinations are fine-tuned with one epoch on the training set.**
**2. The overfitting problem of the final linear layer is much more complex and depends heavily on the other blocks.** For the final linear block \(f_{5}\) in the last column, the loss presents completely reversed patterns under the two choices of \(\tilde{F}_{i}\). The final block \(f_{5}\) with \(\tilde{F}_{i}\) fixed at the predecessors early-stopped on loss has a slight overfitting problem from epoch \(50\) to the first learning rate scheduling point at epoch \(150\), while \(f_{5}\) with \(\tilde{F}_{i}\) fixed at the predecessors early-stopped on classification error continues to overfit from the very first epoch until a few epochs after the first learning rate scheduling point, while the error remains at the same level.
**3. Mis-calibration is linked with block overfitting.** Looking at the variation of ECE and validation loss, we observe that the overfitting trend of an individual block is consistent with the variation of ECE, which further extends the link between mis-calibration and overfitting to the block-wise level.
According to these observations, it could be very difficult to identify the ideal PC due to the inter-dependency among blocks (the linear blocks in the two rows have totally different behaviors). In other words, simply finding the best-fitting blocks individually and combining them together without taking other blocks into account may not produce a best-fitting model. This motivates us to explore an automatic searching algorithm to find a group of predecessors at "sweet points" that allows \(F^{*}=\{f_{i}^{\rho_{i}^{*}}|i=1,2,\ldots,M\}\) to best fit the training set. Here, \(\rho_{i}^{*}\) denotes the optimal block predecessor choice of block \(f_{i}\).
## 4 Methodology
Based on the observations in Figure 1, we propose to explore a PC representation \(\mathcal{P}=\{\rho_{i}|i=1,2,\ldots,M\}\) to tackle the overfitting issue in blocks and improve model calibration performance. The objective can be formulated as
\[\min_{\mathcal{P}}\left(ERR(F)+\lambda\cdot ECE(F)\right) \tag{4}\]
where \(F\) is \(\{f_{i}^{\rho_{i}}|i=1,2,\ldots,M\}\), \(\lambda\) denotes a hyperparameter, \(\rho_{i}\in\{1,2,\ldots,T_{train}\}\) indicates the predecessor selection of block \(f_{i}\), and \(ERR\) and \(ECE\) are the classification error and ECE, respectively. The direct optimization of Eq. (4) is not feasible via gradient descent due to two issues. First, the discrete representation \(\mathcal{P}\) makes Eq. (4) an optimization problem over a discrete domain. Second, both terms \(ERR\) and \(ECE\) in our objective are not differentiable. Thus, we introduce a PCS framework to tackle the aforementioned issues. Specifically, we first introduce differentiable combination sampling through a continuous relaxation of the PC representation. A proxy classification error and ECE landscape is then introduced to achieve differential optimization of our objective through an estimator that predicts the classification error and ECE.
### Differentiable Combination Sampling
In our PCS framework, the objective is to discover the optimal selection of predecessors from candidate sets for each block. To learn the selection, we first introduce a \(K\)-dimensional trainable parameter \(\alpha_{i}\in\mathbb{R}^{K}\) for this predecessor selection, where \(K\) denotes the number of candidates. Then \(\rho_{i}\) can be obtained as the argmax of the selection parameter \(\alpha_{i}\), further represented in one-hot encoding as \(\rho_{i}\in\mathbb{R}^{K}\). The PC representation \(\mathcal{P}\) can be written as:
\[\mathcal{P}=\{\rho_{i}|i=1,2,\ldots,M\},\] \[\text{s.t.}\quad\rho_{i}=\text{one-hot}(\operatorname{argmax}\,\alpha_{i}). \tag{5}\]
To relax the discrete PC representation for gradient-based optimization, we use the Gumbel-Softmax trick [1] to approximate the one-hot distribution and introduce randomness. The \(\rho_{i}\) in Eq. (5) can be relaxed as:
\[\tilde{\rho}_{i}^{k}=\frac{\exp((\alpha_{i}^{k}+\xi_{i}^{k})/\tau)}{\sum_{k^{ \prime}=1}^{K}\exp((\alpha_{i}^{k^{\prime}}+\xi_{i}^{k^{\prime}})/\tau)}, \tag{6}\]
where \(\tau\) is the temperature parameter, \(\xi_{i}^{k}\) is an i.i.d. sample from Gumbel(0, 1), and \(k\) and \(k^{\prime}\) denote the \(k^{\text{th}}\) and \(k^{\prime\text{th}}\) logits of the corresponding \(K\)-dimensional vector, respectively. We denote the relaxed PC representation as \(\tilde{\mathcal{P}}=\{\tilde{\rho}_{i}|i=1,2,\ldots,M\}\). With the learning of \(\alpha_{i}\), \(\tilde{\mathcal{P}}\) explores random combinations at the beginning of the search and gradually converges to a relatively stable state.
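This relaxation is standard, and PyTorch ships a built-in Gumbel-Softmax; below is a minimal sketch with the straight-through `hard=True` variant, so the forward pass yields a concrete one-hot combination while gradients flow through the soft samples. The shapes \(M=5\), \(K=50\) are illustrative placeholders.

```python
import torch
import torch.nn.functional as F

def sample_relaxed_pc(alpha, tau=1.0, hard=True):
    """Draw a relaxed predecessor-combination representation (Eq. 6).
    `alpha` has shape (M, K): one K-dim selection parameter per block.
    hard=True returns one-hot rows in the forward pass while keeping
    the soft relaxation for the backward pass (straight-through)."""
    return F.gumbel_softmax(alpha, tau=tau, hard=hard, dim=-1)

M, K = 5, 50                       # e.g., ResNet-50: 5 blocks, 50 candidates
alpha = torch.zeros(M, K, requires_grad=True)
rho = sample_relaxed_pc(alpha)     # (M, K), each row ~ one-hot
choices = rho.argmax(dim=-1)       # concrete predecessor index per block
```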
For simplicity, we denote the model with \(\tilde{\mathcal{P}}\) as \(F_{\tilde{\mathcal{P}}}\). The classification error and ECE on the validation set can be denoted as \(ERR(\mathcal{D}_{val},F_{\tilde{\mathcal{P}}})\) and \(ECE(\mathcal{D}_{val},F_{\tilde{\mathcal{P}}})\) respectively. Since the sampled combination hard-combines blocks from different training epochs, we fine-tune \(F_{\tilde{\mathcal{P}}}\) with one more epoch on \(\mathcal{D}_{train}\), and denote the result as \(F_{\tilde{\mathcal{P}}}^{*}\). In PCS, the original objective (Eq. (4)) can be reformulated as:
\[\min_{A}\left(ERR(\mathcal{D}_{val},F_{\tilde{\mathcal{P}}}^{*})+\lambda\cdot ECE (\mathcal{D}_{val},F_{\tilde{\mathcal{P}}}^{*})\right), \tag{7}\]
where \(A=\{\alpha_{i}|i=1,2,\ldots,M\}\) is the collection of learnable selection parameters for all blocks.
### Proxy Classification Error and ECE Landscape
For differential optimization of the PC representation \(\tilde{\mathcal{P}}\), we use a trainable estimator \(\psi\) to obtain proxies \(E\hat{R}R\), \(E\hat{C}E=\psi(\tilde{\mathcal{P}})\) that approximate the classification error and ECE. Since the input \(\tilde{\mathcal{P}}\) is sequential data, we build the estimator \(\psi\) with a one-layer long short-term memory (LSTM) network: \(\mathbb{R}^{M\times K}\rightarrow\mathbb{R}^{d}\), mapping \(\tilde{\mathcal{P}}\) to a \(d\)-dimensional embedding vector, followed by a linear layer: \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{2}\) that outputs \(E\hat{R}R\) and \(E\hat{C}E\). The estimator \(\psi\) is trained with a weighted mean squared error loss function:
\[\min_{\psi}L(\psi)= \frac{1}{T_{se}}\sum_{t=1}^{T_{se}}(E\hat{R}R^{(t)}-ERR^{(t)}( \mathcal{D}_{val},F_{\tilde{\mathcal{P}}}^{*}))^{2}\] \[+ \gamma(E\hat{C}E^{(t)}-ECE^{(t)}(\mathcal{D}_{val},F_{\tilde{ \mathcal{P}}}^{*}))^{2}, \tag{8}\]
where \(\gamma\) is a hyperparameter to control the loss ratio of ECE, \(T_{se}\) is the total searching steps and the superscript \((t)\) indicates the evaluation results at the \(t^{\text{th}}\) time step. All pairs
of \(\tilde{\mathcal{P}}\) and its corresponding classification error and ECE are stored in a memory \(\Pi\) to optimize estimator \(\psi\). After each searching step \(t\), memory \(\Pi\) is updated by \(\Pi=\Pi\cup\{(\tilde{\mathcal{P}}^{(t)}:(ECE^{(t)},ERR^{(t)}))\}\). We can then use the optimized estimator \(\psi^{*}\) to reformulate PCS objective Eq. (7):
\[\begin{split}&\min_{A}(E\hat{R}R^{{}^{*}}+\lambda E\hat{C}E^{{}^{*} }),\\ &\text{where }E\hat{R}R^{{}^{*}},E\hat{C}E^{{}^{*}}=\psi^{*}( \tilde{\mathcal{P}}).\end{split} \tag{9}\]
The gradients of \(E\hat{R}R^{{}^{*}}\) and \(E\hat{C}E^{{}^{*}}\) can be used to optimize \(\tilde{\mathcal{P}}\) and thus \(A\):
\[A^{\prime}\gets A-\eta\cdot\nabla_{A}(E\hat{R}R^{{}^{*}}+\lambda E\hat{C}E ^{{}^{*}}), \tag{10}\]
where \(A^{\prime}\) is the new predecessor selection parameter and \(\eta\) is the learning rate. At the next searching time step, the corresponding \(\tilde{\mathcal{P}}^{\prime}\) is based on \(A^{\prime}\), and memory \(\Pi\) is updated to \(\Pi=\Pi\cup\{(\tilde{\mathcal{P}}^{\prime}:(ECE^{{}^{\prime}},ERR^{{}^{\prime} })\}\).
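As an illustration, here is a compact PyTorch sketch of the estimator and its training loss (Eq. 8); the hidden size \(d=64\) is our own placeholder choice.

```python
import torch
import torch.nn as nn

class PerformanceEstimator(nn.Module):
    """Sketch of the estimator psi: a one-layer LSTM encodes the sequence
    of M relaxed selections (each a K-dim vector) into a d-dim embedding,
    and a linear head predicts (ERR_hat, ECE_hat)."""
    def __init__(self, K, d=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=K, hidden_size=d, batch_first=True)
        self.head = nn.Linear(d, 2)

    def forward(self, pc):             # pc: (batch, M, K)
        _, (h, _) = self.lstm(pc)      # h: (1, batch, d), final hidden state
        out = self.head(h.squeeze(0))  # (batch, 2)
        return out[:, 0], out[:, 1]    # ERR_hat, ECE_hat

def estimator_loss(psi, pc, err, ece, gamma=1.0):
    """Weighted MSE between proxies and measured (err, ece), as in Eq. 8."""
    err_hat, ece_hat = psi(pc)
    return ((err_hat - err) ** 2 + gamma * (ece_hat - ece) ** 2).mean()
```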
**Remark: Search Procedure** We first train model \(F_{\Theta}\) for \(T_{train}\) epochs and store the weight parameters at \(K\) randomly chosen epochs. After that, we randomly initialize a warm-up population \(\mathbb{H}\) of size \(\mathcal{S}\) and evaluate the classification error \(ERR^{(t)}(\mathcal{D}_{val},F^{*}_{\tilde{\mathcal{P}}_{i}})\) and ECE \(ECE^{(t)}(\mathcal{D}_{val},F^{*}_{\tilde{\mathcal{P}}_{i}})\) of each \(\tilde{\mathcal{P}}_{i}\in\mathbb{H}\). The combination-performance pairs are then stored in memory \(\Pi\), which is used to warm up the estimator \(\psi\) and equip \(\psi\) with prior knowledge about the classification error and ECE before searching. After the initial training of the model and the warm-up of \(\psi\), the searching procedure is conducted for \(T_{se}\) steps. In each step, one PC representation \(\tilde{\mathcal{P}}^{(t)}\) is sampled based on the current \(A^{(t)}\). The corresponding \(F_{\tilde{\mathcal{P}}^{(t)}}\) is then fine-tuned for one epoch and evaluated to obtain a validation error and ECE for the training of \(\psi\). \(A^{(t)}\) is then optimized with the proxy classification error \(E\hat{R}R\) and proxy ECE \(E\hat{C}E\). The detailed algorithm is provided in the supplementary.
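Putting the pieces together, a condensed sketch of the search loop might look as follows, reusing `sample_relaxed_pc`, `PerformanceEstimator`, and `estimator_loss` from the sketches above; `fine_tune_and_eval` is a hypothetical helper that instantiates \(F_{\tilde{\mathcal{P}}}\) from the stored checkpoints, fine-tunes it for one epoch, and returns the validation error and ECE. Refitting \(\psi\) on the whole memory every step is a simplification.

```python
import torch

# assumed from the sketches above: alpha (M x K selection parameters),
# psi = PerformanceEstimator(K), sample_relaxed_pc, estimator_loss
opt_alpha = torch.optim.SGD([alpha], lr=0.01)          # eta is illustrative
opt_psi = torch.optim.Adam(psi.parameters(), lr=1e-3)
memory = []                                            # Pi, warmed up beforehand
lam, T_se = 25.0, 100

for t in range(T_se):
    rho = sample_relaxed_pc(alpha)                     # hard one-hot (M, K)
    # hypothetical helper: build F from checkpoints, fine-tune one epoch,
    # and evaluate on the validation set
    err, ece = fine_tune_and_eval(rho.detach())
    memory.append((rho.detach(), err, ece))

    # fit the estimator psi on the memory (Eq. 8)
    for pc, e, c in memory:
        opt_psi.zero_grad()
        estimator_loss(psi, pc.unsqueeze(0),
                       torch.tensor([e]), torch.tensor([c])).backward()
        opt_psi.step()

    # update the selection parameters A with the proxies (Eqs. 9 and 10)
    err_hat, ece_hat = psi(sample_relaxed_pc(alpha).unsqueeze(0))
    opt_alpha.zero_grad()
    (err_hat + lam * ece_hat).mean().backward()
    opt_alpha.step()
```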
**Remark: Search Space Sampling Strategy** Each selection \(\rho\) is an integer between \(1\) and the total training epoch \(T_{train}\), so without sampling the candidate size \(K\) would be \(T_{train}\). However, storing weight parameters over all training epochs can be very storage-intensive. For instance, storing all candidates of a ResNet-50 trained for 350 epochs can take up to 33GB. Considering that the model parameters of adjacent epochs are similar, especially at late training stages, we propose four sampling strategies to reduce the size of the search space and improve storage efficiency: (1) _Random Sampling_, randomly sampling \(K\) different epochs ranging from \(1\) to \(T_{train}\); (2) _Uniform Sampling_, uniformly sampling \(K\) epochs ranging from \(1\) to \(T_{train}\); (3) _Laplace Sampling_, since model parameters change much faster at earlier epochs, sampling \(K\) epochs ranging from \(1\) to \(T_{train}\) with a Laplace distribution centered at 0; and (4) _Piece-Wise Laplace Sampling_, since model parameters change much faster at the earlier epochs of each learning schedule, sampling \(K\) epochs ranging from \(1\) to \(T_{train}\) with multiple Laplace distributions centered at 0 and at the other learning schedule epochs (150 and 250 in our case). Since the searching difficulty grows exponentially with \(K\), another benefit of a small candidate size is improved searching efficiency.
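A hedged numpy sketch of the four strategies follows; the Laplace scale, the seed, and leaving duplicate epochs undeduplicated are our own choices.

```python
import numpy as np

def sample_epochs(T_train=350, K=50, strategy="random",
                  schedule=(0, 150, 250), scale=30.0, seed=1):
    """Pick K checkpoint epochs out of T_train to form the search space."""
    rng = np.random.default_rng(seed)
    if strategy == "random":
        return np.sort(rng.choice(np.arange(1, T_train + 1), K, replace=False))
    if strategy == "uniform":
        return np.linspace(1, T_train, K).astype(int)
    if strategy == "laplace":
        centers = np.zeros(K)                 # mass near the start of training
    elif strategy == "piecewise_laplace":
        centers = rng.choice(schedule, K)     # one mode per LR schedule point
    else:
        raise ValueError(strategy)
    # fold the Laplace noise to the right of each center; duplicates are
    # possible and left as-is for brevity
    epochs = centers + np.abs(rng.laplace(0.0, scale, K))
    return np.clip(np.round(epochs), 1, T_train).astype(int)
```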
## 5 Experiments
### Experimental Settings
**Datasets** We conduct experiments on various datasets, including CIFAR-10/100 [11] and TinyImageNet [14] to evaluate the calibration performance. We also include the robustness evaluation on Out-of-Distribution (OoD) datasets, including SVHN [15] and CIFAR-10-C [1]. The details about datasets can be found in supplementary.
**Baselines** To verify the effectiveness of our proposed algorithm, we include different networks for evaluation, including ResNet-50, ResNet-110 [1], Wide-ResNet-26-10 [23] and DenseNet-121 [10], and compare with various approaches, including training with weight decay at \(5\times 10^{-4}\) (we find that weight decay at \(5\times 10^{-4}\) performs the best among multiple values), Brier Loss [1], MMCE loss [21], Label smoothing [23] with a smoothing factor \(\alpha_{LS}=0.05\), focal loss [11] with regularisation parameter \(\gamma_{focal}=3\), and scheduled focal loss FLSD-53 [12] which uses \(\gamma_{focal}=5\) for \(\hat{p}\in[0,0.2)\) and \(\gamma_{focal}=3\) for \(\hat{p}\in[0.2,1)\).
**Other Calibration Metrics** Recent works [11, 10, 12] point out the defects of ECE. To evaluate our method comprehensively, we evaluate our method on three additional calibration metrics, i.e., MCE, Adaptive-ECE [14] and classwise-ECE [13] along with ECE. We also measure PCS with reliability plots. The details about additional calibration metrics and additional experimental results can be found in supplementary.
Table 1: ECE \((\%)\) (computed using 15 bins) along with the optimal temperatures, for Weight Decay, Brier Loss, MMCE, Label Smoothing, FL-3, FLSD-53, and PCS (ours), both pre- and post-temperature scaling, on CIFAR-100, CIFAR-10, and Tiny-ImageNet with ResNet-50, ResNet-110, Wide-ResNet-26-10, and DenseNet-121.
**Training Setup** For training on CIFAR-10/100, we set \(T_{train}=350\). The learning rate is set to \(0.1\) for epoch \(0\) to \(150\), \(0.01\) for \(150\) to \(250\), and \(0.001\) for \(250\) until the end of training. For training on Tiny-ImageNet, we set \(T_{train}=100\). We follow the same training and validation set split setting as Mukhoti _et al._[2020]. The learning rate is set to \(0.1\) for epoch \(0\) to \(40\), \(0.01\) for epoch \(40\) to \(60\), and \(0.001\) for \(60\) until the end of training. The fine-tuning learning rate is set to \(10^{-4}\) for CIFAR-10, \(5\times 10^{-4}\) for CIFAR-100, and \(10^{-3}\) for Tiny-ImageNet. The searching process is performed with \(T_{se}=100\) steps. The population size is \(\mathcal{S}=100\). Experiments are conducted with ResNet-50 on CIFAR-10 unless otherwise specified. All networks are optimized using the SGD optimizer with a weight decay of \(5\times 10^{-4}\) and a momentum of 0.9. The training batch size is set to 128. All experiments are conducted on a single Tesla V-100 GPU with all random seeds set to 1. Our code and the results of the comparison methods are based on the public code and the pre-trained weights provided by Mukhoti _et al._[2020]. Other hyperparameters and settings can be found in the supplementary.
**Temperature Scaling** Following the setting in the prior work [Mukhoti _et al._, 2020], the temperature parameter \(\tau\) is optimized by grid searching with \(\tau\in[0,0.1,0.2,\dots,10]\) on the validation set and finding the one with the best post-temperature-scaling ECE, which is also applied on the additional calibration metrics.
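For reference, a small sketch of this grid search is given below, reusing the `ece_mce` helper sketched earlier; the grid starts at 0.1 since \(\tau=0\) is degenerate, and `scipy.special.softmax` is used for the scaled probabilities.

```python
import numpy as np
from scipy.special import softmax

def grid_search_temperature(val_logits, val_labels,
                            taus=np.arange(0.1, 10.05, 0.1)):
    """Pick the temperature with the lowest post-scaling ECE on the
    validation set (grid search, following Mukhoti et al. [2020])."""
    best_tau, best_ece = 1.0, float("inf")
    for tau in taus:
        probs = softmax(val_logits / tau, axis=1)   # rescaled softmax
        conf = probs.max(axis=1)
        correct = probs.argmax(axis=1) == val_labels
        ece, _ = ece_mce(conf, correct, n_bins=15)  # helper sketched earlier
        if ece < best_ece:
            best_tau, best_ece = float(tau), float(ece)
    return best_tau
```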
### Calibration Performance
We report ECE\((\%)\) (computed using 15 bins) along with optimal temperatures in Table 1. PCS achieves the state-of-the-art ECE across all models and datasets and outperforms previous works by large margins, especially for pre-temperature-scaling results. More specifically, most PCS pre-temperature-scaling results already substantially exceed the post-temperature-scaling results of previous works. The result of ResNet-110 on CIFAR-100 achieves the best calibration performance compared to previous works, with a \(7\%\) decrease in ECE. Among the comparison approaches, the model trained with focal loss is broadly better calibrated than the other methods. However, it fails in some cases, such as the evaluation on Tiny-ImageNet. In addition, the scheduled \(\gamma_{focal}\) trick does not always work better than a fixed \(\gamma_{focal}\), and it is hard to ascertain which of FL-3 and FLSD-53 is better. The MMCE auxiliary loss performs the worst before temperature scaling. Another notable point of PCS is that multiple results, such as those with ResNet-110 on CIFAR-10/100, achieve an innately calibrated model (T=1.0), which means that the PC itself has already yielded a well-calibrated model without temperature scaling.
We also evaluate PCS on other widely-accepted metrics, including Adaptive-ECE, Classwise-ECE, and test set error. PCS achieves state-of-the-art calibration results in almost all cases. The test set error on Tiny-ImageNet shows an \(8.52\%\) decrease from \(49.81\%\) to \(41.29\%\), mainly because PCS also acts as an early stopping mechanism. More results are reported in the supplementary.
### Robustness on Out-of-Distribution(OoD) Datasets
A well-calibrated model helps improve the model robustness on OoD datasets [Thulasidasan _et al._, 2019]. However, temperature scaling is known to be fragile under dataset distribution shift [Ovadia _et al._, 2019]. PCS forms innately calibrated models and thus performs well on OoD datasets. We utilize AUROC (the higher the better) to evaluate the robustness under dataset shift. Table 2 shows the AUROC (\(\%\)) computed for models trained on CIFAR-10 and tested on the OoD datasets SVHN and CIFAR-10-C. Our method achieves competitive results in almost all cases. The results after temperature scaling tend to drop in general, and approaches yielding better pre-temperature-scaling ECE have better robustness on OoD datasets. Although focal loss works well for calibration, it fails under dataset shift. Our PCS with ResNet-50 on CIFAR-10 achieves a \(4\%\) increase compared to previous methods.
### Searching Results
We visualize the searching results of ResNet-50 on CIFAR-10 in Figure 2. The searching results are obtained from well-fitting (test set loss \(<\) 0.2) PCs. The result shows that the last convolutional block (Conv Block 4) tends to choose predecessors in the first half of training (epochs \(50\) to \(180\)) while Conv Block 2 and Conv Block 3 prefer predecessors in the second half (epochs \(150\) to \(350\)). This observation might indicate that the later Conv Blocks tend to overfit earlier than the former ones. The first Conv Block and the final linear block show no particular preference for certain predecessors.
This result is consistent with the evaluation of the overfitting of individual blocks in Figure 1. Conv Block 1 suffers little overfitting throughout training and thus has no preference for certain predecessors, while the middle blocks (Conv Blocks 2, 3, 4) prefer predecessors with a low validation loss, as shown in Figure 1. We also visualize the searching results of other models and observe a similar pattern. The visualization is reported in the supplementary.
Table 2: **Robustness on Dataset Shift.** AUROC \((\%)\), the higher the better, is evaluated for different methods with models shifting from CIFAR-10 (in-distribution) to SVHN and CIFAR-10-C as the OoD datasets. (Methods: Weight Decay, Brier Loss, MMCE, Label Smoothing, FL-3, FLSD-53, and PCS; models: ResNet-50, ResNet-110, Wide-ResNet-26-10, and DenseNet-121.)
### Ablation Study
**Comparison with Other Searching Methods** In Table 3, our method is compared with different searching methods as well as early stopping methods. When early stopping is applied to the model as a whole, it is hard to ensure a low error and good calibration performance at the same time. Early stopping on loss and on ECE shows a large performance drop in classification error. The random search is conducted 5 times to make the results stable and achieves relatively good classification error but not ECE. We also compare the multiple objectives in Eq. (9) with a single-objective optimization on the validation NLL loss, i.e., \(\min_{A}N\hat{L}L\). Searching on ECE or error individually can lead to extremely unbalanced results. PCS achieves better results on both test set error and ECE. The hyperparameter \(\lambda\) depends on the model and dataset, and we use \(\lambda=25\) for ResNet-50 on CIFAR-10. We tune \(\lambda\) with multiple values and find a suitable \(\lambda\) for each setting. The settings for each model are given in the supplementary.
**Weight Sampling Strategy** To compare the sampling strategies discussed in the previous section, the scatter plot in Figure 3 shows the searching results. Note that "Full Sampling" indicates searching over all possible predecessors without sampling, i.e., \(K=T_{train}\). We use the same \(K=50\) for all sampling strategies. From Figure 3, we observe little difference between the sampling strategies, which indicates that we can save storage space with little loss of performance. We therefore use the random sampling strategy throughout the paper due to its simplicity.
**Warm-up Population** A larger population means more searching time. To find a balance between searching time and performance, we compare the searching results with different population sizes. We use the number of well-fitting results (test loss under 0.2) to measure the searching performance. All experimental results are averaged over 5 runs. According to Table 4, the larger the population size, the more well-fitting results can be found, since the estimator \(\psi\) has better prior knowledge of the error and ECE landscape. However, we use a smaller population size as long as the searching provides satisfactory results. Throughout the paper, we use the population size \(\mathcal{S}=100\).
## 6 Conclusion
In this paper, we address a common problem, the miscalibration in modern neural networks. We observe that different blocks in a network have different overfitting patterns. Our proposed predecessor combination search, as a regularization method, is very effective for calibrating models and can also be potentially applied to other tasks such as learning with noisy labels and improving model robustness.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Population Size** & **64** & **100** & **200** & **500** \\ \hline
**GPU Hours** & 3.1 & 3.7 & 5.5 & 10.5 \\
**Well-fitting Results** & 14.8 & 16 & 17.8 & 25.8 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Warm-up Population**. Well-fitting results indicate the number of searching results that achieve a test loss under 0.2 (the higher the better). All results are reported as the average of 5 runs with ResNet-50 on CIFAR-10.
Figure 3: **Comparison of Sampling Strategies**. All experiments are conducted over 5 runs with ResNet-50 on CIFAR-10. The searching step \(T_{se}\) is set to \(100\), producing 500 searching results for each sampling strategy. The metrics along the x-axis and y-axis are the lower the better.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Methods** & **ERR** & **ECE** & **ECE(T)** \\ \hline
**Early Stopping (Loss)** & 6.58 & 2.32 & 0.56(1.4) \\
**Early Stopping (Error)** & 4.89 & 4.03 & 1.42(2.4) \\
**Early Stopping (ECE)** & 16.92 & 1.71 & 1.09(1.2) \\
**Random Search** & 5.04 & 1.13 & 0.84(1.1) \\
**Search on Loss** & 5.43 & 1.1 & 0.37(1.10) \\
**PCS (\(\lambda\)=10)** & 5.31 & 0.86 & 0.73(1.10) \\
**PCS (\(\lambda\)=25)** & 5.1 & 0.75 & 0.37(1.10) \\
**PCS (\(\lambda\)=50)** & 5.25 & 0.58 & 0.58(1.0) \\
**PCS (\(\lambda\)=100)** & 5.29 & 0.70 & 0.44(1.10) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Comparison of Searching Methods**. Random search is conducted 5 times with ResNet-50 on CIFAR-10. All searching results are selected by the lowest testing loss.
Figure 2: **Searching Results**. The sub-figures show the predecessor choice frequency of different blocks in ResNet-50. The search results are filtered by cross-entropy loss (being less than 0.2) on the test set. |
2303.11376 | GNN-Ensemble: Towards Random Decision Graph Neural Networks | Graph Neural Networks (GNNs) have enjoyed widespread applications in
graph-structured data. However, existing graph-based applications commonly lack
annotated data. GNNs are required to learn latent patterns from a limited
amount of training data to perform inferences on a vast amount of test data.
The increased complexity of GNNs, as well as a single point of model parameter
initialization, usually lead to overfitting and sub-optimal performance. In
addition, it is known that GNNs are vulnerable to adversarial attacks. In this
paper, we push one step forward on the ensemble learning of GNNs with improved
accuracy, generalization, and adversarial robustness. Following the principles
of stochastic modeling, we propose a new method called GNN-Ensemble to
construct an ensemble of random decision graph neural networks whose capacity
can be arbitrarily expanded for improvement in performance. The essence of the
method is to build multiple GNNs in randomly selected substructures in the
topological space and subfeatures in the feature space, and then combine them
for final decision making. These GNNs in different substructure and subfeature
spaces generalize their classification in complementary ways. Consequently,
their combined classification performance can be improved and overfitting on
the training data can be effectively reduced. In the meantime, we show that
GNN-Ensemble can significantly improve the adversarial robustness against
attacks on GNNs. | Wenqi Wei, Mu Qiao, Divyesh Jadav | 2023-03-20T18:24:01Z | http://arxiv.org/abs/2303.11376v1 | # GNN-Ensemble: Towards Random Decision Graph Neural Networks
###### Abstract
Graph Neural Networks (GNNs) have enjoyed widespread applications in graph-structured data. However, existing graph-based applications commonly lack annotated data. GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data. The increased complexity of GNNs, as well as a single point of model parameter initialization, usually lead to overfitting and sub-optimal performance. In addition, it is known that GNNs are vulnerable to adversarial attacks. In this paper, we push one step forward on the ensemble learning of GNNs with improved accuracy, generalization, and adversarial robustness. Following the principles of stochastic modeling, we propose a new method called GNN-Ensemble to construct an ensemble of random decision graph neural networks whose capacity can be arbitrarily expanded for improvement in performance. The essence of the method is to build multiple GNNs in randomly selected substructures in the topological space and subfeatures in the feature space, and then combine them for final decision making. These GNNs in different substructure and subfeature spaces generalize their classification in complementary ways. Consequently, their combined classification performance can be improved and overfitting on the training data can be effectively reduced. In the meantime, we show that GNN-Ensemble can significantly improve the adversarial robustness against attacks on GNNs.
graph neural networks, ensemble learning, random decision making, overfitting, adversarial robustness
## I Introduction
Graphs are widely used to model many real-world relationships, ranging from social and rating networks [1], financial transactions [2], and healthcare [3] to protein and molecule networks [4, 5]. The graph neural network (GNN) [6, 7] has become the mainstream methodology for deep learning on graphs. GNNs have shown promising performance in various applications on graph data, for example, identifying components in protein molecules via node classification, detecting anomalies in financial transactions via link prediction, and finding interest groups via community detection. In a GNN, each node in the graph builds its representation based on its own features as well as the features of its neighbors through message passing. The representation of the node is updated using an aggregation of these messages.
While GNNs can obtain reasonably good accuracy on many graph learning tasks, their increased representational capacity comes from higher model complexity, which can lead to overfitting and therefore weaken the generalization ability. In both transductive and inductive settings, graph data typically come with a small amount of labeled training data and a large amount of unlabeled test data, so a trained GNN may easily overfit the small training set. In the meantime, although current GNN models sample the neighborhood of the target node and aggregate the neighborhood information into its representation, a single trial of model parameter initialization could converge to a local optimum. DropOut [8] is a popular regularization technique for deep learning models to avoid overfitting. However, for GNNs, [7] indicates that DropOut alone is ineffective in preventing overfitting. This is partially because, when a GNN becomes very deep, Laplacian smoothing makes all nodes' representations converge to a stationary point such that they are no longer related to the node features [9].
Additionally, like other deep learning methods, GNNs are vulnerable to adversarial attacks [10]. If the attacker maliciously adds or drops edges concerning a target victim node, the prediction related to that node may change. For example, a malicious merchant may link her products to specific items in order to fool a graph recommendation system into giving her higher exposure. In another example, a malicious user may manipulate her profile to build connections with targeted users to mislead the friend recommendation system. Such vulnerability poses serious threats to GNNs, especially in safety-critical scenarios.
Ensemble learning has been widely used to boost the generalization and training stability with the combined wisdom [11, 12]. Deep neural network ensembles have gained increasing popularity in the deep learning community for improving the accuracy performance and robustness [13, 14, 15]. It is equally important to improve the accuracy performance of the GNN models as well as their robustness against adversarial attacks. In this paper, we aim to shed some insight into the following research questions: (1) Can we utilize an ensemble of GNN models, despite their different local optimal results, to achieve better classification performance? (2) Can we generate GNN ensembles with better generalization power and reduce overfitting? (3) Can we increase the robustness of GNN against adversarial attacks via GNN ensemble?
In this paper, we present GNN-Ensemble, an ensemble learning approach that combines multiple GNN models trained with both randomly selected substructures in the topological
space and randomly selected subfeatures in the feature space. The development of GNN-Ensemble consists of three main steps. First, we subsample the graph structure for subgraphs and the node features for subfeatures, assuming the graph is attributed. Second, we train multiple base GNN models, e.g., GraphSage [4], with different sets of subgraphs and subfeatures for independent decision making. The third and final step is to aggregate the multiple GNN decisions with ranking or voting. By the theory of stochastic modeling with a discriminant function for joint decision making [16], these GNNs in different substructure and subfeature spaces generalize their classification in complementary ways. Consequently, their combined classification can be monotonically improved and overfitting on the training data can be reduced. With experimental results on four benchmark graph datasets1 (two small graphs, Cora and PubMed, and two large graphs, PPI and Reddit), we demonstrate that the proposed GNN-Ensemble method is general and can be used with many other backbone GNN models. GNN-Ensemble improves the accuracy performance, reduces overfitting, and improves the robustness against adversarial attacks when compared to a single GNN model.
Footnote 1: [https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.html](https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.html)
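As a hedged illustration of the three steps, the sketch below uses numpy index masks and treats the base-GNN trainer and predictor as caller-supplied callables; the sampling rates, the function names `fit` and `predict`, and the plurality-vote aggregation are our own simplifications, not a fixed specification of GNN-Ensemble.

```python
import numpy as np

def train_gnn_ensemble(A, X, y, train_idx, fit, n_models=10,
                       node_rate=0.8, feat_rate=0.8, seed=1):
    """Steps 1-2: sample substructures/subfeatures, then train base GNNs.
    `fit(A_sub, X_sub, y_sub, idx_sub)` is a caller-supplied trainer for
    any backbone (e.g., GraphSage); nothing here fixes the GNN choice."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    models = []
    for _ in range(n_models):
        nodes = np.sort(rng.choice(n, int(node_rate * n), replace=False))
        feats = np.sort(rng.choice(d, int(feat_rate * d), replace=False))
        A_sub = A[np.ix_(nodes, nodes)]       # induced substructure
        X_sub = X[np.ix_(nodes, feats)]       # subfeature view
        keep = np.isin(nodes, train_idx)      # labeled nodes that survived
        models.append((fit(A_sub, X_sub, y[nodes][keep],
                           np.where(keep)[0]), feats))
    return models

def ensemble_predict(models, A, X, predict):
    """Step 3: plurality vote over the base models' class predictions.
    `predict(model, A, X_view)` returns one integer class id per node."""
    votes = np.stack([predict(m, A, X[:, feats]) for m, feats in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```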
The rest of the paper is organized as follows. Section II reviews the related work in GNNs, adversarial attacks and defense on the graph, and ensemble learning. In Section III, we provide a preliminary of the problem formulation and graph neural networks. In Section IV, we present the systematic creation of the GNN-Ensemble method with graph structure and node feature sampling and a discussion on diverse model generation. We further define the discriminant function, followed by the rules on decision aggregation and complexity analysis. Experimental results are reported in Section V. We finally conclude in Section VI.
## II Related Work
**Graph Neural Networks.** GNNs apply deep learning to graph data by capturing local graph structure and feature information in a trainable fashion to derive powerful node representations [17]. Over the past few years, GNNs have achieved great success in solving machine learning problems on graph data. Under the umbrella of Graph Convolutional Networks (GCNs), there are two main families of GNNs for graph convolution, i.e., spectral methods and spatial methods. For the former, [18] uses the Fourier bases of a given graph for graph convolution. [19] utilizes Chebyshev polynomials as the convolution filter. [7] proposes GCN based on the first-order approximation of spectral graph convolution. For the latter, graph convolution aggregates and transforms local information of a given node in the spatial domain. GraphSage [4] learns aggregators by sampling and aggregating neighbor information. FastGCN [20] proposes an ad hoc sampling scheme to restrict the neighborhood size. ClusterGCN [21] restricts the neighborhood search within a subgraph identified by a graph clustering algorithm. [22] restricts neighborhood size by requiring only two support nodes in the previous layer. Graph Attention Network (GAT) [23] introduces the attention mechanism for neighbor information aggregation. GraphSaint [24] preprocesses the training graph by subsampling, similar to our substructure sampling for base model training. However, the subgraph sampling in GraphSaint serves as the bootstrap sampling for training a single GCN model while our subgraph sampling is used for bagging. For a thorough review, we refer readers to recent surveys [25, 26].
Overfitting reduction of GNNs is usually discussed with over-smoothing reduction [9]. DropEdge [27] removes edges uniformly as a data augmentation technique for node classification. GraphCrop [28] crops the contiguous subgraphs from the original graph to generate novel and valid augmented graphs. [29] introduces DropOut as a Bayesian approximation and uses Monte Carlo estimation of GNN outputs to evaluate the predictive posterior uncertainty. While these approaches focus on enabling deep GCN with more layers by reducing over-smoothing and gradient vanishing, our work aims to reduce overfitting on training data to improve the node classification performance of existing GNN methods. Our method is motivated by the relation between decision tree and random forests.
**Ensemble Learning.** Ensemble learning has been widely adopted to improve the accuracy, stability, and robustness of machine learning systems. The main idea of ensemble learning is to construct a set of individual machine learning models (i.e., base models) and combine their predictions to yield better performance. Ensemble learning can be categorized into three threads. The first trains tens or hundreds of weak models for decision making, such as bagging [11] and boosting [30]. For example, random forests [12] is the most influential bagging approach while XGBoost [31] is the most well-known boosting method. The dropout operation and the layerwise feedforward process of neural networks benefit from such a decision ensemble, with improved accuracy and reduced overfitting. The second aims to identify high-quality ensembles from a pool of strong deep learning base models with high individual model accuracy and different neural network backbones [13]. Since different deep learning models converge to different local minima [32], ensembles of diverse high-quality deep neural networks are typically used to improve accuracy and adversarial robustness [14, 15]. The third centers on voting methods during decision aggregation, such as simple averaging, majority voting, and plurality voting. Existing GNN ensembles leverage multiple GNN learners for domain-specific applications. [33] trains multiple GNNs on molecular graphs for molecular property prediction. [34] models the comorbidity statistics from ensembles of CNN models as a graph and uses a GNN for chest radiograph screening. The work most related to ours leverages the idea of AdaBoost [35] for deeper graph learning. Different from all the existing works, our proposed GNN-Ensemble integrates all three threads of efforts in ensemble learning. We sample the substructure and subfeatures of the original graph and train hundreds of base models for decision making using the bagging method. Different voting methods can be exploited in GNN-Ensemble during
the decision aggregation phase. Our method is further open to optimizing with diverse base model selection for high-quality ensemble.
**Adversarial attacks and defenses on graphs.** GNNs are known to be vulnerable to adversarial attacks. The adversary can mislead GNNs into making wrong predictions by manipulating a few edges and nodes or by modifying node features. Adversarial attacks on graphs can be performed at the training phase as poisoning [36, 37] or at the prediction phase as evasion [38, 39, 40, 10, 41]. We refer readers to the latest survey for a comprehensive overview [42]. Since adversarial robustness is considered an effective tool for analyzing both the theoretical properties and the practical accountability of a model [43], we study robustness against adversarial attacks on graphs in addition to accuracy. We focus on adversarial attack mitigation at the inference phase.
## III Preliminary
Our method draws its theoretical foundation from random forests [16], whose discriminant power comes from stochastic modeling for solving a classification problem in a given feature space. We therefore first identify the information needed to construct a base model, and then provide an overview of how a single base GNN model utilizes this information for graph learning.
### _Graph Information_
**Two types of graph information.** Given a graph \(G=(V,E)\) with the node set \(V\) and the edge set \(E\), an attributed graph contains two types of information, i.e., structure information which describes the connectivity of nodes, and node feature information which describes the attributes of nodes. The edge connections can be represented by an adjacency matrix: \(A\in\{0,1\}^{n\times n}\), where \(n=|V|\), the total number of nodes. Let \(d\) denote the dimension of node features. The node feature matrix is denoted as \(F\in\mathbb{R}^{n\times d}\). **Figure 1** illustrates the two types of graph information in an academic collaboration graph. On the left side, each author is represented as a node, and the collaboration relationship is modeled as an edge. The table on the right side lists the attributes of each author, such as the number of published papers, number of citations, and research interests.
**Node classification.** We focus on the task of node classification in this paper. Given a graph with some labeled nodes, the classification task predicts the classes of the unlabeled nodes. There are two settings for node classification, i.e., the transductive and inductive settings.
* The transductive setting observes all the data beforehand, including both the training and test datasets. We learn from the already labeled training data and then predict the node labels of the test data. Even though we do not know the labels of the test data, we can still leverage the graph connections and additional information present in the test data during the training process.
* The inductive setting is about learning general rules from observed training data. The test data is disjoint from the training data and unseen during the training process. The classification model is trained with only the training dataset. The trained model is then applied to predict the labels of a test dataset.
### _Graph Neural Networks_
Given a graph \(G=(V,E)\), a GNN takes the following two types of information as input: (1) an input feature matrix \(F\), and (2) an adjacency matrix \(A\). The forward propagation rule of a GNN determines how information flows from the input to the output of the neural network. We cover the commonly used shallow-layer GNNs, which typically have three layers: the input layer, the hidden layer for latent representations, and the output layer. Given a node \(v\), its representation at each layer can be modeled as:
\[h_{v}^{l}=\sigma\bigg(\mathcal{W}_{l}\sum\nolimits_{u\in\eta(v)}\frac{h_{u}^{l-1}}{|\eta(v)|}+\mathcal{B}_{l}h_{v}^{l-1}\bigg),\]
where \(l=1,\dots,L\) is the layer number, \(\sigma\) is the ReLU activation function, and \(\eta(v)\) is the set of neighbors of node \(v\). \(\mathcal{W}\) and \(\mathcal{B}\) are the weight and bias of the GNN model, respectively. The first term on the right-hand side averages over the neighbors of node \(v\), and the second term is the previous-layer embedding of node \(v\) multiplied by the bias \(\mathcal{B}_{l}\). The output is the embedding obtained after \(L\) layers of neighborhood aggregation.

Fig. 1: Topological and node attribute information.

GraphSage [4] modifies this simple neighborhood aggregation as
\[h_{v}^{l}=\sigma\Big(\big[\mathcal{W}_{l}\cdot\mathrm{AGG}\big(\{h_{u}^{l-1},\,\forall u\in\eta(v)\}\big),\,\mathcal{B}_{l}h_{v}^{l-1}\big]\Big),\]
where \(\mathrm{AGG}\) denotes the aggregation function. The other GNN models used in this paper, e.g., FastGCN [20], ClusterGCN [21], GraphSaint [24], and GAT [23], introduce graph convolution layers, graph sampling, attention mechanisms, and alternative techniques to refine the simple neighborhood aggregation rule for more generalized and advanced graph learning.
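To make the aggregation rules above concrete, the following is a minimal dense-matrix sketch in Python (PyTorch); it is illustrative only, not the library implementation used in the experiments, and the class name and dense adjacency input are our own choices.

```python
import torch
import torch.nn as nn

class MeanAggregationLayer(nn.Module):
    """One layer of the simple neighborhood-aggregation rule above,
    assuming a dense binary adjacency matrix A without self-loops."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)  # neighbor transform W_l
        self.B = nn.Linear(in_dim, out_dim, bias=False)  # self transform B_l

    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # H: [n, in_dim] previous-layer embeddings; A: [n, n] adjacency matrix
        deg = A.sum(dim=1, keepdim=True).clamp(min=1)  # |eta(v)|, avoid div by 0
        neigh_mean = (A @ H) / deg                     # average neighbor features
        return torch.relu(self.W(neigh_mean) + self.B(H))  # sigma = ReLU
```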
## IV Systematic Creation of GNN Ensemble
In this section, we will present the proposed GNN-Ensemble method. We will first introduce the substructure sampling and subfeature sampling, followed by a discussion on the model diversity. Then, we will define the discriminant function and discuss the decision aggregation rules. Finally, we will analyze the time and space complexity for GNN-Ensemble.
### _GNN Ensemble_
**Substructure and Subfeature Sampling.** The underlying idea behind GNN-Ensemble is to first generate many base models and then combine these base models to form a new stochastic model. The key point is that the "projectability" of the combined stochastic model is comparable to that of the base models from which it is built, where projectability is defined as one minus the difference in performance between training and test (the larger the projectability, the more projectable the model) [44]. Our method for creating multiple GNN base models is to construct GNNs in randomly selected subspaces of the feature space as well as on randomly selected subgraphs of the topological space. For a given topological space with \(n\) nodes and \(m\) edges, and a feature space of \(d\) dimensions, there are in total \(2^{(n+m+d)}\) subspaces in which a base GNN can be constructed. We use randomization to introduce differences among classifiers, i.e., we randomly select the substructures and subfeatures.
A base GNN model is trained in each selected substructure and subfeature space. Each of these base GNN models classifies the training data with high accuracy, aiming to discern between inputs of different classes while not discerning between training and test data of the same class [44]. Consequently, the classification is invariant for points that differ from the training points only in the unselected dimensions [16]. Thus, each base GNN generalizes its classification in a different way. The vast number of substructures and subspaces provides more choices than can be used in practice, and the performance of the stochastic model on the training set can be made arbitrarily good if enough base models are used.
With substructure and subfeature sampling, we generate different random subgraphs from the original graph. The randomness and diversity of these input graphs can serve as a form of data augmentation. In addition to the understanding of the ensemble from random forests, we provide another intuitive understanding of GNN-Ensemble from the perspective of graph learning. The key step in a GNN model is to aggregate neighbor information for each node as a weighted sum of the neighbor features (the weights are associated with the edges). Approaches that use neighborhood sampling in GNNs, such as GraphSage and ClusterGCN, already perform random subset aggregation instead of full aggregation during GNN training. In GNN-Ensemble, when GraphSage and ClusterGCN are used as backbone models, the neighbor aggregation becomes even sparser. Therefore, the trained GNN model is less likely to memorize random error patterns encoded in the training data.
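As an illustration of this sampling step, a minimal sketch is given below; the function name and the default fractions are hypothetical and simply mirror the percentages studied in the experiments.

```python
import torch

def sample_substructure_subfeature(A, F, alpha=0.7, beta=0.5, seed=None):
    """Randomly keep an alpha-fraction of nodes (with their induced edges)
    and a beta-fraction of feature dimensions for one base GNN model.
    A: [n, n] dense adjacency matrix; F: [n, d] node feature matrix."""
    g = torch.Generator().manual_seed(seed) if seed is not None else None
    n, d = F.shape
    nodes = torch.randperm(n, generator=g)[: int(alpha * n)]
    feats = torch.randperm(d, generator=g)[: int(beta * d)]
    A_sub = A[nodes][:, nodes]   # adjacency of the induced subgraph
    F_sub = F[nodes][:, feats]   # subfeature matrix of the kept nodes
    return A_sub, F_sub, nodes, feats
```

Each of the \(k\) base models is then trained independently on its own sampled pair, which makes the procedure trivially parallelizable.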
**Model Diversity.** The randomness in the selected substructures and subfeatures initializes the training algorithms with different input data and therefore eventually yields different classifiers. Similar to the decision trees in random forests, a base GNN model can also achieve very high accuracy on the training data. This may lead to overfitting [45], although much less than for a single GNN model or a GNN ensemble in which each model is trained with the full structure and feature space. The best scenario is when all base GNNs of an ensemble learn and predict with uncorrelated errors; then simple averaging can effectively reduce the average error of a base model. However, with a sufficiently large pool of base GNN models in the ensemble, it is unrealistic to assume that the errors of these individual base learners are completely uncorrelated. The worst scenario is the other end of the spectrum: the errors are duplicated across all base GNN models, which we aim to avoid. While our focus is on overfitting reduction with substructure and subfeature sampling, model diversity obtained by exploring different model architectures and post-processing diversity improvement [13] for GNNs can be considered in future work.
Our approach is fundamentally different from the existing GNN ensemble literature [33, 34] and implementations. These methods train multiple GNN models on the full graph and feature space and perform ensemble learning based on model stacking. In contrast, our method trains each base GNN model on subgraphs sampled from the power set of the graph structure and node feature space. **Figure 2** illustrates an overview of GNN-Ensemble during training and at inference for the node classification task. In the training pipeline, we first perform substructure sampling and subfeature sampling, and then train multiple GNN models, one in each sampled substructure and subfeature space. During the test phase, a test node is submitted to GNN-Ensemble and is evaluated with each base model. Finally, we aggregate these independent predictions and make the final decision on the class label of the test node. Our method is especially suitable for parallel implementation of both the
generation and evaluation of base models in the ensemble.
### _The Discriminant Function_
Similar to the relation between decision trees and random forests [16], the theoretical foundation of the discriminant power of GNN-Ensemble is built on the theory of stochastic modeling.
For a given node \(x\), let \(g_{j}(x)\) be the GNN trained on the \(j\)-th sampled subgraph \(G_{j}\) \((j=1,2,\dots,k)\), and let \(P(c\mid g_{j}(x))\) denote the posterior probability that \(x\) is predicted as class \(c\) \((c=1,2,\dots,s)\). This posterior,

\[P(c\mid g_{j}(x))=\frac{P(c,g_{j}(x))}{\sum_{c^{\prime}=1}^{s}P(c^{\prime},g_{j}(x))},\]

can be estimated by the fraction of class-\(c\) nodes among all nodes assigned to the decision region of \(g_{j}(x)\).
For an unseen node, our method averages the posterior probabilities \(P(c\mid g_{j}(x))\) conditioned on each of the independently trained GNN models. Accordingly, the discriminant function is defined as:

\[d_{c}(x)=\frac{1}{k}\sum\nolimits_{j=1}^{k}P(c\mid g_{j}(x)).\]

The decision rule assigns \(x\) to the class \(c\) for which \(d_{c}(x)\) is maximal. Geometrically, each GNN model defines a neighborhood around the decision region of that node in the chosen subfeature and substructure space.
By averaging the posterior probabilities over these neighborhoods (decision regions), the discriminant function approximates the posterior probability of a given input in the original structure and feature space. Averaging over GNNs lowers the overall variance and prediction error. Similar to random forests [12], by the Law of Large Numbers [44], the variance of the generalization error of GNN-Ensemble decreases toward zero as more GNNs are added to the ensemble. Therefore, GNN-Ensemble is an effective method for overcoming the overfitting problem.
### _Decision Aggregation_
GNN-Ensemble makes the final decision by combining the individual predictions of all base GNN models via a committee consensus method. With the decisions made by the \(k\) GNN models separately trained on different substructures of the graph and subfeatures of the nodes, these decisions are aggregated using a voting method. Three common voting methods are described below.
* Hard voting: the final decision is determined by the majority vote of the prediction results from all decision-makers. If two classes have the same number of votes, we will compare their average prediction confidence and output the higher one.
* Soft voting: the final decision is determined by the highest confidence over all classes after averaging the prediction probability vectors of all decision makers. Unlike hard voting, which is based on the discrete decisions made by the decision makers, soft voting averages the confidence (prediction probability vector) of the decision makers for decision making.
* Weighted voting: the final decision is determined by a weighted combination of decisions. For hard voting, this can be done by weighting each base model by its past accuracy (equal weights are assigned to all base GNN models at the beginning). For soft voting, the weights can be set in proportion to the prediction confidence.
Across these decision aggregation approaches, we observe that no single voting method consistently outperforms the others. In particular, hard voting and soft voting differ only slightly (\(\pm 0.2\%\)). Therefore, we report only the results based on soft voting for brevity.
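For concreteness, a minimal sketch of the soft and hard voting rules is given below; soft voting directly realizes the averaged discriminant function defined earlier, and the small tie-breaking constant is our own choice.

```python
import torch
import torch.nn.functional as Fn

def aggregate_predictions(probs: torch.Tensor, method: str = "soft"):
    """probs: [k, n, s] class-probability tensors of k base models
    for n test nodes and s classes; returns [n] predicted labels."""
    if method == "soft":
        return probs.mean(dim=0).argmax(dim=-1)   # average confidences, then pick
    if method == "hard":
        votes = probs.argmax(dim=-1)              # [k, n] per-model decisions
        counts = Fn.one_hot(votes, probs.shape[-1]).sum(dim=0)  # [n, s] vote counts
        # break ties toward the class with higher mean confidence
        return (counts + 1e-6 * probs.mean(dim=0)).argmax(dim=-1)
    raise ValueError(f"unknown voting method: {method}")
```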
Fig. 2: An overview of GNN-Ensemble for node classification. For the transductive setting, the input training graph and input test graph form the full graph.
### _Complexity Analysis_
We analyze and compare the time and space complexity of GNN-Ensemble with different GNN models as the backbone, based on the complexity analysis in [21]. For readability, **Table I(a)** lists all the notations used in the analysis. Taking GraphSage [4], FastGCN [20], and ClusterGCN [21] as examples, **Table I(b)** shows the complexity analysis. We make the following two observations:
1. While we have \(k\) base GNN models in the ensemble and the time complexity is multiplied by a \(k\) factor, GNN-Ensemble can run training and evaluation in parallel.
2. By simply stacking multiple GNN models, GNN-Ensemble introduces additional memory overhead compared to the single-model setting, just like all other ensemble models. However, with a smaller subgraph percentage \(\alpha\) and subfeature percentage \(\beta\), the overall time and space complexity can be significantly reduced.
## V Experiment
In this section, we discuss and analyze the performance of the proposed GNN-Ensemble method. We conduct thorough experiments to validate the superiority of our method from three aspects: (i) improvement of prediction accuracy, (ii) alleviation of overfitting, and (iii) robustness against adversarial attacks. Before diving into the experimental results and analysis, we start with the experimental setup.
### _Experiment Setup_
In this section, we briefly describe the experimental setup, including datasets, evaluation metrics, and baseline methods.
**Dataset.** We evaluate the proposed GNN-Ensemble method on four benchmark graph datasets: two of a small scale, i.e., Cora and PubMed, and two of a larger scale, i.e., PPI and Reddit. Cora and PubMed are citation datasets, the protein-protein interaction (PPI) dataset is a multi-label classification dataset, and Reddit is a large social network dataset. Detailed information about these datasets is shown in **Table II**. For the split of training and test data, we follow the original setting in [7]. The Cora and PubMed datasets are used for transductive learning, while the PPI and Reddit datasets are used for inductive learning. For all four datasets, we perform the node classification task.
**Evaluation metrics.** Due to the class imbalance in the datasets, we use micro-F1 instead of accuracy for measurement. The F1 score is the harmonic mean of precision and recall:

\[F1=\frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}.\]
Precision is the ratio \(\frac{tp}{(tp+fp)}\) of correctly identified positive samples in all predicted positive samples, where \(tp\) is the number of true positives and \(fp\) is the number of false positives. Recall is the ratio \(\frac{tp}{(tp+fn)}\) of correctly identified positive samples in all observed positive instances, where \(fn\) is the number of false negatives.
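A minimal sketch of the micro-averaged F1 computation for the single-label case is shown below; for the multi-label PPI setting, \(tp\), \(fp\), and \(fn\) are pooled per label in the same way, and the sketch assumes at least one predicted and one observed positive.

```python
import numpy as np

def micro_f1(y_true: np.ndarray, y_pred: np.ndarray, num_classes: int) -> float:
    """Micro-F1: pool tp, fp, fn over all classes, then compute P, R, F1."""
    tp = fp = fn = 0
    for c in range(num_classes):
        tp += np.sum((y_pred == c) & (y_true == c))
        fp += np.sum((y_pred == c) & (y_true != c))
        fn += np.sum((y_pred != c) & (y_true == c))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```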
**Baseline methods.** We use GraphSage [4] as the primary backbone in the evaluation, and we plug FastGCN [20], ClusterGCN [21], GraphSAINT [24, 46], and GAT [23] into our method as a demonstration of its use as a universal add-on. We take these models from the PyTorch Geometric package³ and use their default hyperparameter settings in the package. In particular, the GraphSage model has three layers with a hidden dimension of 256.
Footnote 3: [https://github.com/pyg-team/pytorch_geometric](https://github.com/pyg-team/pytorch_geometric)
### _Accuracy Improvement_
In this set of experiments, we first show the accuracy improvement of GNN-Ensemble with subfeature sampling on the full graph structure (i.e., no substructure sampling), using GraphSage as the base GNN model. We then extend the GNN-Ensemble method with the best parameter setting to FastGCN, ClusterGCN, GraphSAINT, and GAT. We consider a single GraphSage model trained with the full graph and all node features as the baseline. We randomly sample 10%, 30%, 50%, and 70% of the original node features, and also keep the full set of original features (i.e., 100%). Meanwhile, we build a large number of GNN base models for the ensemble; empirically, we set the number of base models to 100. **Table III** shows the results with node feature sampling on the full graph structure. We make four observations below:
| | Cora | PubMed | PPI | Reddit |
| --- | --- | --- | --- | --- |
| # nodes | 2708 | 19717 | 56928 | 232965 |
| # edges | 5429 | 44338 | 1587264 | 114615892 |
| # features | 1433 | 500 | 50 | 602 |
| # classes | 7 | 3 | 121 (multi-label) | 41 |
| # training | 140 | 60 | 44809 | 153431 |
| # testing | 1000 | 1000 | 5525 | 55703 |
| type | transductive | transductive | inductive | inductive |

TABLE II: Benchmark datasets and settings.
| Notation | Meaning |
| --- | --- |
| \(l\) | # layers in GNN |
| \(k\) | # base models in ensemble learning |
| \(n\) | # nodes in the graph |
| \(m\) | # edges in the graph |
| \(\alpha\) | % subgraph |
| \(d\) | # features |
| \(\beta\) | % subfeature |
| \(b\) | batch size |
| \(r\) | # sampled neighbors |

TABLE I: Time and space complexity analysis. \(\alpha^{*}\) denotes the percentage of edges preserved due to the subgraph sampling.
* When the sampling percentage of node features is 100% (i.e., using the full set of original features and the full graph structure), GNN-Ensemble stacks 100 GraphSage models. Its classification accuracy on the Cora dataset is 0.797, while the accuracy of a single GraphSage model is 0.789. Simply stacking 100 GNN models already improves the performance, and the randomness introduced by node subfeature sampling further boosts the improvement.
* GNN-Ensemble improves the performance when there is rich node feature information in the dataset. Specifically, GNN-Ensemble achieves more than 1% improvement on Cora, which has 1433 node features, and 0.4% on PubMed, which has 500 node features.
* Although GNNs do not act directly on the hyperplane of each node feature like the decision trees in random forests, GNN-Ensemble can still benefit from sampling node features. Empirically, the boost in accuracy is maximized when the subfeature sampling rate falls within a certain range, i.e., 30% \(\sim\) 70% of the total features in our measurements.
* Even with as little as 10% of the total features, the resulting GNN-Ensemble model can be nearly as accurate as the model with full features.
Then we look at the combined impact of the substructure sampling and subfeature sampling. **Table IV** shows the results of different percentages of subfeatures with 30% and 70% sampled graph structures when training a base GNN model in GNN-Ensemble. That is, we keep only 30% and 70% of the nodes in the full graph and their corresponding edges, and vary the sampling percentage of node features at the same time. We make three observations:
* Compared to Table III, when 70% of the graph nodes are randomly sampled, GNN-Ensemble on the Cora dataset consistently outperforms GNN-Ensemble with only subfeature sampling. GNN-Ensemble with both substructure and subfeature sampling is able to improve over the single-model baseline as long as we maintain a sufficient percentage of the graph structure for each model.
* The size of the subgraph affects the overall performance even with full node features. Empirically, we find that a subgraph size of 70% is a reasonable setting for preserving the needed structural information. Keeping only 30% of the nodes removes too much graph information and therefore lowers the F1 score.
* Another merit of GNN-Ensemble is privacy protection. Suppose each party holds a subgraph (with subfeatures) and has no access to the full graph due to privacy restrictions; we can then apply GNN-Ensemble to train a GNN at each party and ensemble them for the final predictions. This is similar to the vertical federated learning setup [47].
Next, we investigate the generalization of GNN-Ensemble using other types of GNN models as backbones. Based on the aforementioned results, we use the best setting of 70% of sampled graph structures and 50% of sampled node features to train the GNN base model in the ensemble. In practice, the sampling percentages of structure and feature are hyper-parameters and can be optimized through cross-validation. **Table V** shows the result. We make three observations.
* GNN-Ensemble is not limited to GraphSage and can be applied to many other types of GNN models, such as FastGCN, ClusterGCN, GraphSaint, and GAT. It can be further extended beyond those GNN models provided in the PyTorch Geometric package and can serve as a universal add-on.
* GNN-Ensemble is not limited to the transductive setting, such as classification on Cora and PubMed. It can be applied to the inductive setting as well, such as classification on PPI and Reddit. In our experiments, GNN-Ensemble improves the F1 score by 1% \(\sim\) 2% under the transductive setting and by around 1% under the inductive setting.
* While the baseline models may converge to different local optima, all such sub-optimal results can be improved with GNN-Ensemble. For example, although the F1 score of GraphSage on PPI is 0.598, GNN-Ensemble improves its performance to 0.614. GAT on PPI converges well to 0.988, but GNN-Ensemble can still increase its F1 score to 0.993.
### _Overfitting Reduction_

We measure overfitting reduction on the Reddit dataset. In the vanilla setup, we follow the original split of 153,431 training nodes and 55,703 test nodes. In the reversed setup, we reverse the training and test data, i.e., 55,703 training nodes and 153,431 test nodes, so that the training set becomes much smaller. In addition, we purposely increase the hidden dimension of GraphSage from 256 to 2048 in order to obtain a heavily parameterized GNN model. As a result, overfitting becomes very evident in the reversed setup, since the GNN model has many more training parameters and much less training data. **Table VI** shows the results, and we make two observations:
* GNN-Ensemble is able to reduce overfitting. Specifically, the gap between the training and test performance of GNN-Ensemble is smaller than that of the single-model baseline. This observation holds for both the vanilla and reversed setups, and for both the 256- and 2048-dimensional embeddings. The performance of GNN-Ensemble on the test data is consistently better than that of the baseline.
* GNN-Ensemble reduces overfitting more effectively when the complexity of the model is higher. Under the reversed setting with a 2048-dimensional embedding, GNN-Ensemble increases the F1 score on the test set by 2%.
### _Adversarial Robustness_
Finally, we measure the adversarial robustness of GNN-Ensemble. We conduct the experiments using DeepRobust⁴ and consider four graph evasion attacks [42]: RL-S2V [38], PGD [39], GF-Attack [40], and IG-FGSM [41]. All four attacks perturb the input graph by adding and deleting edges; IG-FGSM, in addition, modifies the node features. We set the maximum perturbation threshold to 10%, i.e., the attacker is allowed to add or delete at most 10% of the edges. To satisfy the imperceptibility requirement of adversarial attacks, the attacker can only modify such a limited percentage of nodes or edges. We compare GNN-Ensemble with three representative adversarial defense approaches: GCN-SVD [48], GCN-Jaccard [41], and ProGNN [49]. These defense approaches are applied with their default settings in DeepRobust. This set of experiments is conducted on Cora and PubMed under the transductive setting. **Table VII** shows the adversarial robustness results. We make three observations:
Footnote 4: [https://github.com/DSE-MSU/DeepRobust](https://github.com/DSE-MSU/DeepRobust)
* For all four adversarial attacks on the graph, GNN-Ensemble consistently and significantly outperforms the baseline and all three defense methods on both datasets. Since the attacker is limited to modifying at most 10% of the edges, the majority of nodes and edges remain unchanged under the adversarial attack. Because GNN-Ensemble exploits the structural information of the graph and uses multiple subgraphs of the original graph for learning and prediction, the subgraphs containing a victim node are less likely to be impacted by the maliciously modified edges. Therefore, GNN-Ensemble achieves better classification performance on the victim nodes in an adversarially sabotaged graph.
* GNN-Ensemble is very effective in mitigating the adversarial effect of IG-FGSM, where the attack adds and deletes edges as well as modifies features. GNN-Ensemble trains the base GNN models on sampled substructures and subfeatures of the original graph, and the adversarial effect is much less prominent in the sampled subspaces. Therefore, GNN-Ensemble is very robust to such adversarial attacks.
* Due to the different optimization goals under every possible combination of substructures and subfeatures, it becomes very difficult to craft adversarial examples by optimizing the graph in favor of the attacker. Therefore, the different base models in GNN-Ensemble can hardly be jointly optimized to perform a cross-model attack [50].

| Model | Cora | PubMed | PPI | Reddit |
| --- | --- | --- | --- | --- |
| Fast-GCN | 0.674 / **0.692** | 0.718 / **0.735** | cannot converge | 0.937 / **0.951** |
| Cluster-GCN | 0.812 / **0.831** | 0.768 / **0.774** | 0.982 / **0.989** | 0.954 / **0.967** |
| GraphSage | 0.789 / **0.805** | 0.781 / **0.785** | 0.598 / **0.614** | 0.953 / **0.965** |
| GraphSaint | 0.787 / **0.806** | 0.751 / **0.764** | 0.98 / **0.989** | 0.964 / **0.968** |
| GAT | 0.832 / **0.849** | 0.779 / **0.786** | 0.988 / **0.993** | 0.967 / **0.979** |

TABLE V: GNN-Ensemble with different types of GNN models as the base model: aggregate 100 base models, each of which is trained with 70% subgraph and 50% subfeatures. Each cell reports baseline / GNN-Ensemble micro-F1.

| Setup | Method | benign | RL-S2V | PGD | GF-Attack | IG-FGSM |
| --- | --- | --- | --- | --- | --- | --- |
| vanilla | baseline | 0.789 | 0.712 | 0.72 | 0.726 | 0.025 |
| vanilla | GCN-SVD | 0.756 | 0.683 | 0.691 | 0.678 | 0.566 |
| vanilla | GCN-Jaccard | 0.773 | 0.741 | 0.745 | 0.737 | 0.736 |
| vanilla | ProGNN | 0.782 | 0.745 | 0.738 | 0.741 | 0.739 |
| vanilla | GNN-Ensemble | **0.805** | **0.769** | **0.773** | **0.761** | **0.775** |
| reverse | baseline | 0.781 | 0.709 | 0.715 | 0.714 | 0.011 |
| reverse | GCN-SVD | 0.738 | 0.688 | 0.684 | 0.689 | 0.557 |
| reverse | GCN-Jaccard | 0.774 | 0.745 | **0.746** | 0.743 | 0.725 |
| reverse | ProGNN | 0.776 | 0.723 | 0.730 | 0.731 | 0.754 |
| reverse | GNN-Ensemble | **0.785** | **0.751** | 0.744 | **0.748** | **0.759** |

TABLE VII: Adversarial robustness measurement: GNN-Ensemble aggregates 100 base models, each of which is trained with 70% subgraph and 50% subfeature.

| Setup | # embedding | baseline (training) | baseline (testing) | GNN-Ensemble (training) | GNN-Ensemble (testing) |
| --- | --- | --- | --- | --- | --- |
| vanilla | 256 | 0.9766 | 0.953 | 0.9648 | 0.965 |
| vanilla | 2048 | 0.9789 | 0.954 | 0.9781 | 0.970 |
| reverse | 256 | 0.9993 | 0.95 | 0.9987 | 0.966 |
| reverse | 2048 | 0.9999 | 0.951 | **0.9999** | **0.973** |

TABLE VI: Overfitting reduction measurement on the original and reversed Reddit dataset: GraphSage is used with two settings of hidden dimension: 256 and 2048. GNN-Ensemble aggregates 100 base models, each of which is trained with 70% subgraph and 50% subfeature.
## VI Conclusion
We propose a new graph learning method called GNN-Ensemble that constructs an ensemble of random decision graph neural networks. GNN-Ensemble consists of multiple base models that are trained with randomly selected substructures in the topological space and subfeatures in the node feature space. The discriminant function of GNN-Ensemble approximates the posterior probability of a given input in the original topological and feature space. GNN-Ensemble is easily parallelized, as each base model can be trained and perform inference independently. In addition, different types of GNN models can be used as the base model in the ensemble. Extensive experimental results on four real-world benchmark graph datasets show that GNN-Ensemble consistently outperforms all the baselines on various node classification tasks. Similar to random forests, GNN-Ensemble is also very effective in overcoming the overfitting problem. Last but not least, GNN-Ensemble significantly improves adversarial robustness against attacks on a single GNN model.
|
2301.12006 | Improved knowledge distillation by utilizing backward pass knowledge in
neural networks | Knowledge distillation (KD) is one of the prominent techniques for model
compression. In this method, the knowledge of a large network (teacher) is
distilled into a model (student) with usually significantly fewer parameters.
KD tries to better match the output of the student model to that of the teacher
model based on the knowledge extracted from the forward pass of the teacher
network. Although conventional KD is effective for matching the two networks
over the given data points, there is no guarantee that these models would match
in other areas for which we do not have enough training samples. In this work,
we address that problem by generating new auxiliary training samples based on
extracting knowledge from the backward pass of the teacher in the areas where
the student diverges greatly from the teacher. We compute the difference
between the teacher and the student and generate new data samples that maximize
the divergence. This is done by perturbing data samples in the direction of the
gradient of the difference between the student and the teacher. Augmenting the
training set by adding this auxiliary set improves the performance of KD
significantly and leads to a closer match between the student and the teacher.
Using this approach when data samples come from a discrete domain, such as
applications of natural language processing (NLP) and language understanding,
is not trivial. However, we show how this technique can be used successfully in
such applications. We evaluated the performance of our method on various tasks
in computer vision and NLP domains and got promising results. | Aref Jafari, Mehdi Rezagholizadeh, Ali Ghodsi | 2023-01-27T22:07:38Z | http://arxiv.org/abs/2301.12006v1 | # Improved knowledge distillation by utilizing backward pass knowledge in neural networks
###### Abstract
Knowledge distillation (KD) is one of the prominent techniques for model compression. In this method, the knowledge of a large network (teacher) is distilled into a model (student) with usually significantly fewer parameters. KD tries to better match the output of the student model to that of the teacher model based on the knowledge extracted from the forward pass of the teacher network. Although conventional KD is effective for matching the two networks over the given data points, there is no guarantee that these models would match in other areas for which we do not have enough training samples. In this work, we address that problem by generating new auxiliary training samples based on extracting knowledge from the backward pass of the teacher in the areas where the student diverges greatly from the teacher. We compute the difference between the teacher and the student and generate new data samples that maximize the divergence. This is done by perturbing data samples in the direction of the gradient of the difference between the student and the teacher. Augmenting the training set by adding this auxiliary set improves the performance of KD significantly and leads to a closer match between the student and the teacher. Using this approach when data samples come from a discrete domain, such as applications of natural language processing (NLP) and language understanding, is not trivial. However, we show how this technique can be used successfully in such applications. We evaluated the performance of our method on various tasks in computer vision and NLP domains and got promising results.
## 1 Introduction
During the last few years, we have witnessed the emergence of a huge number of cumbersome state-of-the-art deep neural network models in different fields of machine learning, including computer vision [27; 10], natural language processing [16; 12; 13; 3], and speech processing [1; 7]. Powerful servers are needed to deploy such large models, and running them on edge devices is often infeasible due to the limited memory and computational power of those devices [22]. On the other hand, users' privacy concerns, network reliability issues, and network delays increase the demand for offline machine learning solutions on edge devices. The field of neural model compression focuses on providing compression solutions such as quantization [11], pruning [26], tensor decomposition [24], and knowledge distillation (KD) [9] for large neural networks.
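To illustrate the procedure summarized in the abstract, the following is a minimal sketch of gradient-based auxiliary sample generation for continuous inputs; the squared-difference loss, the step size eps, and the step count are our own assumptions, and the discrete NLP case mentioned in the abstract requires a different treatment.

```python
import torch

def generate_auxiliary_samples(teacher, student, x, eps=0.01, steps=1):
    """Perturb inputs x in the direction that increases the
    teacher-student divergence (sign-gradient ascent)."""
    x_aux = x.clone().detach()
    for _ in range(steps):
        x_aux.requires_grad_(True)
        divergence = (teacher(x_aux) - student(x_aux)).pow(2).mean()
        grad, = torch.autograd.grad(divergence, x_aux)
        x_aux = (x_aux + eps * grad.sign()).detach()  # ascend the divergence
    return x_aux  # auxiliary samples where the two models disagree most
```
|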
2308.08227 | Inherent Redundancy in Spiking Neural Networks | Spiking Neural Networks (SNNs) are well known as a promising energy-efficient
alternative to conventional artificial neural networks. Subject to the
preconceived impression that SNNs are sparse firing, the analysis and
optimization of inherent redundancy in SNNs have been largely overlooked, thus
the potential advantages of spike-based neuromorphic computing in accuracy and
energy efficiency are interfered with. In this work, we pose and focus on three key
questions regarding the inherent redundancy in SNNs. We argue that the
redundancy is induced by the spatio-temporal invariance of SNNs, which enhances
the efficiency of parameter utilization but also invites lots of noise spikes.
Further, we analyze the effect of spatio-temporal invariance on the
spatio-temporal dynamics and spike firing of SNNs. Then, motivated by these
analyses, we propose an Advance Spatial Attention (ASA) module to harness SNNs'
redundancy, which can adaptively optimize their membrane potential distribution
by a pair of individual spatial attention sub-modules. In this way, noise spike
features are accurately regulated. Experimental results demonstrate that the
proposed method can significantly drop the spike firing with better performance
than state-of-the-art SNN baselines. Our code is available in
\url{https://github.com/BICLab/ASA-SNN}. | Man Yao, Jiakui Hu, Guangshe Zhao, Yaoyuan Wang, Ziyang Zhang, Bo Xu, Guoqi Li | 2023-08-16T08:58:25Z | http://arxiv.org/abs/2308.08227v1 | # Inherent Redundancy in Spiking Neural Networks
###### Abstract
Spiking Neural Networks (SNNs) are well known as a promising energy-efficient alternative to conventional artificial neural networks. Subject to the preconceived impression that SNNs are sparse firing, the analysis and optimization of inherent redundancy in SNNs have been largely overlooked, and thus the potential advantages of spike-based neuromorphic computing in accuracy and energy efficiency are interfered with. In this work, we pose and focus on three key questions regarding the inherent redundancy in SNNs. We argue that the redundancy is induced by the spatio-temporal invariance of SNNs, which enhances the efficiency of parameter utilization but also invites lots of noise spikes. Further, we analyze the effect of spatio-temporal invariance on the spatio-temporal dynamics and spike firing of SNNs. Then, motivated by these analyses, we propose an Advance Spatial Attention (ASA) module to harness SNNs' redundancy, which can adaptively optimize their membrane potential distribution by a pair of individual spatial attention sub-modules. In this way, noise spike features are accurately regulated. Experimental results demonstrate that the proposed method can significantly drop the spike firing with better performance than state-of-the-art SNN baselines. Our code is available in [https://github.com/BICLab/ASA-SNN](https://github.com/BICLab/ASA-SNN).
## 1 Introduction
By mimicking the spatio-temporal dynamic behaviors of biological neurons, Spiking Neural Networks (SNNs) pose a paradigm shift in information encoding and transmission [36, 37]. Spiking neurons only fire when the membrane potential is greater than the threshold (Figure 1**a**); in theory, these complex internal dynamics make the representation ability of spiking neurons more powerful than that of existing artificial neurons [31]. Moreover, spike-based binary communication (0/1 spikes) enables SNNs to be _event-driven_ when deployed on neuromorphic chips [3, 34], i.e., performing cheap synaptic Accumulation (AC) and bypassing computations on zero inputs or activations [6, 4].
For a long time, when referring to spike-based neuromorphic computing, people have naturally believed that its computation is sparse due to the event-driven feature. Subject to this preconceived impression, although it is generally agreed that sparse spike firing is the key to achieving high energy efficiency in neuromorphic computing, there has been a lack of systematic and in-depth analysis of redundancy in SNNs. Existing explorations are limited to specific methods for dropping spike counts. For instance, several algorithms have been proposed to exploit spike-aware sparsity regularization and compression by adding a penalty function [5, 52, 54, 51, 33, 25], designing network structures with fewer spikes using neural architecture search techniques [32, 24], or developing data-dependent models that regulate spike firing based on the input data [48, 50]. Generally, employing these methods to reduce spikes incurs a loss of accuracy or significant additional computation.
In this work, we provide a novel perspective to understand the redundancy of SNNs by analyzing the relationship between _spike firing_ and _spatio-temporal dynamics_ of spiking neurons. This analysis could be extended by asking three key questions. (i) **Which** spikes are redundant? (ii) **Why** is there redundancy in SNNs? (iii) **How** to efficiently drop the redundant spikes?
To perfectly demonstrate redundancy in SNNs, we select event-based vision tasks to observe spike responses. Event-based cameras, such as the Dynamic Vision Sensor (DVS) [27], are a novel class of bio-inspired vision sensors that only encode the vision scene's brightness change information into a stream of events (spike with address information)
for each pixel. As shown in Figure 1**b**, the red and green dots represent pixels that increase and decrease in brightness, respectively, and the gray areas without events indicate no change in brightness. However, although the information given in the input is human gait without background, some spike features extracted by the vanilla SNN focus on background information. As depicted in Figure 1**c**, the spiking neurons in the noise feature map fire a large number of spikes in the background region, which are redundant.
Unfortunately, noise features exist widely in both the temporal and spatial dimensions, but they exhibit some interesting regularities. We argue that the underlying reason for this phenomenon is a fundamental assumption of SNNs, known as _spatio-temporal invariance_ [22], which enables sharing weights for every location across all timesteps. This assumption improves parameter utilization efficiency while boosting the redundancy of SNNs. Specifically, by controlling the input time window of event streams, we can clearly observe the temporal and spatial changes of the spike features extracted by the SNN (see Figure 2). In the spatial dimension, there are many similar noise features, which can be referred to as ghost features [15, 16]. In the temporal dimension, although the information extracted by the SNN changes at different timesteps, the spatial position of the noise spike features is almost the same.
Recently, several works [14, 13, 12] have investigated the information loss caused by SNNs when quantizing continuous membrane potential values into discrete spikes. Inspired by these works, we recast our problem, the relationship between spike firing and the spatio-temporal dynamics of spiking neurons, as an investigation of the relationship between the membrane potential distribution and redundant spikes. Motivated by the observations that redundancy is highly correlated with spike feature patterns and neuron location, we present the Advanced Spatial Attention (ASA) module for SNNs, which can convert noise features into normal or null (no spike firing) features by shifting the membrane potential distribution. We conduct extensive experiments using a variety of network structures to verify the superiority of our method on five event-based datasets. Experimental results show that the ASA module can help SNNs reduce spikes and improve task performance concurrently. For instance, on the DVS128 Gait-day dataset [41], at the cost of negligible additional parameters and computations, the proposed ASA module decreases the baseline model's spike counts by 78.9% and increases accuracy by +5.0%. We summarize our contributions as follows:
1. We provide the first systematic and in-depth analysis of the inherent redundancy in SNNs by asking and answering three key questions, which are crucial to the high energy efficiency of spike-based neuromorphic computing but have long been neglected.
2. For the first time, we relate the redundancy of SNNs to the distribution of membrane potential, and design a simple yet efficient advanced spatial attention to help SNN optimize the membrane potential distribution and thus reduce redundancy.
3. Extensive experimental results show that the proposed ASA module can improve SNNs' performance and significantly drop noise spikes concurrently. This inspires us that two of the most important nature of spike-based neuromorphic computing, bio-inspired spatio-temporal dynamics and event-driven sparse computing, can be naturally incorporated to achieve better performance with lower energy consumption.
Figure 1: (a) Spatio-temporal dynamics of spiking neurons with binary spike input and output, synaptic weight \(w\), membrane potential \(U(t)\), threshold \(V_{th}\) and hard reset membrane potential \(V_{reset}\). (b) An example of an event stream. (c) Examples of changes in the spike responses of vanilla SNN and ASA-SNN, in terms of Membrane Potential Distribution (MPD) and spike feature. Each pixel value on the spike feature represents the firing rate of a neuron. Noise spike feature fires lots of spikes while concentrating on insignificant background information (large area red). The ASA module can shift the spike pattern of SNNs to drop spike counts by optimizing the MPD.
## 2 Related work
**Event-based vision and spike-based neuromorphic computing.** Due to unique advantages such as high temporal resolution and high dynamic range, DVS has broad application prospects in special visual scenarios, such as high-speed object tracking [55] and low-latency interaction [1]. Event-based vision is one of the typical advantageous application scenarios of SNNs, which can process event streams event-by-event to achieve minimum latency [10] and can be smoothly deployed on neuromorphic chips to realize ultra-low energy cost through spike-based event-driven sparse computing [37, 35, 36, 6]. As an example, a recent edge computing device called Speck¹ integrates an SNN-enabled asynchronous neuromorphic chip with a DVS camera [10]; its peak power is at the mW level, and its latency is at the ms level. In this work, we investigate SNNs' inherent redundancy using a variety of event-based datasets to further explore their enticing potential for accuracy and energy efficiency.
Footnote 1: [https://www.synsense-neuromorphic.com/products/speck/](https://www.synsense-neuromorphic.com/products/speck/)
**Attention in SNNs.** Attention methods have been incorporated into deep learning with tremendous success, motivated by the fact that humans can focus on salient visual information in complicated scenes easily and efficiently. A popular research direction is to use attention as an auxiliary module to boost the representation capacity of ANNs [20, 44, 47, 26, 11]. In line with this idea, Yao _et al._ [48] first suggested using an extra plug-and-play temporal-wise attention module for SNNs to bypass a few unnecessary input timesteps. Subsequently, a number of works utilized multi-dimensional attention modules for SNNs, combining temporal-wise, spatial-wise, or channel-wise attention [30, 58, 53, 50, 49], where Yao _et al._ [50] highlighted that attention can aid SNNs in reducing spike firing while enhancing accuracy. However, to produce attention scores and refine membrane potentials, multi-dimensional attention inevitably adds considerable extra computational burden to SNNs. In this work, we exclusively employ spatial attention, a choice inspired by our investigation of the redundancy of SNNs.
**Membrane Potential Distribution in SNNs.** Rectifying the MPD is crucial for SNN training because SNNs are more vulnerable to gradient vanishing or explosion, since spikes are discontinuous and non-differentiable. Around this point, researchers have made many advances in SNN training, such as normalization techniques [56, 46], shortcut design [21, 7], extensions with more learnable parameters [8, 38], and distribution loss design [14, 13]. In contrast to prior publications, we concentrate on the connection between the MPD and redundancy, a topic that is typically disregarded in the SNN community.
## 3 SNN Redundancy Analysis
### SNN Fundamentals
The basic computational unit of an SNN is the spiking neuron, which is an abstract model of the dynamic mechanisms of biological neurons [18]. The Leaky Integrate-and-Fire (LIF) model [31] is one of the most commonly used spiking neuron models, since it establishes a good balance between a simplified mathematical form and the complex dynamics of biological neurons. We describe the LIF-SNN layer in its iterative representation form [45]. First, the LIF layer performs the following integration operation,
\[\mathbf{U}^{t,n}=\mathbf{H}^{t-1,n}+\mathbf{X}^{t,n}, \tag{1}\]
where \(n\in\{1,\cdots,N\}\) and \(t\in\{1,\cdots,T\}\) denote the layer and timestep, \(\mathbf{U}^{t,n}\) means the membrane potential which is produced by coupling the spatial feature \(\mathbf{X}^{t,n}\) and the temporal information \(\mathbf{H}^{t-1,n}\), and \(\mathbf{X}^{t,n}\) can be done by convolution operations,
\[\mathbf{X}^{t,n}=\mathrm{BN}\left(\mathrm{Conv}\left(\mathbf{W}^{n},\mathbf{S}^{t,n-1} \right)\right), \tag{2}\]
where \(\mathrm{BN}(\cdot)\) and \(\mathrm{Conv}(\cdot)\) denote batch normalization [23] and the convolution operation, respectively, \(\mathbf{W}^{n}\) is the weight matrix, \(\mathbf{S}^{t,n-1}(n\neq 1)\) is a spike tensor from the previous layer that contains only 0s and 1s, and \(\mathbf{X}^{t,n}\in\mathbb{R}^{c_{n}\times h_{n}\times w_{n}}\). Then, the fire and leak mechanisms inside the spiking neurons are executed, respectively, as
\[\mathbf{S}^{t,n}=\mathrm{Hea}\left(\mathbf{U}^{t,n}-V_{th}\right), \tag{3}\]
and
\[\mathbf{H}^{t,n}=V_{reset}\mathbf{S}^{t,n}+\left(\beta\mathbf{U}^{t,n}\right)\otimes\left( \mathbf{1}-\mathbf{S}^{t,n}\right), \tag{4}\]
where \(V_{th}\) is the threshold that determines whether the output spike tensor \(\mathbf{S}^{t,n}\) fires or stays at zero, \(\mathrm{Hea}(\cdot)\) is a Heaviside step function that satisfies \(\mathrm{Hea}\left(x\right)=1\) when \(x\geq 0\) and \(\mathrm{Hea}\left(x\right)=0\) otherwise, \(V_{reset}\) denotes the reset potential that is applied after a spike is emitted, \(\beta=e^{-\frac{dt}{\tau}}<1\) is the decay factor with membrane time constant \(\tau\), and \(\otimes\) denotes element-wise multiplication. When an entry of \(\mathbf{U}^{t,n}\) is greater than the threshold \(V_{th}\), the corresponding entry of the spike output \(\mathbf{S}^{t,n}\) is activated (Eq. 3). Meanwhile, that entry of the membrane potential is reset: since \(\mathbf{1}-\mathbf{S}^{t,n}\) is 0 there, the temporal output \(\mathbf{H}^{t,n}\) takes the value \(V_{reset}\) (Eq. 4). Otherwise, no spike is emitted (\(\mathbf{S}^{t,n}=0\)), and the decayed membrane potential \(\beta\mathbf{U}^{t,n}\) is carried over as the temporal output \(\mathbf{H}^{t,n}\).
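The iterative dynamics of Eqs. (1)-(4) can be sketched as follows (inference only; training SNNs additionally requires a surrogate gradient for the Heaviside step, which is omitted here, and the flat tensor layout is our own simplification):

```python
import torch

def lif_forward(X: torch.Tensor, V_th=1.0, V_reset=0.0, beta=0.5) -> torch.Tensor:
    """X: [T, N] input currents for T timesteps and N neurons.
    Returns the binary spike train S of shape [T, N]."""
    T, N = X.shape
    H = torch.zeros(N)                          # temporal state H^{t-1}
    spikes = []
    for t in range(T):
        U = H + X[t]                            # integrate, Eq. (1)
        S = (U >= V_th).float()                 # fire, Eq. (3)
        H = V_reset * S + beta * U * (1 - S)    # hard reset and leak, Eq. (4)
        spikes.append(S)
    return torch.stack(spikes)
```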
### Redundancy Analysis
We first define several terms to characterize redundancy in SNNs, as follows.
**Definition 1**.: _Spike Firing Rate (SFR)_: We input all the samples on the test set into the network and count the spike distribution. We define a Neuron's SFR (N-SFR) at the \(t\)-th timestep as the ratio of the number of samples generating spikes on this neuron to the number of all tested samples. Similarly, at the \(t\)-th timestep, we define the SFR of a Channel (C-SFR) or this Timestep (T-SFR) as the average of the SFR values of all neurons in this channel or the entire network at this timestep. We define the Network Average SFR (NASFR) as the average of T-SFR over all timesteps \(T\).
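These quantities can be computed directly from a recorded binary spike tensor; the following is a minimal sketch, where the tensor layout is our own convention.

```python
import torch

def spike_firing_rates(S: torch.Tensor):
    """S: [B, T, C, H, W] binary spikes recorded over B test samples.
    Returns N-SFR, C-SFR, T-SFR, and NASFR as in Definition 1."""
    n_sfr = S.float().mean(dim=0)         # [T, C, H, W]: per-neuron, per-timestep
    c_sfr = n_sfr.mean(dim=(-2, -1))      # [T, C]: average over a channel's neurons
    t_sfr = n_sfr.mean(dim=(1, 2, 3))     # [T]: average over the whole network
    nasfr = t_sfr.mean()                  # scalar: average over all timesteps
    return n_sfr, c_sfr, t_sfr, nasfr
```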
**Definition 2**.: _Spike features._ We input all the samples on the test set into the network and define the average output of a channel at the \(t\)-th timestep as a spike feature, with each pixel's value being N-SFR.
**Definition 3**.: _Ghost features._ There are numerous feature map pairs that resemble one another like ghosts [15, 16]. We call these feature maps ghost features.
**Definition 4**.: _Spike patterns._ Spike features display a variety of patterns, and various patterns extract different types of information. We empirically refer to the features that focus on background information as the noise pattern since there is no background information in the input, and collectively refer to other features as the normal pattern.
Based on these definitions, we investigate the redundancy of SNNs in four granularities: spatial, temporal, channel, and neuron.
**Observation 1**.: _In the spatial granularity, there are lots of ghost features in the spike response._
Redundancy is inevitable in over-parameterized neural networks. For instance, from the perspective of feature maps, there are many ghost features in Conv-based ANNs (CNNs) [15, 57]. The same is true for SNNs, as demonstrated in Figure 2**a**. Plotting all spike features at the same timestep, we see that certain features concentrate on background information with a huge region of red, while others concentrate on gait information with a large area of blue, and ghost features can be seen in both patterns.
**Observation 2**.: _In the temporal granularity, T-SFR at different timesteps does not change much._
Given that every timestep shares the weights of the 2D convolution for spatial modeling, the level of redundancy increases significantly for SNNs that perform temporal modeling. To show this, we plot the spike features of the same channel at various timesteps in Figure 2**b**. We see that, for a fixed channel, the features derived at various timesteps differ, i.e., the human gait shifts progressively to the right as the timestep increases. Interestingly, the spike features of the same channel at different timesteps are essentially of the same kind: almost all background information, or almost all human gait information. This demonstrates that spatio-temporal invariance results in similar spike features at different timesteps. It also implies that the redundancy in SNNs is linearly connected to the number of timesteps.
**Observation 3**.: _In the channel granularity, the C-SFR is closely related to the spike patterns learned by this channel. In the neuron granularity, the N-SFR is tightly linked to the location of neurons._
We zoom in to highlight two typical spike features that pertain to various patterns in Figure 1**c**. We see that the two features have substantially different C-SFRs. The spike feature of the noise pattern fires many spikes while focusing on trivial background information. By contrast, the spike feature of the normal pattern with lower C-SFR focuses on salient gait information in a condensed region. Furthermore, the N-SFRs of neurons in the background region of normal features are almost zero, but the N-SFRs of neurons in the same region in noise features are very high.
**Definition 5**.: _Membrane Potential Distribution (MPD)._ We input all the samples on the test set into the network. In the \(c\)-th channel of the \(n\)-th layer, we count the membrane potential values of all neurons at the \(t\)-th timestep. We can represent the membrane potential distribution of the channel by a 2D histogram, where the horizontal axis is the value of the membrane potential, and the vertical axis is the number of neurons located in a certain window.
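Definition 5 amounts to a per-channel, per-timestep histogram of membrane potentials; a minimal sketch (the bin count is arbitrary):

```python
import torch

def membrane_potential_distribution(U: torch.Tensor, bins: int = 50):
    """U: [B, H, W] membrane potentials of one channel at one timestep,
    collected over all B test samples. Returns (counts, bin_edges)."""
    counts, edges = torch.histogram(U.float().flatten(), bins=bins)
    return counts, edges
```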
**Observation 4**.: _Membrane potential distributions are highly correlated with spike patterns._

Figure 2: Inherent redundancy in SNNs exists in both spatial and temporal granularities, originating from network _over-parameterization_ and _spatio-temporal invariance_, respectively. (a) Spike features (averaging the spike tensors \(\mathbf{S}^{t,n}\) over all samples) of different channels at the same timestep. Each pixel indicates the firing rate of a spiking neuron: the bluer the pixel, the closer the firing rate is to 0; the redder the pixel, the closer the firing rate is to 1. In the noise features, the background region is large and nearly entirely red, indicating that many spikes are produced. (b) Spike features of the same channel at different timesteps.
Note, to facilitate the analysis of the effect of the proposed method on the MPD and the spike feature, we discuss them in detail later in Section 5.4.
## 4 Methodology
**Motivation.** We can infer the following three empirical conclusions from the observations above: (1) features can basically be separated into noise and normal patterns; (2) the quality of a feature is determined by the MPD of its channel; (3) the firing of spiking neurons is related to their location. Based on these observations, we concentrate our optimization on the MPD of each channel, i.e., we perform spatial attention. Meanwhile, considering that the features fall into two patterns, we exploit independent spatial attention modules to optimize them separately, which we call advanced spatial attention.
**Method.** As shown in Figure 3, we implement our ASA module in two steps. We first exploit a channel separation technique to separate all features into two complementary groups based on their importance; then individual SA sub-modules are applied to the two groups of features. Suppose \(\mathbf{X}^{n}=\left[\cdots,\mathbf{X}^{t,n},\cdots\right]\in\mathbb{R}^{T\times c_{n}\times h_{n}\times w_{n}}\) is an intermediate feature map used as the input tensor; this two-step process can be summarized by the following equations:
\[\mathbf{M}_{1},\mathbf{M}_{2}=g_{2}(g_{1}(\mathbf{X}^{n})), \tag{5}\]
\[\overline{\mathbf{X}}^{n}=f_{S1}(\mathbf{X}^{n}\otimes\mathbf{M}_{1})\oplus f_{S2}(\mathbf{X} ^{n}\otimes\mathbf{M}_{2}), \tag{6}\]
where \(\mathbf{M}_{1},\mathbf{M}_{2}\in\mathbb{R}^{T\times c_{n}\times 1\times 1}\) are the complementary mask (separation) maps that contain only 0 and 1 elements, \(g_{1}(\cdot)\) is a function that assesses the channel's importance, \(g_{2}(\cdot)\) is the separation policy function used to generate mask maps for feature grouping, \(f_{S1}(\cdot)\) and \(f_{S2}(\cdot)\) are SA functions with the same expression, and \(\overline{\mathbf{X}}^{n}\) is the output feature tensor, which has the same size as \(\mathbf{X}^{n}\). During multiplication, the mask scores are broadcast (copied) along the spatial dimensions accordingly. Finally, compared with \(\mathbf{U}^{t,n}\) of the vanilla SNN in Eq. 1, the _new membrane potential_ behaviors of an ASA-SNN layer follow
\[\mathbf{U}^{t,n}=\mathbf{H}^{t-1,n}+\overline{\mathbf{X}}^{t,n}. \tag{7}\]
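To make the two-step computation in Eqs. (5)-(7) concrete, here is a minimal PyTorch sketch of the forward pass; the sub-module interfaces (`g1`, `g2`, `sa1`, `sa2`) stand in for the functions defined above and are assumptions rather than the authors' released code.

```python
import torch

def asa_forward(x, h_prev, g1, g2, sa1, sa2):
    """Sketch of Eqs. (5)-(7): x has shape (T, C, H, W).

    g1: (T, C, H, W) -> (T, C, 1, 1) channel-importance map.
    g2: importance map -> complementary 0/1 masks (M1, M2).
    sa1, sa2: spatial-attention functions applied per group.
    h_prev: decayed membrane potential H^{t-1,n} (applied per timestep).
    """
    m1, m2 = g2(g1(x))                 # Eq. (5): assess and separate channels
    x_bar = sa1(x * m1) + sa2(x * m2)  # Eq. (6): masks broadcast over (H, W)
    return h_prev + x_bar              # Eq. (7): new membrane potential U^{t,n}
```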
Empirically, the design of \(g_{1}(\cdot)\) is critical to the task accuracy, as well as the number of additional parameters and computations. The classic channel attention models in CNNs [20, 44, 39, 47, 26] generally judge the importance of the channel by fusing the global degree information (max pooling) and local significance information (average pooling) of the features. Inspired by these works, here we design two schemes for \(g_{1}(\cdot)\), one that is learnable (_ASA-1_) and the other that directly judges importance based on pooled information (_ASA-2_).
As shown in Figure 3(b), temporal-channel features are aggregated by using both average-pooling and max-pooling operations, which infer two different tensors \(\mathbf{F}_{avg},\mathbf{F}_{max}\in\mathbb{R}^{T\times c_{n}\times 1\times 1}\).
In ASA-1, we get the importance map \(\mathbf{M}\) by
\[\mathbf{M}^{\prime}=\frac{1}{2}\otimes(\mathbf{F}_{avg}+\mathbf{F}_{max})+\alpha\otimes \mathbf{F}_{avg}+\gamma\otimes\mathbf{F}_{max}, \tag{8}\]
\[\mathbf{M}=\sigma\left(\mathbf{W}_{2}^{n}(\mathrm{ReLU}(\mathbf{W}_{1}^{n}(\mathbf{M}^{\prime })))\right)\,, \tag{9}\]
where \(\alpha\) and \(\gamma\) are trainable parameters initialised with 0.5, \(\sigma\) denotes the sigmoid function, \(\mathbf{W}_{1}^{n}\in\mathbb{R}^{\frac{T}{r}\times T}\) and \(\mathbf{W}_{2}^{n}\in\mathbb{R}^{T\times\frac{T}{r}}\) are trainable parameters independent at each layer, and \(r\) represents the dimension reduction factor. Note that \(\mathbf{M}^{\prime},\mathbf{M}\in\mathbb{R}^{T\times c_{n}\times 1\times 1}\); we share \(\mathbf{W}_{1}^{n}\) and \(\mathbf{W}_{2}^{n}\) on the channel dimension.
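As one concrete reading of Eqs. (8)-(9), a minimal PyTorch sketch of the learnable ASA-1 variant of \(g_{1}(\cdot)\) could look as follows; the module and variable names and the bias-free linear layers are assumptions, while the 0.5 initialisation and the channel-wise weight sharing follow the description above.

```python
import torch
import torch.nn as nn

class ASA1Importance(nn.Module):
    """Sketch of ASA-1 g1: (T, C, H, W) -> importance map M of (T, C, 1, 1)."""

    def __init__(self, T: int, r: int = 4):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))   # init 0.5, Eq. (8)
        self.gamma = nn.Parameter(torch.tensor(0.5))
        self.w1 = nn.Linear(T, T // r, bias=False)     # W1: T -> T/r
        self.w2 = nn.Linear(T // r, T, bias=False)     # W2: T/r -> T

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f_avg = x.mean(dim=(2, 3), keepdim=True)       # (T, C, 1, 1)
        f_max = x.amax(dim=(2, 3), keepdim=True)
        m = 0.5 * (f_avg + f_max) + self.alpha * f_avg + self.gamma * f_max
        # share W1/W2 over channels: apply the MLP along the T dimension
        m = m.squeeze(-1).squeeze(-1).transpose(0, 1)  # (C, T)
        m = self.w2(torch.relu(self.w1(m)))            # Eq. (9) before sigmoid
        return torch.sigmoid(m).transpose(0, 1).unsqueeze(-1).unsqueeze(-1)
```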
In ASA-2, we set \(\mathbf{M}=\mathbf{M}^{\prime}\) directly. Then, we get \(\mathbf{M}_{1}\) and \(\mathbf{M}_{2}\), denoting the important and sub-important channel indexes respectively, by combining the \(y\)-th largest values of the two dimensions in \(\mathbf{M}\).
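Since the separation policy \(g_{2}(\cdot)\) admits several implementations, we sketch one plausible reading in Python below; the top-\(y\) thresholding rule and the function names are assumptions.

```python
import torch

def g2_separate(m: torch.Tensor, y: int):
    """One plausible g2: split channels into complementary groups.

    m: importance map of shape (T, C, 1, 1). Channels whose score reaches
    the y-th largest value (per timestep) go to M1; the rest to M2.
    The exact selection rule here is an assumption, not the paper's code.
    """
    scores = m.flatten(1)                            # (T, C)
    thresh = scores.topk(min(y, scores.size(1)), dim=1).values[:, -1:]
    m1 = (scores >= thresh).float().view_as(m)       # important channels
    m2 = 1.0 - m1                                    # complementary mask
    return m1, m2
```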
Figure 3: Details of ASA-SNN. The ASA module is divided into two steps: (b) Channel separation and (c) Group-based SA, and consists of four functions, \(g_{1}(\cdot)\), \(g_{2}(\cdot)\), \(f_{S1}(\cdot)\), and \(f_{S2}(\cdot)\).
## 5 Experiments
For an event stream, we exploit the frame-based representation [48, 8] as the preprocessing method to convert it into an event frame sequence. Suppose the interval between two frames (i.e., the temporal resolution) is \(dt\) and there are \(T\) frames (i.e., timesteps); then the length of the input event stream is \(t_{lat}=dt\times T\) milliseconds. After processing these divided frames through the SNN, a prediction can be retrieved.
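For readers unfamiliar with this preprocessing, the sketch below bins an event stream into \(T\) frames of \(dt\) milliseconds each; the event fields, units, and the count-based accumulation rule are assumptions rather than the exact pipeline of [48, 8].

```python
import numpy as np

def events_to_frames(events, dt_ms, T, H, W):
    """Bin events (t_us, x, y, p) into T frames of dt_ms each.

    Returns an array of shape (T, 2, H, W): per-pixel event counts,
    one channel per polarity. Field names and units are assumptions.
    """
    frames = np.zeros((T, 2, H, W), dtype=np.float32)
    t0 = events[0][0]                                  # first event's timestamp
    for t_us, x, y, p in events:
        idx = int((t_us - t0) / (dt_ms * 1000.0))      # frame index
        if 0 <= idx < T:
            frames[idx, int(p), int(y), int(x)] += 1.0
    return frames
```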
### Experimental Setup
We evaluate our method on five datasets, all generated by recording actions in real scenes. DVS128 Gesture [1], DVS128 Gait-Day [41], and DVS128 Gait-Night [42] were captured by a 128×128-pixel DVS128 camera. As their names imply, Gesture comprises hand gestures, while Gait-Day and Gait-Night include human gaits in daylight and at night, respectively. DailyAction-DVS [29] and HAR-DVS [40] are acquired by a DAVIS346 camera with a spatial resolution of 346×260, of which HAR-DVS has 300 classes and 107,646 samples and is currently the _largest_ event-based human activity recognition (HAR) dataset. The raw HAR-DVS exceeds 4TB; its authors convert each event stream into frames and randomly sample 8 frames to form a new HAR-DVS for ease of processing.
We execute the baseline for each group of ablation trials, then plug in the proposed ASA module and run the model again (Table 1). Each group of trials for vanilla and ASA-SNNs employs the same hyper-parameters, training methods, and other training conditions2. In all experiments, we exploit a total of three baselines with different structures, carefully selected for the various datasets to examine the relationship between spike firing, the dataset, and the network structure. The first is the shallow three-layer Conv-based LIF-SNN presented in [48, 50]. The second is a deeper five-layer Conv-based LIF-SNN, following [8]. Finally, the Res-SNN-18 [7] in the SpikingJelly framework3 is used to evaluate the large datasets.
Footnote 2: Details of datasets and training are given in the Supplementary.
Footnote 3: [https://github.com/fangwei123456/spikingjelly](https://github.com/fangwei123456/spikingjelly)
### Ablation Study for ASA Module
In terms of accuracy and NASFR, we present the main results in Table 1. ASA-SNN achieves higher task accuracy with lower spike firing in all ablation studies. The performance and energy gains are more noticeable, particularly when the network structure is small.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & Model & Acc.(\%) & NASFR \\ \hline \multirow{4}{*}{Gesture} & LIF-SNN [48] & 91.3 & 0.176 \\ & **+ ASA (Ours)** & **95.2(+3.9)** & 0.038(**-78.4\%**) \\ \cline{2-4} & LIF-SNN [8] & 95.5 & 0.023 \\ & **+ ASA (Ours)** & **97.7(+2.2)** & 0.018(**-21.7\%**) \\ \hline \multirow{2}{*}{Gait-day} & LIF-SNN [48] & 88.6 & 0.214 \\ & **+ ASA (Ours)** & **93.6(+5.0)** & 0.045(**-78.9\%**) \\ \hline \multirow{2}{*}{Gait-night} & LIF-SNN [48] & 96.4 & 0.197 \\ & **+ ASA (Ours)** & **98.6(+2.2)** & 0.126(**-36.0\%**) \\ \hline \multirow{2}{*}{DailyAction -DVS} & LIF-SNN [8] & 92.5 & 0.017 \\ & **+ ASA (Ours)** & **94.6(+2.1)** & 0.013(**-23.5\%**) \\ \hline \multirow{2}{*}{HAR-DVS} & Res-SNN-18 [7] & 45.5 & 0.206 \\ & **+ ASA (Ours)** & **47.1(+1.6)** & 0.183(**-11.2\%**) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Main results of vanilla vs. ASA-SNNs (ASA-1). Except for HAR-DVS, reported accuracies are the average of five replicates.
Figure 4: Diagram of the spatial attention module. As illustrated, the spatial attention exploits two outputs that are pooled along the temporal-channel axis and forwards them to a \(3\times 3\) convolution layer.
For example, on Gait-day, plugging the ASA module into a three-layer SNN [48] can reduce the spike counts by 78.9% and improve the performance by +5.0 percent. This is crucial for the deployment of SNN algorithms on neuromorphic chips, which usually have strict memory limitations [3, 34, 36]. The ASA module also performs well on the deep Res-SNN. For instance, on HAR-DVS, the ASA-SNN outperforms the original SNN by +1.6 percent while firing fewer spikes. In addition, we provide more ablation studies on the ASA module in the Supplementary.
Although it is beyond the scope of this work, by observing the results in Table 1, we raise another complex and important question: "_What factors affect the redundancy of SNNs?_" Intuitively, we could exploit NASFR as a redundancy indicator for SNNs. We argue that the NASFR of an SNN depends on various factors, the core of which include dataset size, network size, spiking neuron types, etc. For instance, on Gesture, the NASFRs of the three-layer [48] and five-layer vanilla SNN [8] are 0.176 and 0.023, respectively. Empirically, the vanilla SNN's NASFR also affects the function of the ASA module, where it may be easier to reduce spikes in SNNs with more redundancy. We hope that these observations will inspire more theoretical and optimization work on redundancy.
### Comparison with the State-of-the-Art
In Table 2, we make a comprehensive comparison with prior works in terms of input temporal window and accuracy. Since some datasets were created recently, there is a lack of benchmarks in the field of SNNs. In this paper, we benchmark these datasets using models from the open-source framework SpikingJelly and fill in the corresponding accuracies in Table 1 and Table 2. We can see that on four small datasets, ASA-SNN can produce SOTA or comparable performance. Compared to GCN methods [41, 42] with full input, we observe that SNNs can always achieve higher performance with less input (i.e., smaller \(dt\times T\)). Moreover, on the largest HAR-DVS dataset, our Top-1 accuracy is 47.1% based on Res-SNN-18, which is comparable to the ANN-based benchmark results from 46.9% to 51.2%. This is a reasonable result since SNNs employ binary spikes, generally gaining higher energy efficiency at the expense of accuracy.
### Comparison with Other Attention SNNs
In this work, based on redundancy analysis, we design the ASA module, which only performs spatial attention. As mentioned, the current practice of attention mechanisms in SNNs [50, 30, 58, 49] is dominated by multi-dimensional composition. An easily overlooked fact is that adding attention modules inevitably introduces additional computation. These extra computations are trivial in CNNs, but require special care in SNNs, as otherwise the energy advantage of attention SNNs is lost. Specifically, the energy shift between vanilla and attention SNNs can be computed as
\[\Delta_{E}=E_{MAC}\cdot\Delta_{MAC}-E_{AC}\cdot\Delta_{AC}, \tag{11}\]
where \(E_{MAC}=4.6pJ\) and \(E_{AC}=0.9pJ\) represent the energy cost of a Multiply-and-Accumulate (MAC) and an Accumulate (AC) operation [19], and \(\Delta_{MAC}\) and \(\Delta_{AC}\) represent the additional MAC operations and the reduced AC operations caused by the attention modules, respectively (the detailed energy evaluation is in the Supplementary). We need to try our best to make the benefit (\(E_{AC}\cdot\Delta_{AC}\)) outweigh the cost (\(E_{MAC}\cdot\Delta_{MAC}\)).
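Plugging the constants into Eq. (11), the break-even condition reduces to \(\Delta_{AC}>(E_{MAC}/E_{AC})\cdot\Delta_{MAC}\approx 5.1\,\Delta_{MAC}\); a minimal sketch with the operation counts as user-supplied inputs:

```python
E_MAC_PJ, E_AC_PJ = 4.6, 0.9  # per-operation energies from [19], in pJ

def energy_shift_pj(delta_mac: float, delta_ac: float) -> float:
    """Eq. (11): a positive value means the attention module costs net energy."""
    return E_MAC_PJ * delta_mac - E_AC_PJ * delta_ac

# An attention module pays off only if it removes more than
# E_MAC / E_AC ~= 5.11 AC operations per added MAC operation.
print(energy_shift_pj(delta_mac=1.0, delta_ac=6.0))  # -0.8 pJ: net saving
```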
\begin{table}
\begin{tabular}{c c c c} \hline \hline Dataset & Methods & \(dt\times T\) & Acc. (\%) \\ \hline \multirow{6}{*}{Gesture} & 12 layers CNN [1] & \(1\times 120\) & 92.6 \\ & PLIF-SNN [8] & \(300\times 20\) & 97.6 \\ & Res-SNN-18 [48] & \(375\times 16\) & 97.9 \\ & MA-SNN [50] & \(300\times 20\) & **98.2** \\ \cline{2-4} & This Work & \(300\times 20\) & 97.7 \\ \hline \multirow{6}{*}{Gait-day} & EV-Gait GCN [41] & \(4400\times 1\) & 89.9 \\ & TA-SNN [48] & \(15\times 60\) & 88.6 \\ & 3D GCN [42] & \(1500\times 1\) & 86.0 \\ & MA-SNN [50] & \(15\times 60\) & 92.3 \\ \cline{2-4} & This Work & \(15\times 60\) & **93.6** \\ \hline \multirow{6}{*}{Gait-night} & TA-SNN [48] & \(15\times 60\) & 96.4 \\ & 3D GCN [42] & \(5500\times 1\) & 96.0 \\ \cline{2-4} & This Work & \(15\times 60\) & **98.6** \\ \hline \multirow{6}{*}{DailyAction -DVS} & HMAX-SNN [28] & - & 76.9 \\ & Motion-SNN [29] & - & 90.3 \\ \cline{1-1} & PLIF-SNN [8] & \(120\times 36\) & 92.5 \\ \cline{1-1} \cline{2-4} & This Work & \(120\times 36\) & **94.6** \\ \hline \multirow{6}{*}{HAR-DVS} & Res-CNN-18 [17] & \(T=8\) & 49.2 \\ & ACTION-Net [43] & \(T=8\) & 46.9 \\ \cline{1-1} & TimeSformer [2] & \(T=8\) & 50.8 \\ \cline{1-1} & SlowFast [9] & \(T=8\) & 46.5 \\ \cline{1-1} & ES-Transformer [40] & \(T=8\) & **51.2** \\ \cline{1-1} & Res-SNN-18 [7] & \(T=8\) & 45.5 \\ \cline{1-1} \cline{2-4} & This Work & \(T=8\) & 47.1 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The comparison between the proposed methods and existing SOTA techniques on five event-based vision datasets. Note, all the results of the ANN models in HAR-DVS in this table are taken from [40]. (Bold: the best)
In Table 3, we compare the number of extra parameters and computations needed by various attention modules. We see that the ASA module is a cost-effective solution: less \(\Delta_{MAC}\) (just 2.6M) with better or comparable performance. For instance, a 78.9% decrease in spike firing results in a 65% reduction in energy consumption in the TCSA-SNN [50] but a 76% reduction in our ASA-SNN. Lastly, we highlight the ASA-2 design, which adds only 114 parameters yet obtains a nice performance improvement on Gesture (albeit it is not stable).
### Result Analysis
**Spike patterns in ASA-SNNs.** We re-examine the spike response in ASA-SNNs as we did in Section 3.2. In the spatial granularity, the spike patterns in ASA-SNNs are altered. As shown in Figure 5(a), there are almost no noise features, but some null features without spikes appear. In the temporal granularity, spatio-temporal invariance still holds. As depicted in Figure 5(b), spike features of the same channel at different timesteps are similar.
**Membrane Potential Distribution (MPD) and spike pattern.** We already know that the redundancy in SNNs depends directly on the learned spike patterns. Therefore, we are interested in the question of "how the spike feature changes", which can help us understand the dynamics inside the network and inspire future work. Here we analyze the relationship between the MPD and the spike feature (pattern). We define the following indicator.
**Definition 6.**_Peak-to-threshold distance (PTD)._ We pick out the three highest pillars in the membrane potential distribution and obtain the peak interval by averaging these pillars' membrane potential intervals. We then define the peak-to-threshold distance as the difference between the center point of the peak interval and the threshold.
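A minimal sketch of Definition 6, assuming the MPD is given as a histogram (bin counts plus bin edges); the variable names are ours.

```python
import numpy as np

def peak_to_threshold_distance(counts, edges, threshold):
    """PTD (Definition 6): center of the peak interval minus the threshold."""
    counts, edges = np.asarray(counts), np.asarray(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])   # bin centers of the MPD
    top3 = np.argsort(counts)[-3:]             # the three highest pillars
    peak_center = centers[top3].mean()         # average their intervals
    return peak_center - threshold
```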
**Observation 5.**_The PTD and variance of the membrane potential distribution of a channel can be exploited to measure the quality of the spike feature extracted by this channel to a certain extent. When the value of PTD is near 0 or greater than 0, the membrane potential of most spiking neurons on a map lies to the right of the threshold. Consequently, most neurons have a relatively high neuron spike firing rate, and intuitively, the pattern learned by the channel is background noise, since the key information is usually located within a small area. The variance measures the degree of focus._
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Model & Acc. (\%) & Params (\(\uparrow\)) & NASFR & \(\Delta_{MAC}\) (\(\uparrow\)) \\ \hline Vanilla SNN [48] & 88.6 & 2,323,531 & 0.214 & - \\ + SA [50] & 89.5(+0.9) & +294 & 0.091(-57.4\%) & +14.3M \\ + TCSA [50] & 92.3(+3.7) & +24,126 & **0.045(-78.9\%)** & +27.3M \\ + **ASA-1 (Ours)** & **93.6(+5.0)** & +10,914 & **0.045(-78.9\%)** & **+2.6M** \\ + **ASA-2 (Ours)** & 89.6(+1.0) & **+114** & 0.088(-58.9\%) & **+2.6M** \\ \hline \hline Vanilla SNN [48] & 91.3 & 2,323,531 & 0.176 & - \\ + SA [50] & 92.6(+1.3) & +294 & 0.073(-58.5\%) & +14.3M \\ + TCSA [50] & 96.5(**+5.2**) & +24,126 & **0.029(-83.5\%)** & +27.3M \\ + **ASA-1 (Ours)** & 95.2(+3.9) & +10,914 & 0.038(-78.4\%) & **+2.6M** \\ + **ASA-2 (Ours)** & 94.4(**+3.1**) & **+114** & 0.050(-71.6\%) & **+2.6M** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Effect of different attention modules in the three-layer SNN [48] on Gait-day (top) and Gesture (bottom) with \(dt=15,T=60\).
Figure 5: Spike features in ASA-SNN. (a) Spike features of different channels at the same timestep. (b) Spike features of the same channel at different timesteps.
Figure 6: When the ASA module is plugged in, the spike pattern shifts in both membrane potential distribution and spike feature.
_In the same or similar normal pattern of the same model, the larger the variance, the clearer the edge information of the learned feature._
Accordingly, Figure 6 shows how the spike feature follows the MPD. Specifically, in vanilla SNNs (Figure 6**a**), spike features and MPDs in Patterns A and B appear to be in a complementary relationship, corresponding to perfect focus on the background and object regions, respectively. If the PTD is kept constant, then as the variance gradually decreases, the information in the edge regions of the background and object begins to blur, as shown in Patterns C and D.
Then we compare the shifts in spike features between vanilla and ASA-SNNs. Obviously, the peak regions of the MPDs across all channels in ASA-SNNs are located to the left of the threshold (Figure 6**b**), i.e., \(PTD<0\). This indicates that a channel does not fire a lot of spikes after the ASA module has optimized the MPD. Moreover, the MPD is highly compact, which implies that the edge information of the spike feature is clearer.
By combining the two indicators PTD and variance, we can quickly determine what a "good" spike feature's MPD should be. For instance, as shown in Pattern B of ASA-SNNs, both the PTD and variance values should fall within an appropriate range, neither too high nor too low.
**Information loss.** As discussed in [14, 13], a good MPD can reduce information loss, which arises from the quantization error (QE) introduced by converting the analog membrane potential into binary spikes. Similar to [14, 13], we define the QE as the square of the difference between the membrane potential and its corresponding quantized spike value. The proposed ASA module concurrently optimizes the PTD and variance of the MPDs in vanilla SNNs, which significantly reduces the information loss caused by spike quantization (see Table 4). It is evident from a comparison of Figure 6**a** and **b** that each channel's MPD grows thinner (the variance becomes smaller). From the perspective of feature visualization, the reduced variance implies that the edge information in the spike feature is clearer; from the perspective of QE, it implies that the information loss becomes smaller.
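Under this definition, the layer-wise QE can be sketched in a few lines, assuming spikes are obtained by thresholding the membrane potential (a Heaviside step at the firing threshold):

```python
import torch

def quantization_error(u: torch.Tensor, v_th: float) -> torch.Tensor:
    """Mean squared gap between membrane potentials and their binary spikes."""
    s = (u >= v_th).float()          # spike quantization (Heaviside at v_th)
    return ((u - s) ** 2).mean()     # average QE over all neurons
```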
## 6 Conclusions
In this work, three key questions are raised to analyze the redundancy of SNNs, which is usually ignored in prior works. To answer these questions, we present a new perspective on the relationship between the spatio-temporal dynamics and the spike firing. These findings inspire us to develop a simple yet efficient advanced spatial attention module for SNNs, which harnesses the inherent redundancy in SNNs by optimizing the membrane potential distribution. Experimental results and analysis show that the proposed method can greatly reduce the spike firing and further improve performance. The new insight into SNN redundancy not only reveals the unique advantages of spike-based neuromorphic computing in terms of bio-plausibility, but may also bring some interesting enlightenment to follow-up research on efficient SNNs.
## Acknowledgement
This work was partially supported by National Science Foundation for Distinguished Young Scholars (62325603), and National Natural Science Foundation of China (62236009, U22A20103), and Beijing Natural Science Foundation for Distinguished Young Scholars (JQ21015).
|
2303.13213 | Stochastic Graph Neural Network-based Value Decomposition for MARL in
Internet of Vehicles | Autonomous driving has witnessed incredible advances in the past several
decades, while Multi-Agent Reinforcement Learning (MARL) promises to satisfy
the essential need of autonomous vehicle control in a wireless connected
vehicle networks. In MARL, how to effectively decompose a global feedback into
the relative contributions of individual agents belongs to one of the most
fundamental problems. However, the environment volatility due to vehicle
movement and wireless disturbance could significantly shape time-varying
topological relationships among agents, thus making the Value Decomposition
(VD) challenging. Therefore, in order to cope with this annoying volatility, it
becomes imperative to design a dynamic VD framework. Hence, in this paper, we
propose a novel Stochastic VMIX (SVMIX) methodology by taking account of
dynamic topological features during the VD and incorporating the corresponding
components into a multi-agent actor-critic architecture. In particular,
Stochastic Graph Neural Network (SGNN) is leveraged to effectively capture
underlying dynamics in topological features and improve the flexibility of VD
against the environment volatility. Finally, the superiority of SVMIX is
verified through extensive simulations. | Baidi Xiao, Rongpeng Li, Fei Wang, Chenghui Peng, Jianjun Wu, Zhifeng Zhao, Honggang Zhang | 2023-03-23T12:14:04Z | http://arxiv.org/abs/2303.13213v1 | # Stochastic Graph Neural Network-based Value Decomposition for MARL in Internet of Vehicles
###### Abstract
Autonomous driving has witnessed incredible advances in the past several decades, while Multi-Agent Reinforcement Learning (MARL) promises to satisfy the essential need of autonomous vehicle control in wireless connected vehicle networks. In MARL, how to effectively decompose a global feedback into the relative contributions of individual agents is one of the most fundamental problems. However, the environment volatility due to vehicle movement and wireless disturbance could significantly shape time-varying topological relationships among agents, thus making the Value Decomposition (VD) challenging. Therefore, in order to cope with this annoying volatility, it becomes imperative to design a dynamic VD framework. Hence, in this paper, we propose a novel Stochastic VMIX (SVMIX) methodology by taking account of dynamic topological features during the VD and incorporating the corresponding components into a multi-agent actor-critic architecture. In particular, Stochastic Graph Neural Network (SGNN) is leveraged to effectively capture underlying dynamics in topological features and improve the flexibility of VD against the environment volatility. Finally, the superiority of SVMIX is verified through extensive simulations.
Autonomous vehicle control, multi-agent reinforcement learning, value decomposition, stochastic graph neural network
## I Introduction
In recent years, the unprecedented development of artificial intelligence (AI) brings the self-driving vehicles [2, 3, 4] into the spotlight, and these intelligent vehicles form an Internet of Vehicles (IoV). Equipped with the capability for environmental perception [5], the vehicles try to respond to the observation of the surroundings and find an optimal trajectory between two sites [6, 7] with calibrated self-decision control and traffic congestion avoidance [8, 9]. Additionally, there emerges some research interest towards traffic signal control [10] or fleet control [11] in IoV as well. Nevertheless, as for distributed deployment of intelligent vehicles in IoV, such intricate cases (e.g., traffic control) require them to collaborate on top of reliable communication, which further constitutes a Multi-Agent System (MAS) and catalyses the research progress in a myriad of related studies (e.g., traffic control and prediction) [12, 13]. In particular, Multi-Agent Reinforcement Learning (MARL), which incorporates Deep Reinforcement Learning (DRL) into MAS, emerges by astutely learning through the trial-and-error interaction between multiples agents (i.e., vehicles) and the complicated IoV environment, and promises to yield smart policies for the formulated Markov Decision Process (MDP).
Traditionally, in Independent Q-Learning (IQL) [14], one typical kind of MARL, each individual agent learns its policy by regarding other agents as parts of the environment, and often experiences a non-stationary environment, as the agents always update their policies independently during learning. Consequently, the non-stationarity issue impedes the optimization of agents' policies. Instead, mutual communication shall be considered for cooperative agents to reach holistic consistency. On the other hand, it is natural for an MAS to only observe a global reward from the environment. Thus it becomes essential to tackle the agent heterogeneity and reward assignment issues [15].
In recent years, algorithms with Centralized Training and Decentralized Execution (CTDE) have become the centerpiece of MARL as they can somewhat handle the aforementioned two problems of IQL (i.e., non-stationary learning and node heterogeneity). As its name implies, CTDE can be divided into a training phase and an execution phase. In particular, in the training phase, agents can implicitly observe the global information of the environment in a centralized manner, so as to guarantee the communication among agents and tackle the non-stationarity problem. Subsequently, in the decentralized execution phase, each agent capably makes decisions based on its local observation only. Typical examples of CTDE include MADDPG, COMA, etc. [15, 16]. Nevertheless, as the number of agents increases, the centralized state-action value function in CTDE algorithms such as COMA suffers from an exponential increase of the action space as well as an awful growth of computational complexity. Therefore, Value Decomposition (VD)-based algorithms [17, 18, 19, 20] are proposed to decompose the centralized value function into individual value functions, and attempt to learn an implicit representation of the individual reward, thus decreasing the computational complexity in CTDE. Regretfully, these methods mostly concentrate on learning from agents with an invariant communication topology [20, 21], but founder under dynamic environments entailing uncertainty and perturbation due to fluctuating connections and dynamically changing topologies.
On the other hand, the advent of Graph Neural Network (GNN) [22] makes it possible to capture the underlying topological features of vehicle-formed graphs. For example, spectral-based GNNs [23] can filter external noise and extract features from the transformed frequency domain of signals. However, most of the spectral-based GNNs concentrate on graphs with a fixed topology.
## III Preliminaries and System Model
In this section, we briefly introduce the related knowledge of MDP and VD-based methods, and present the formulated system model to apply MARL for the autonomous vehicle control.
### _Preliminaries_
Generally, an MARL task is formulated as a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) [41], which is defined as the tuple \(\langle\mathcal{I},\mathcal{S},\mathcal{A},\mathcal{P},\Omega,\mathcal{R}, \mathcal{O},\gamma\rangle\). \(\mathcal{I}\) represents the set of \(N\) agents, \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space for a single agent, and \(\boldsymbol{\mathcal{A}}:=\mathcal{A}^{N}\) is the joint action space. The joint action \(\boldsymbol{a}=\{a^{(1)},a^{(2)},\cdots,a^{(N)}\}\) taken at the current state \(s\) results in the next state \(s^{\prime}\) through a transition of the environment according to the transition function \(\mathcal{P}(s^{\prime}|s,\boldsymbol{a}):\mathcal{S}\times\boldsymbol{\mathcal{A}}\times \mathcal{S}\rightarrow[0,1]\). Owing to its scant ability of perception against the colossal environment, agent \(i\) gets a local observation \(o^{(i)}\in\Omega\) via the observation function \(\mathcal{O}(o^{(i)}|s,i):\mathcal{S}\times\mathcal{I}\times\Omega\rightarrow[ 0,1]\) instead of \(s\) at each time-step. All agents share a global reward function \(\mathcal{R}(s,\boldsymbol{a}):\mathcal{S}\times\boldsymbol{\mathcal{A}}\rightarrow\mathbb{R}\), and \(\gamma\) denotes the discount factor.
Furthermore, in a Dec-POMDP, agent \(i\) adopts an action according to its policy \(\pi^{(i)}(\cdot|o^{(i)})\), which denotes the conditional probability distribution of taking action \(a^{(i)}\in\mathcal{A}\) on \(o^{(i)}\). \(\boldsymbol{a}\) is adopted according to the probability \(\boldsymbol{\pi}(\boldsymbol{a}|\boldsymbol{o})=\prod_{i=1}^{N}\pi^{(i)} \left(a^{(i)}|o^{(i)}\right)\) where \(\boldsymbol{\pi}\) denotes the joint policy and \(\boldsymbol{o}=\{o^{(1)},\cdots,o^{(N)}\}\) implies the joint observation. In order to coordinate all agents for maximizing the discounted accumulated return \(J=\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}\left(s_{t},\boldsymbol{a}_{t}\right)\), a joint state-action value function is defined as the expected discounted accumulated return starting from \(s_{t}\) and \(\boldsymbol{a}_{t}\) of time \(t\), that is
\[Q^{\boldsymbol{\pi}}(s_{t},\boldsymbol{a}_{t})=\mathbb{E}_{\mathcal{P}}\left[ \sum_{k=t}^{\infty}\gamma^{k-t}\mathcal{R}\left(s_{k},\boldsymbol{a}_{k} \right)|s_{t},\boldsymbol{a}_{t},\boldsymbol{\pi}\right], \tag{1}\]
where \(\mathbb{E}(\cdot)\) denotes an expectation operator. It can be naturally observed that the function has an exponential computational complexity due to the joint action \(\boldsymbol{a}_{t}\).
To deal with this scalability issue while pursuing the optimal joint policy \(\boldsymbol{\pi}^{*}=\arg\max_{\boldsymbol{\pi}}Q^{\boldsymbol{\pi}}(s_{t}, \boldsymbol{a}_{t})\), instead of adopting a joint state-action value function as in (1), VD-based methods [17, 18, 19, 20] exhibit an individual state-action value function \(Q^{\boldsymbol{\pi}^{(i)}}(o^{(i)}_{t},a^{(i)}_{t})=\mathbb{E}_{\mathcal{P}} \left[\sum_{k=t}^{\infty}\gamma^{k-t}r^{(i)}_{k}|o^{(i)}_{t},a^{(i)}_{t},\pi^{ (i)}\right]\) for agent \(i\), where \(r^{(i)}_{k}\) is a learned implicit representation of the individual reward decomposed from \(\mathcal{R}(s_{k},\boldsymbol{a}_{k})\) for agent \(i\). Moreover, the individual state-action value function reflects different contributions among agents.
Likewise, the expected \(J\) from \(s_{t}\) can also be represented by the joint state value function
\[V^{\boldsymbol{\pi}}(s_{t})=\mathbb{E}_{\mathcal{P},\boldsymbol{a}_{t}\sim \boldsymbol{\pi}(\cdot|o_{t})}\left[\sum_{k=t}^{\infty}\gamma^{k-t}\mathcal{R} \left(s_{k},\boldsymbol{a}_{k}\right)|s_{t},\boldsymbol{\pi}\right]. \tag{2}\]
Essentially, the joint state value function \(V^{\boldsymbol{\pi}}(s_{t})\) is a comprehensive evaluation of \(J\) at \(s_{t}\), so it is commonly used as a baseline for calculating the advantage value of different actions at \(s_{t}\)[42]. Using VD, we can also represent the set of individual state value functions as \(V^{\pi^{(i)}}(o^{(i)}_{t})=\mathbb{E}_{\mathcal{P},a^{(i)}_{t}\sim \pi^{(i)}(\cdot|o^{(i)}_{t})}\left[\sum_{k=t}^{\infty}\gamma^{k-t}r^{(i)}_{k}|o^ {(i)}_{t},\pi^{(i)}\right]\), which implies a more reasonable approximation of the joint value function and speeds up the training.
### _System Model_
We have proposed a mixed autonomy traffic system model that permits vehicles to flow in and out of the road, giving rise to a variable number of vehicles. At time-step \(t\), there are in total \(M_{t}\) vehicles (including \(N\) DRL-driven vehicles, with \(M_{t}\geq N\)) as well as intersections or ramps, as depicted in Fig. 1. Vehicle \(i\) has information such as its own velocity \(v^{(i)}_{t}\in\mathbb{R}\) and position \(z^{(i)}_{t}=(x^{(i)}_{t},y^{(i)}_{t})\in\mathbb{R}^{2}\) at time \(t\). Meanwhile, \(v^{(i)}_{t}\) is controlled via the acceleration \(u^{(i)}_{t}\in\mathbb{R}\) decided by vehicle \(i\) itself. It is also necessary for vehicles to obey some traffic rules. For example, at an intersection, a vehicle has to consider an appropriate passing order as well as control its direction and velocity so as to avoid collisions. Based on the above definitions, we specify the elements of the Dec-POMDP as follows.
**State and observation.** In our scenario, each vehicle can receive the information of other accessible vehicles through Vehicle-to-Vehicle (V2V) connections. Without loss of generality, assuming that vehicle \(i\) is only able to communicate with vehicles \(j\) and \(k\) at time \(t\), the observation of vehicle \(i\) can be represented as \(o^{(i)}_{t}=\{v^{(i)}_{t},z^{(i)}_{t},v^{(j)}_{t},z^{(j)}_{t},v^{(k)}_{t},z^{(k)}_ {t}\}\). The state is represented by \(s_{t}=\{v^{(1)}_{t},z^{(1)}_{t},v^{(2)}_{t},z^{(2)}_{t},\cdots,v^{(M_{t})}_{t},z ^{(M_{t})}_{t}\}\), which consists of the velocities and positions of all the vehicles. Intuitively, the observation \(o^{(i)}_{t}\) is a part of \(s_{t}\), which can be mathematically determined by \(\mathcal{O}(o^{(i)}_{t}|s_{t},i)\).
**Action.** At time \(t\), vehicle \(i\) adopts an action represented as \(a^{(i)}_{t}=\{u^{(i)}_{t},q^{(i)}_{t}\}\), where \(q^{(i)}_{t}\) is an extra action that indicates whether the vehicle changes to another lane or direction. In particular, the \(N\) DRL-driven vehicles select their actions according to the learned policies, while the other vehicles follow fixed policies.

Fig. 1: The traffic control scenario with ramps, intersections and vehicles with a constrained communication range.
**Reward.** The goal of MARL is for each vehicle to maintain a velocity as close to its desired velocity \(v_{d}^{(i)}\) as possible on the basis of no undesirable collisions. Accordingly, the reward \(r_{t}:=\mathcal{R}(s_{t},\mathbf{a}_{t})\) is set as
\[r_{t}=\begin{cases}\frac{V_{d}-\sqrt{\sum_{i=1}^{M_{t}}(v_{d}^{(i)}-v_{t}^{(i)} )^{2}}}{V_{d}+C_{1}}\\ -\alpha D(z^{(1)},\cdots,z^{(M_{t})}),&\text{if no collision};\\ 0,&\text{otherwise},\end{cases} \tag{3}\]
where \(V_{d}=\sqrt{\sum_{i=1}^{M_{t}}\left(v_{d}^{(i)}\right)^{2}}\) and \(C_{1}\) is a very small constant. \(D(z^{(1)},\cdots,z^{(M_{t})})\) is the penalty term with a safety coefficient \(\alpha\), which encourages each vehicle to hold a safe distance from its preceding vehicle. (3) implies that MARL encourages the vehicle to choose a satisfactory velocity that deviates trivially from \(v_{d}^{(i)}\) while sparing no effort to avoid collisions at time \(t\). Notably, all the \(M_{t}\) vehicles participate in the calculation of the reward.
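A minimal Python sketch of the reward in Eq. (3), assuming the distance penalty \(D(\cdot)\) is supplied externally (e.g., by the simulator); the function and argument names, as well as the concrete value of \(C_{1}\), are our assumptions.

```python
import numpy as np

def global_reward(v, v_des, penalty_d, alpha, c1=1e-3, collision=False):
    """Eq. (3): v and v_des are length-M_t arrays of actual/desired speeds."""
    if collision:
        return 0.0
    v_d = np.linalg.norm(v_des)              # V_d
    deviation = np.linalg.norm(v_des - v)    # joint speed deviation
    return (v_d - deviation) / (v_d + c1) - alpha * penalty_d
```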
On the other hand, the wireless-connected intelligent vehicles in IoV constitute a time-varying undirected graph \(\mathcal{G}_{t}=\{\mathcal{V},\mathcal{E}_{t}\}\), where each agent is denoted as a vertex and a communication link are regarded as an edge. In other words, \(\mathcal{V}=\{1,2,\cdots,N\}\) and an undirected edge in \(\mathcal{E}_{t}\) corresponds to a communication link between two vehicles. It is worth recalling that the mobility of vehicles will frequently change the connections among vehicles due to the communication range and thus induce topological changes. For example, if a vehicle meets another vehicle at an intersection, it's obvious that a strong connection between them could be set up in order to avoid collisions. Additionally, the time-varying amount of vehicles also results in the fluctuation of the number of vertices in \(\mathcal{G}_{t}\). For simplicity and without losing the rationality, we will treat \(\mathcal{G}_{t}\) as a graph with fixed vertices.
### _Problem Formulation_
Following the system model, we expect the DRL-driven vehicles to maintain a regular speed close to their desired velocities on the premise of avoiding collisions. Therefore, we propose a system utility function \(U\) as the objective of policy optimization, which can be mathematically formulated as
\[\max_{\mathbf{\pi}}\ U=\mathbb{E}_{t}\left[r_{t}|\mathbf{\pi}\right]. \tag{4}\]
In other words, the agents aim to learn a joint policy \(\mathbf{\pi}\) under the guidance of maximizing the system utility (i.e., the expected reward). Notably, for CTDE, the optimization of \(\mathbf{\pi}\) is equivalent to learning the optimal individual policies of the agents [17]. Therefore, (4) implies devising an appropriate VD method for CTDE in a volatile environment, so as to derive the individual policy of each agent.

Fig. 2: The illustration of the SVMIX algorithm for autonomous vehicle control.
## IV Architecture and Design of SVMIX
In this section, we describe the architecture and components of the proposed SVMIX algorithm as depicted in Fig. 2. First, we use PPO as the basis of each individual DRL agent. Afterwards, in order to decompose the proper individual value functions of the \(N\) agents from the total (or joint) value function through the VMIX network, we highlight how to leverage SGNN[25] to effectively capture the topological features from the dynamic graph \(\mathcal{G}_{t}\).
Next we will introduce these essential components (i.e., PPO, SGNN and VMIX) in SVMIX as well as the training procedure. As SGNN is the centerpiece of our algorithm, we also mathematically explain that SGNN can help PPO agents to learn the average optimal solutions.
### _PPO-based RL Agents_
As one of the most famous Policy Gradient (PG) algorithms extensively used for continuous action control, PPO is composed of an actor network and a critic network, where the former is responsible for taking actions in accordance with its learned policy while the latter produces an approximated (individual) value function. Notably, in this paper, a DRL-driven vehicle and a PPO-based agent are considered equivalent.
As illustrated in the lower part of Fig. 2, for agent \(i\), the actor outputs the mean \(\mu_{i}(o_{t}^{(i)})\) and the standard deviation \(\sigma_{i}(o_{t}^{(i)})\) of a normal distribution \(\mathcal{N}(\mu_{i}(o_{t}^{(i)}),\sigma_{i}^{2}(o_{t}^{(i)}))\) (i.e. the policy \(\pi^{(i)}(\cdot|o_{t}^{(i)})\)) through a Multi-Layer Perception (MLP) taking \(o_{t}^{(i)}\) as input with nonlinear activation functions (omitted from Fig. 2 for simplicity). Sequentially, following \(\pi^{(i)}(\cdot|o_{t}^{(i)})\), the PPO agent samples an action \(a_{t}^{(i)}\) from \(\mathcal{N}(\mu_{i}(o_{t}^{(i)}),\sigma_{i}^{2}(o_{t}^{(i)}))\) at time \(t\) for exploration. Besides, the critic approximates the state value function and produces a state value \(V^{(i)}(o_{t}^{(i)})\) through another MLP.
After that, the joint action \(\mathbf{a}_{t}\) is taken and the environment correspondingly enters the next state \(s_{t+1}\), thus shifting the next joint observation to \(\mathbf{o}_{t+1}\) and yielding a global reward \(r_{t}\). Notably, during this procedure, to better represent the total value function related to the dynamic graph, individual state values are further processed through SGNN-based feature capturing and aggregation.
### _Topological Feature Capturing and Filtering by SGNN_
As mentioned before, the vehicles can form an undirected graph \(\mathcal{G}_{t}=\{\mathcal{V},\mathcal{E}_{t}\}\). Thus motivated by the extraordinary achievements of GNN to tackle the topology issues, we propose to apply graph signal processing techniques for feature capturing, by treating each individual state value \(V^{(i)}(o_{t}^{(i)})\) as the signal of one corresponding vertex in \(\mathcal{G}_{t}\). Furthermore, we argue that SGNN makes a good complement to deal with the dynamic graph in traffic control.
The upper-right part of Fig. 2 demonstrates that SGNN consists of stochastic graph filters and the READOUT mechanism for output integration. SGNN mainly adopts stochastic graph filters to fit the environment volatility based on the Random Edge Sampling (RES) model [24] with the underlying graph \(\mathcal{G}^{+}=\{\mathcal{V},\mathcal{E}^{+}\}\), wherein \(\mathcal{E}^{+}\) encompasses all sets of edges \(\mathcal{E}_{t}\) for all \(t\) (i.e., \(\mathcal{E}_{t}\subseteq\mathcal{E}^{+},\forall t\)). Besides, we define the adjacent matrix \(\mathbf{A}\in\mathbb{R}^{N}\times\mathbb{R}^{N}\) where an entry \(\mathbf{A}_{m,n}\) is nonzero if and only if \((m,n)\in\mathcal{E}^{+}\), as well as the Laplacian matrix \(\mathbf{L}=\operatorname{diag}(\mathbf{A}\mathbf{1})-\mathbf{A}\) for \(\mathcal{G}^{+}\). In particular, a random sub-graph \(\mathcal{G}_{k}^{+}=\{\mathcal{V},\mathcal{E}_{k}^{+}\}\) will be constructed by holding the original vertices of \(\mathcal{G}^{+}\) but sampling the edges from \(\mathcal{E}^{+}\) following a Bernoulli distribution with a success probability \(p\), that is,
\[\Pr\left[(m,n)\in\mathcal{E}_{k}^{+}\right]=p,\quad\text{for all }(m,n)\in \mathcal{E}^{+}. \tag{5}\]
Through RES, we can emulate dynamic graphs where the communication links (edges) vary frequently. Therefore, to exert the advantage of centralized training, we actually let \(\mathcal{G}^{+}\) be a generalized fixed graph encompassing all potential connections between vertices, so as to ensure enough exploration.
In detail, Fig. 3 demonstrates the corresponding structure of a \(K\)-order stochastic graph filter. Based on the input \(\mathbf{V}(\mathbf{o}_{t})=\{V^{(1)}(o_{t}^{(1)}),\cdots,V^{(N)}(o_{t}^{(N)})\}\in \mathbb{R}^{N}\) from PPO agents as well as the stochastic sub-graphs \(\mathcal{G}_{k}^{+}\) with the adjacent matrix \(\mathbf{A}_{k}\), the information among vertices recursively diffuses as \(\mathbf{V}_{k}(\mathbf{o}_{t})=\mathbf{S}_{k}\mathbf{V}_{k-1}(\mathbf{o}_{t}),k\in\{1,\cdots,K\}\) with \(\mathbf{V}_{0}(\mathbf{o}_{t})=\mathbf{V}(\mathbf{o}_{t})\) and either \(\mathbf{S}_{k}=\mathbf{A}_{k}+\mathbf{I}\) (where \(\mathbf{I}\) denotes the identity matrix) or \(\mathbf{S}_{k}=\mathbf{L}_{k}\). Therefore, having \(\mathbf{S}_{0}=\mathbf{I}\) and unrolling the recursion, the intermediate signal can be represented as
\[\begin{split}\mathbf{V}_{k}(\mathbf{o}_{t})=\mathbf{S}_{k}\mathbf{V}_{k-1}( \mathbf{o}_{t})&=(\mathbf{S}_{k}\mathbf{S}_{k-1}\cdots\mathbf{S}_{0}) \mathbf{V}(\mathbf{o}_{t})\\ &:=\mathbf{S}_{k:0}\mathbf{V}(\mathbf{o}_{t}),\end{split} \tag{6}\]
where \(\mathbf{S}_{k:0}\) is defined as \(\mathbf{S}_{k:0}:=\mathbf{S}_{k}\mathbf{S}_{k-1}\cdots\mathbf{S}_{0}\). Therefore, \(\mathbf{V}_{k}(\mathbf{o}_{t})\) aggregates the information of \(\mathbf{V}(\mathbf{o}_{t})\) through the random sequence of sub-graphs \(\mathcal{G}_{1}^{+},\cdots,\mathcal{G}_{k}^{+}\). Ultimately, the output \(\mathbf{u}\) of the \(K\)-order stochastic graph filter with filter coefficients \(h_{k}\) can be formulated as
\[\mathbf{u}=\sum_{k=0}^{K}h_{k}\mathbf{V}_{k}(\mathbf{o}_{t})=\sum_{k=0}^{K}h_{k}\mathbf{S} _{k:0}\mathbf{V}(\mathbf{o}_{t}):=\mathbf{H}\left(\mathbf{S}_{K:0}\right)\mathbf{V}(\mathbf{o}_ {t}). \tag{7}\]
Specifically, \(\mathbf{H}\left(\mathbf{S}_{K:0}\right)\) denotes the stochastic graph filter with learnable parameters \(h_{0},\cdots,h_{K}\).
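The recursion in Eqs. (5)-(7) maps directly onto a few lines of code; below is a minimal PyTorch sketch assuming \(\mathbf{S}_{k}=\mathbf{A}_{k}+\mathbf{I}\), with RES edge sampling applied independently at every order (function and variable names are ours).

```python
import torch

def stochastic_graph_filter(v, adj, h, p):
    """Sketch of Eqs. (5)-(7): v in R^N, adj is the (float) adjacency of G+.

    h holds the K+1 filter coefficients h_0..h_K; every order resamples
    edges via RES with keep-probability p and uses S_k = A_k + I.
    """
    n = v.shape[0]
    eye = torch.eye(n)
    out = h[0] * v                       # k = 0 term, since S_0 = I
    vk = v.clone()
    for k in range(1, len(h)):
        keep = torch.bernoulli(torch.full((n, n), p))
        keep = torch.triu(keep, 1)       # sample each undirected edge once
        a_k = adj * (keep + keep.T)      # random sub-graph A_k (symmetric)
        vk = (a_k + eye) @ vk            # diffuse: V_k = S_k V_{k-1}
        out = out + h[k] * vk            # accumulate h_k S_{k:0} v
    return out
```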
Fig. 3: A stochastic graph filter of \(K\) orders. Here we only show how the information of the green vertex is updated via green edges.

Furthermore, we can use \(F\) parallel stochastic graph filters \(\mathbf{H}_{f}\left(\mathbf{S}_{K:0}\right)\) with coefficients \(h_{fk}\in\mathcal{H},\forall f\in\{1,\cdots,F\},k\in\{1,\cdots,K\}\) to process \(\mathbf{V}(\mathbf{o}_{t})\), where \(\mathcal{H}\) is the set of all the filter coefficients. In particular, for the \(f\)-th stochastic graph filter, the output can be given on top of (7) as
\[\mathbf{V}_{f}^{\prime}(\mathbf{o}_{t})=\sigma\left[\mathbf{u}_{f}\right] =\sigma\left[\mathbf{H}_{f}\left(\mathbf{S}_{K:0}\right)\mathbf{V}(\mathbf{o}_{ t})\right] \tag{8}\] \[=\sigma\left[\sum_{k=0}^{K}h_{fk}\mathbf{S}_{f:k:0}\mathbf{V}(\mathbf{o}_ {t})\right],\]
where \(\sigma(\cdot)\) is the nonlinear activation function ReLU.
Subsequently, the mechanism \(\mathtt{READOUT}:\mathbb{R}^{N}\times\mathbb{R}^{F}\rightarrow\mathbb{R}^{N}\) is implemented to integrate the outputs of the stochastic graph filters through a two-layer MLP as
\[\mathbf{V}_{\mathrm{agg}}(\mathbf{o}_{t})=\mathtt{READOUT}(\{\mathbf{V}_{1}^{\prime}(\bm {o}_{t}),\cdots,\mathbf{V}_{F}^{\prime}(\mathbf{o}_{t})\}). \tag{9}\]
Finally, SGNN captures the dynamic topological features through stochastic graph filters and random sub-graphs generated by RES with learnable filter coefficients \(\mathcal{H}\). That is, the filtered state values \(\mathbf{V}_{\mathrm{agg}}(\mathbf{o}_{t})\) involve the dynamic topological features. Notably, consistent with the methodology of CTDE, SGNN requires the information of every vertex as input in the centralized training phase, while it can be neglected in the decentralized execution phase.
### _The Role of SGNN_
Intuitively, RES in SGNN brings substantial uncertainty to capture underlying topological features. Therefore, it's necessary to clarify the role of SGNN in SVMIX with theoretical analysis. Concentrating on the stochastic graph filter defined in (7) on the basis of RES, for a given graph \(\mathcal{G}^{+}\) and an input \(\mathbf{V}(\mathbf{o}_{t})\), the expected output can be given by
\[\mathbb{E}\left[\mathbf{u}\right]=\mathbb{E}\left[\mathbf{H}\left(\mathbf{S}_{K: 0}\right)\mathbf{V}(\mathbf{o}_{t})\right]=\mathbb{E}\left[\sum_{k=0}^{K}h_{k}\mathbf{ S}_{k:0}\right]\mathbf{V}(\mathbf{o}_{t}). \tag{10}\]
Let \(\mathbf{S}_{k}=\overline{\mathbf{S}}+\Delta_{k}\), where \(\overline{\mathbf{S}}=\mathbb{E}\left[\mathbf{S}_{k}\right]\) and \(\Delta_{k}\) represents the error matrix; we further get \(\mathbb{E}\left[\Delta_{k}\right]=0\) and
\[\mathbb{E}\left[\sum_{k=0}^{K}h_{k}\mathbf{S}_{k:0}\right] =\sum_{k=0}^{K}h_{k}\mathbb{E}\left[\mathbf{S}_{k}\mathbf{S}_{k-1 }\cdots\mathbf{S}_{0}\right] \tag{11}\] \[=\sum_{k=1}^{K}h_{k}\mathbb{E}\left[\left(\overline{\mathbf{S}}+ \Delta_{k}\right)\cdots\left(\overline{\mathbf{S}}+\Delta_{1}\right)\right]+h _{0}\mathbf{I}\] \[=\sum_{k=1}^{K}h_{k}\overline{\mathbf{S}}^{k}+h_{0}\mathbf{I}\]
as \(\mathbb{E}\left[\Delta_{m}\Delta_{n}\right]=\mathbb{E}\left[\Delta_{m}\right] \mathbb{E}\left[\Delta_{n}\right]=0\) if \(m\neq n\) given the mutual independence between the samplings.
1. If \(\mathbf{S}_{k}=\mathbf{L}_{k}\), for an entry \(\overline{\mathbf{S}}_{m,n}\) in \(\overline{\mathbf{S}}\), we have \[\overline{\mathbf{S}}_{m,n}=\begin{cases}-p\mathbf{A}_{m,n},&m\neq n;\\ pd_{m},&\text{else},\end{cases}\] (12) where \(d_{m}\) denotes the degree of vertex \(m\). So \(\overline{\mathbf{S}}=p\mathbf{L}\), and finally we get \(\mathbb{E}\left[\mathbf{u}\right]=\left(\sum_{k=1}^{K}h_{k}p^{k}\mathbf{L}^{k}+h _{0}\mathbf{I}\right)\mathbf{V}(\mathbf{o}_{t})\). As \(\mathbf{L}\) is a real symmetric matrix, it can be transformed into \(\mathbf{L}=\mathbf{U}\Sigma\mathbf{U}^{T}\) through eigendecomposition, where \(\Sigma=\mathrm{diag}\left(\lambda_{1},\lambda_{2},\cdots,\lambda_{N}\right)\) with \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{N}\). Notably, \(\lambda_{N}\equiv 0\) as \(\mathbf{L}\) always has an eigenvector \(\mathbf{1}\) corresponding to the eigenvalue \(0\), and the algebraic multiplicity of \(\lambda_{N}\) equals the number of connected components in \(\mathcal{G}^{+}\)[43]. Considering the influence of the filtering coefficients \(h_{k}\) and probability \(p\), \(\sum_{k=1}^{K}h_{k}p^{k}\mathbf{L}^{k}+h_{0}\mathbf{I}\) will be a full-rank matrix (e.g., if \(h_{k}>0,\forall k=0,1,\cdots,K\), the eigenvalues will be \(\tilde{\lambda}_{1}\geq\tilde{\lambda}_{2}\geq\cdots\geq\tilde{\lambda}_{N}=h _{0}>0\)).
2. If \(\mathbf{S}_{k}=\mathbf{A}_{k}+\mathbf{I}\), we have \(\overline{\mathbf{S}}_{m,n}\) as \[\overline{\mathbf{S}}_{m,n}=\mathbb{E}\left[\mathbf{A}_{k}+\mathbf{I}\right]_ {m,n}=\begin{cases}p\mathbf{A}_{m,n},&m\neq n;\\ 1,&\text{else}.\end{cases}\] (13) We can also express the real symmetric matrix as \(\overline{\mathbf{S}}=\mathbf{U}\Sigma\mathbf{U}^{T}\) via eigendecomposition, where \(\Sigma=\mathrm{diag}\left(\lambda_{1},\lambda_{2},\cdots,\lambda_{N}\right)\). Assuming that \(\mathcal{G}^{+}\) is a fully connected graph, there will be \(\lambda_{1}=1+(N-1)p\) and \(\lambda_{2}=\lambda_{3}=\cdots=\lambda_{N}=1-p\). Thus, choosing a feasible \(p\) and taking \(h_{k}\) into account, \(\sum_{k=1}^{K}h_{k}\overline{\mathbf{S}}^{k}+h_{0}\mathbf{I}\) will be a full-rank matrix as well.
Based on the above analysis, we have proved that \(\mathbb{E}\left[\mathbf{H}\left(\mathbf{S}_{K:0}\right)\right]\) can be full-rank with appropriate filtering coefficients \(h_{k}\) and \(p\). Hence, for a given expected output \(\mathbb{E}\left[\mathbf{u}\right]\), there must be a nontrivial solution \(\mathbf{V}(\mathbf{o}_{t})\) that satisfies (10) with learned \(\mathcal{H}\).
Together with (8) and (9) which show the relationship between \(\mathbf{u}\) and \(\mathbf{V}_{\mathrm{agg}}(\mathbf{o}_{t})\), it becomes reasonable to use SGNN for feature capturing because \(\mathbf{V}(\mathbf{o}_{t})\) can finally be mapped to \(V_{\mathrm{agg}}^{(i)}(\mathbf{o}_{t})\) where
\[V_{\mathrm{agg}}^{(i)}(\mathbf{o}_{t}) =\left[\mathbf{V}_{\mathrm{agg}}(\mathbf{o}_{t})\right]_{i} \tag{14}\] \[=\mathbb{E}_{\mathcal{P},\mathbf{a}_{t}\sim\mathbf{\pi}\left(\cdot\left|\bm {o}_{t}\right.\right)}\left[\sum_{k=t}^{\infty}\gamma^{k-t}r_{k}^{(i)}|\mathbf{o}_{t}, \mathbf{\pi}\right].\]
In other words, the individual value functions take the observations and policies of other agents as well as the dynamics of topology into account in an effort to handle the nonstationarity through stochastic graph filters after sufficient training.
By the way, as \(\mathcal{G}^{+}\) is generalized for all possible \(\mathcal{G}_{t}\) through RES during training, SGNN is able to explore the topologies that may appear, thus learning stable graph signal filters. Therefore, SVMIX ultimately obtains the capability of anti-disturbance.
### _Value Decomposition with VMIX_
Similar to the mixing network in QMIX[17], the VMIX network leverages the filtered signals from SGNN and evaluates the contribution of each agent to generate the aggregated total state value \(V_{\mathrm{tot}}\left(\mathbf{o}_{t}\right)\) on the basis of the global state \(s_{t}\). VMIX consists of multiple MLPs which take \(s_{t}\) as input and output the weights and biases for linear transformations. Finally, the aggregated state value can be formulated as
\[V_{\mathrm{tot}}(\mathbf{o}_{t})=\mathbf{w}_{2}^{\top}\mathbf{V}_{\mathrm{agg}}^{\prime} \left(\mathbf{o}_{t}\right)+b_{2}, \tag{15}\]
where \(\mathbf{V}_{\mathrm{agg}}^{\prime}(\mathbf{o}_{t})=f\left[\mathbf{W}_{1}\mathbf{V}_{\mathrm{agg}} \left(\mathbf{o}_{t}\right)+\mathbf{b}_{1}\right]\). Here, \(\mathbf{W}_{1}\in\mathbb{R}^{C\times N}\), \(\mathbf{b}_{1}\in\mathbb{R}^{C}\), \(\mathbf{w}_{2}\in\mathbb{R}^{C}\), and \(b_{2}\in\mathbb{R}\) are the generated hyperparameters indicating non-negative weights and biases from MLPs taking \(s_{t}\) as input, while the superscript \(\top\) denotes the transpose. Besides, \(f(\cdot)\) denotes the nonlinear ELU function
and enables VMIX to produce a nonlinear total value function \(V_{\rm tot}\left(\cdot\right)\).
Thus via VMIX, the individual state values \(\mathbf{V}_{\rm agg}(\mathbf{o}_{t})\) with captured topological features from SGNN are further integrated into the total state value \(V_{\rm tot}(\mathbf{o}_{t})\) by nonlinear transformation, where the weights and biases learn the contribution of each agent under the guidance of the global state \(s_{t}\). Conversely, \(V^{(i)}(o_{t}^{(i)})\) is decomposed from \(V_{\rm tot}(\mathbf{o}_{t})\) via \(\mathbf{V}_{\rm agg}(\mathbf{o}_{t})\). Furthermore, through applying gradient descent and back propagation, the parameters of each critic will be updated and thus an appropriate \(V^{(i)}(\cdot)\) will be learned.
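A minimal PyTorch sketch of the mixing step in Eq. (15), with hypernetworks generating state-conditioned weights whose non-negativity is enforced by an absolute value (as in QMIX); the layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class VMIX(nn.Module):
    """Mix individual state values into V_tot, conditioned on the state s_t."""

    def __init__(self, n_agents: int, state_dim: int, hidden: int = 32):
        super().__init__()
        self.hyper_w1 = nn.Linear(state_dim, hidden * n_agents)
        self.hyper_b1 = nn.Linear(state_dim, hidden)
        self.hyper_w2 = nn.Linear(state_dim, hidden)
        self.hyper_b2 = nn.Linear(state_dim, 1)
        self.n, self.c = n_agents, hidden

    def forward(self, v_agg: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        w1 = self.hyper_w1(s).abs().view(self.c, self.n)   # non-negative W1
        b1 = self.hyper_b1(s).view(self.c)
        hid = nn.functional.elu(w1 @ v_agg + b1)           # inner layer, Eq. (15)
        w2 = self.hyper_w2(s).abs().view(self.c)           # non-negative w2
        return w2 @ hid + self.hyper_b2(s).view(())        # scalar V_tot
```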
### _The Training of SVMIX_
The SVMIX network aims to learn a joint policy \(\mathbf{\pi}(\cdot|\mathbf{o}_{t};\theta)=\prod_{i=1}^{N}\pi^{(i)}\left(\cdot|o_{t}^{( i)};\theta^{(i)}\right)\) (i.e., the normal distributions \(\mathcal{N}(\mu_{i}(o_{t}^{(i)}),\sigma_{i}^{2}(o_{t}^{(i)}))\) for \(i=1,\cdots,N\)) and a total state value function \(V_{\rm tot}(\cdot;\phi,\eta,\omega)\), where \(\theta=\langle\theta^{(1)},\cdots,\theta^{(N)}\rangle\) and \(\phi=\langle\phi^{(1)},\cdots,\phi^{(N)}\rangle\) are the learnable parameters of the actors and critics respectively, while \(\eta\) and \(\omega\) indicate the parameters of SGNN and VMIX, respectively. According to \(\mathbf{\pi}(\cdot|\mathbf{o}_{t};\theta)\), the joint action \(\mathbf{a}_{t}\) is adopted. In particular, \(\mathbf{a}_{t}\) indicates the actions taken by the vehicles at time \(t\), while \(\mathbf{\pi}(\mathbf{a}_{t}|\mathbf{o}_{t};\theta)\) and \(V_{\rm tot}(\mathbf{o}_{t};\phi,\eta,\omega)\) participate in the calculation of the loss functions with the global reward \(r_{t}\). Inspired by [26], the loss functions are defined as
\[L(\theta)=-\mathbb{E}_{t}\left[\min\left(\rho_{t;\theta}A_{t; \phi,\eta,\omega},{\rm clip}\left(\rho_{t;\theta},\epsilon\right)A_{t;\phi, \eta,\omega}\right)\right], \tag{16}\] \[L(\phi,\eta,\omega)=\frac{1}{2}A_{t;\phi,\eta,\omega}^{2}, \tag{17}\]
where
\[\rho_{t;\theta}=\frac{\mathbf{\pi}(\mathbf{a}_{t}|\mathbf{o}_{t};\theta)}{\mathbf{\pi}(\mathbf{a} _{t}|\mathbf{o}_{t};\theta_{\rm old})} \tag{18}\]
and
\[A_{t;\phi,\eta,\omega}=\sum_{k=t}^{T}\gamma^{k-t}r_{k}-V_{\rm tot}(\mathbf{o}_{t}; \phi,\eta,\omega). \tag{19}\]
Here \(T\) is the length of the episode, the clip function \({\rm clip}(\cdot)\) removes the incentive for moving the ratio \(\rho_{t}\) outside of the interval \([1-\epsilon,1+\epsilon]\) and \(A_{t}\) is the advantage value that evaluates the current policy in the form of Monte-Carlo error. Notably, for training with batches that include time-related samples, (19) will further be slightly modified as
\[A_{j;\phi,\eta,\omega}=\sum_{k=j}^{|\mathcal{B}|}\gamma^{k-j}r_{k} +\gamma^{|\mathcal{B}|-j}V_{\rm tot}(\mathbf{o}_{|\mathcal{B}|+1}; \phi,\eta,\omega) \tag{20}\] \[-V_{\rm tot}(\mathbf{o}_{j};\phi,\eta,\omega),\]
where \(|\mathcal{B}|\) is the batch size and \(j=\{1,\cdots,|\mathcal{B}|\}\) is the index of samples. If the episode ends, (20) will degenerate into the form of (19) as the approximation of subsequent rewards is no longer needed. Finally, \(\theta\) and \(\phi,\eta,\omega\) will be updated through gradient descent to minimize (16) and (17) separately.
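The clipped objective and the critic loss in Eqs. (16)-(20) can be sketched as follows over one time-ordered batch, given per-sample log-probabilities of the joint action and the mixed value \(V_{\rm tot}\); tensor shapes and names are assumptions.

```python
import torch

def svmix_losses(logp, logp_old, v_tot, rewards, gamma, eps, v_boot=0.0):
    """Sketch of Eqs. (16)-(20) over one batch ordered in time.

    logp, logp_old: log pi(a_j | o_j) under current / old actors, shape (B,).
    v_tot: V_tot(o_j) from VMIX, shape (B,). v_boot bootstraps V_tot(o_{B+1})
    and is 0 when the episode ended inside the batch.
    """
    B = rewards.shape[0]
    returns = torch.empty(B)
    acc = v_boot
    for j in reversed(range(B)):                  # discounted returns, Eq. (20)
        acc = rewards[j] + gamma * acc
        returns[j] = acc
    adv = returns - v_tot                         # Monte-Carlo advantage
    ratio = torch.exp(logp - logp_old)            # Eq. (18)
    actor_loss = -torch.min(
        ratio * adv.detach(),
        torch.clamp(ratio, 1 - eps, 1 + eps) * adv.detach(),
    ).mean()                                      # Eq. (16)
    critic_loss = 0.5 * (adv ** 2).mean()         # Eq. (17)
    return actor_loss, critic_loss
```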
```
1:Initialize the underlying graph \(\mathcal{G}^{+}=\{\mathcal{V},\mathcal{E}^{+}\}\), the number of agents \(N\), the batch size \(N_{\rm batch}\), the length of episodes \(T\), the number of PPO epochs \(N_{\rm epoch}\), discount factor \(\gamma\) and constant \(\epsilon\);
2:Initialize the actor and critic networks in PPO agents, the SGNN network and the VMIX network with random parameters \(\theta\), \(\phi\), \(\eta\) and \(\omega\) respectively;
3:Initialize the batch \(\mathcal{B}\leftarrow\varnothing\) and the batch sample counter as counter\(\gets 0\);
4:for every episode do
5:Initialize the ending flag of the episode as done\(\gets 0\);
6:for \(t\gets 1\) to \(T\) do
7:Obtain the global state \(s_{t}\) and the joint observation \(\mathbf{o}_{t}=\{o_{t}^{(1)},\cdots,o_{t}^{(N)}\}\) from the environment;
8: Each agent generates the mean \(\mu_{i}(o_{t}^{(i)};\theta^{(i)})\) and the standard deviation \(\sigma_{i}(o_{t}^{(i)};\theta^{(i)})\) of the normal distribution \(\mathcal{N}(\mu_{i}(o_{t}^{(i)};\theta^{(i)}),\sigma_{i}^{2}(o_{t}^{(i)}; \theta^{(i)}))\);
9: Each agent chooses the sampled action \(a_{t}^{(i)}\sim\mathcal{N}(\mu_{i}(o_{t}^{(i)};\theta^{(i)}),\sigma_{i}^{2}(o_{ t}^{(i)};\theta^{(i)}))\) ;
10: Obtain the reward \(r_{t}\), \(s_{t+1}\) and \(\mathbf{o}_{t+1}\) at the end of \(t\);
11: Store the tuple \(\langle s_{t},\mathbf{o}_{t},\mathbf{a}_{t},r_{t},s_{t+1},\mathbf{o}_{t+1}\rangle\) in \(\mathcal{B}\) in order of time and set counter\(\leftarrow\)counter\(+1\);
12:if \(t=T\) then
13:done\(\gets 1\);
14:endif
15:if counter \(=N_{\rm batch}\) or done \(=1\) then
16: Clone \(\theta_{\rm old}\leftarrow\theta\);
17:for \(n_{\rm epoch}\leftarrow 1\) to \(N_{\rm epoch}\) do
18:for samples in \(\mathcal{B}\) do
19: Obtain \(\mathbf{\pi}(\mathbf{a}_{j}|\mathbf{o}_{j};\theta)\) and \(\mathbf{\pi}(\mathbf{a}_{j}|\mathbf{o}_{j};\theta_{\rm old})\) corresponding to the distributions from the actors and calculate \(\rho_{j;\theta}\) by (18);
20: Obtain the approximated total state value \(V_{\rm tot}(\mathbf{o}_{j};\phi,\eta,\omega)\) through the critic networks, SGNN and VMIX network sequentially in (8)-(15);
21: Calculate \(A_{j;\phi,\eta,\omega}\leftarrow\sum_{k=j}^{|\mathcal{B}|}\gamma^{k-j}r_{k}+(1- \text{done})*\gamma^{|\mathcal{B}|-j}V_{\rm tot}(\mathbf{o}_{|\mathcal{B}|+1}; \phi,\eta,\omega)-V_{\rm tot}(\mathbf{o}_{j};\phi,\eta,\omega)\) ;
22:endfor
23: Update \(\theta\) and \(\phi,\eta,\omega\) according to (16) and (17) separately via batch gradient descent;
24:endfor
25: Initialize \(\mathcal{B}\leftarrow\varnothing\) and counter\(\gets 0\);
26:endif
27:endfor
28:endfor
```
**Algorithm 1** The Training of SVMIX Algorithm
Substantially, SVMIX decomposes \(V^{(i)}(o_{t}^{(i)})\) from \(V_{\rm tot}(\mathbf{o}_{t})\) through gradient descent on the minimization of (17) via VMIX. In particular, with the aid of SGNN, a feasible mapping from \(\mathbf{V}(\mathbf{o}_{t})\) to \(V_{\rm agg}^{(i)}(\mathbf{o}_{t})\) which takes the dynamics of topology and the influence from other agents into account is learned first. Afterwards, a satisfied joint policy will be learned according to (16). Additionally, the RES model in SGNN enhances the anti-disturbance ability of SVMIX and the exploration ability by capturing the dynamic topological feature of \(\mathcal{G}_{t}\) through \(\mathcal{H}\) and \(p\). In other words, for all possible inputs \(\mathbf{V}(\mathbf{o}_{t})\) related to a specific topological connection, SGNN helps to finally get the
corresponding state value \(V^{(i)}(o_{t}^{(i)})\) for each agent and handles topological dynamics based on RES. To sum up, we describe the training procedure of the SVMIX algorithm in Algorithm 1.
## V Experimental Settings and Numerical Results
In this section, we evaluate the performance of SVMIX in autonomous vehicle control and explain the advantage of our proposed algorithm over other MARL methods. We implement two simulation scenarios on Flow [44, 45], a traffic control benchmarking framework for mixed-autonomy traffic. As illustrated in Fig. 4, the "Figure Eight" scenario and the "Merge" scenario are chosen as two typical cases to evaluate the performance of our method.
### _The Settings for "Figure Eight" and Simulation Results_
The "Figure Eight" scenario in Fig. 4 (a) consists of a fixed number of \(M=14\) (i.e., \(M_{t}=14\) for any \(t\)) vehicles running circularly along a one-way lane with an intersection. Thus each vehicle must strive for a velocity close to its desired velocity, and timely adjust its velocity when passing through the intersection so as to avoid collisions. Moreover, we deploy \(7\) manned vehicles and \(N=7\) DRL-driven vehicles alternately, where the former are controlled by the Intelligent Driver Model (IDM) defined in [8] while the latter are controlled by MARL methods. Notably, all vehicles perform emergency braking if they are about to crash, and once a collision occurs the episode is terminated immediately. The corresponding MDP for the DRL agents in "Figure Eight" is defined as below.
State and ObservationHere the state contains the information of all the vehicles, while each DRL-driven vehicle can only observe the information of the vehicles ahead of and behind it. Therefore we have \(s_{t}=\{v_{t}^{(1)},z_{t}^{(1)},v_{t}^{(2)},z_{t}^{(2)},\cdots,v_{t}^{(M)},z_{t}^{(M)}\}\) and \(o_{t}^{(i)}=\{v_{t}^{(i_{\rm ahead,t})},z_{t}^{(i_{\rm ahead,t})},v_{t}^{(i)},z_{t}^{(i)},v_{t}^{(i_{\rm behind,t})},z_{t}^{(i_{\rm behind,t})}\}\) as the observation of DRL-driven vehicle \(i\), where \(i_{\rm ahead,t}\) and \(i_{\rm behind,t}\) are the preceding and following vehicles of \(i\) at time-step \(t\), respectively.
ActionAs each DRL-driven vehicle only needs to consider the acceleration (i.e. \(a^{(i)}=u^{(i)}\)), \(\mathbf{a}_{t}=\{u^{(1)},u^{(2)},\cdots,u^{(N)}\}\).
RewardThe reward function is the same as (3) where \(\alpha=0\) so the penalty term is ignored.
In our setting, the number of episodes \(N_{\rm episode}\) is \(300\), while each episode has a maximum of \(L=1500\) iterations in case of no collision. Also, we design the system utility \(U=\frac{1}{L}\sum_{t=1}^{T}r_{t}\), a concrete form of (4), as the evaluation metric.
For SVMIX, we use the same architecture as depicted in Fig. 2 with \(F=32\), \(K=3\) and \(p=0.7\). Besides, we compare the performance of SVMIX with other MARL methods, including Federated Independent Reinforcement Learning (FIRL) [27], QMIX [17] and the Multi-Graph Attention Network (MGAN) [20], where FIRL combines federated learning with a consensus algorithm based on multiple PPO agents, and MGAN incorporates the Graph Attention Network (GAT) into a VD-based MARL architecture. Both MGAN and SVMIX rely on a complete graph for inter-agent communication, while FIRL relies on a specified graph for consensus. In particular, FIRL uploads the gradient of each agent to a centralized virtual agent every \(\tau\) updates. Besides, an optimal baseline is rendered by Flow in which all \(14\) vehicles are controlled by IDM, a typical car-following model that encodes substantial prior knowledge. It is worth noting that all the methods deal with a global reward, while each agent in FIRL uses a local reward identical to the global reward. Typical parameter settings are summarized in Table I.
Fig. 5 compares the average system utility curves of different MARL methods and the optimal baseline in \(10\) independent simulations respectively, while Table II provides the corresponding means and standard deviations of each MARL method. In particular, the utility is computed by averaging the results of another five testing episodes after every 10
Fig. 4: The two scenarios for simulations. Here the red vehicles are the DRL-driven vehicles, the blue vehicles are the manned vehicles observed by the DRL-driven vehicles while the white vehicles are the manned vehicles which are not observed in the state space.
training episodes in a simulation, when each agent directly takes the mean \(\mu_{i}(o_{t}^{(i)})\) as a deterministic action. It can be observed from Fig. 5 that FIRL yields less utility with relatively stable performance, while QMIX achieves a higher utility than FIRL, since VD learns the different contributions of agents with a precise total value function, but it fluctuates drastically due to the occurrence of collisions. MGAN further improves the convergence speed and somewhat reduces the fluctuations thanks to the topological features captured by GAT. Comparatively, benefiting from the stochastic graph filters with an enhanced capability to capture dynamic topological features, SVMIX outperforms the other algorithms: it converges faster with a smaller training variance, learns a safer joint policy with fewer fluctuations, and maintains a high utility.
In terms of communication overhead, Table III compares SVMIX and FIRL, where both methods are updated once with a batch containing \(N_{\mathrm{batch}}\) samples. Notably, the VD-based methods have almost the same communication overheads. SVMIX reduces the communication overhead by \(12.13\%\) compared to FIRL, as it only needs to transmit a batch of values rather than the gradients of all the parameters: the communication overhead of FIRL is proportional to the number of parameters (i.e., \(N_{\mathrm{para}}\)), while that of SVMIX is proportional to the batch size (i.e., \(N_{\mathrm{batch}}\)) and the number of epochs (i.e., \(N_{\mathrm{epoch}}\)).
### _The Settings for "Merge" and Simulation Results_
As depicted in Fig. 4 (b), the "Merge" scenario simulates on-ramp merging on a highway. Similar to "Figure Eight", it is essential for vehicles to avoid collisions and congestion at the merging point. As "Merge" is an unclosed network, we assume a traffic flow in which vehicles frequently flow in and out, leading to variations in the number of vehicles. Following the definition of the model in Section III, the scenario allows at most \(2,100\) vehicles to flow in per hour, including at most \(2,000\) vehicles on the trunk road and at most \(100\) vehicles on the ramp. In particular, among the \(2,000\) trunk-road vehicles, \(25\%\) will be assigned as DRL-driven vehicles. Consistent with the MDP defined in Section III, we consider an MDP as follows.
State and ObservationHere the state contains the information of the observable vehicles (the red and blue vehicles in Fig. 4 (b)) instead of all the vehicles. Also, each DRL-driven vehicle can only observe the information of the preceding and following vehicles, so \(s_{t}\) only consists of the positions and velocities of the DRL-driven vehicles as well as the vehicles ahead of and behind them. Furthermore, we fix the number of algorithmically involved DRL-driven vehicles as \(N=13\); if the practical number of DRL-driven vehicles \(N_{t}>N\), the other \(N_{t}-N\) vehicles will be treated as manned vehicles; otherwise, the state will be padded with
Fig. 5: The average system utility of different methods under the Scenario “Figure Eight”. Obviously all of the methods converge within about 150 episodes, where FIRL learns a stable policy at the cost of the convergence speed and utility. QMIX achieves a higher utility thanks to VD and MGAN further improves the convergence speed and reduces the fluctuation owing to the use of GAT. SVMIX outperforms the other methods in terms of the convergence speed and stability with a high utility.
zeros, as if there were \(N\) DRL-driven vehicles. The dimension of the state space consequently remains unchanged. Therefore the state is represented by \(s_{t}=\{o_{t}^{(1)},o_{t}^{(2)},\cdots,o_{t}^{(N)}\}\), where \(o_{t}^{(i)}=\{v_{t}^{(i_{\text{ahead},t})},z_{t}^{(i_{\text{ahead},t})},v_{t}^{(i)},z_{t}^{(i)},v_{t}^{(i_{\text{behind},t})},z_{t}^{(i_{\text{behind},t})}\}\) is the observation of DRL-driven vehicle \(i\), \(i=1,2,\cdots,N\).
ActionAs above, the \(N=13\) DRL-driven vehicles choose their actions according to their policies, so the dimension of the joint action space is likewise fixed, with \(\mathbf{a}_{t}=\{u^{(1)},u^{(2)},\cdots,u^{(N)}\}\). When \(N_{t}>N\), the first \(N\) vehicles are treated as DRL-driven vehicles and the others are regarded as manned vehicles; in case \(N_{t}<N\), the extra actions are ignored during the interaction with the environment (see the sketch below). It is worth noting that the underlying graph \(\mathcal{G}^{+}\) is still a complete graph with fixed \(N\) vertices.
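A minimal sketch of this padding and truncation rule follows; the per-vehicle observation dimension of \(6\) and all names are assumptions for illustration.

```python
import numpy as np

def fix_joint_dimensions(observations, actions, N=13, obs_dim=6):
    # observations: list of per-vehicle observation vectors at time t (length N_t);
    # actions: the N actions produced by the agents at every time-step.
    N_t = len(observations)
    if N_t >= N:
        obs = np.asarray(observations[:N])   # first N vehicles are DRL-driven
        acts = list(actions)                 # all N actions are applied
    else:
        obs = np.zeros((N, obs_dim))         # pad the state with zeros
        obs[:N_t] = np.asarray(observations)
        acts = list(actions[:N_t])           # extra actions are ignored
    return obs, acts
```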
RewardThe reward function is the same as (3) where \(\alpha=0.1\) and the penalty term is defined as
\[D(z^{(1)},\cdots,z^{(M_{t})})=\sum_{i=1}^{M_{t}}\max[C_{2}-\|z^{(i_{\text{ahead},t})}-z^{(i)}\|_{2},0], \tag{21}\]
where \(C_{2}\) is a constant which represents the desired following distance of each vehicle.
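A direct transcription of (21) could look as follows, where `ahead_idx[i]` is an assumed bookkeeping array holding the index of the vehicle preceding vehicle \(i\) at the current time-step.

```python
import numpy as np

def merge_penalty(z, ahead_idx, C2):
    # z[i]: position of vehicle i; the penalty grows whenever the gap to the
    # preceding vehicle falls below the desired following distance C2.
    D = 0.0
    for i in range(len(z)):
        gap = np.linalg.norm(np.atleast_1d(z[ahead_idx[i]] - z[i]))
        D += max(C2 - gap, 0.0)
    return D
```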
In our setting, the number of episodes \(N_{\text{episode}}\) is 300, while each episode has a maximum of \(L=750\) iterations in case of no collision. For SVMIX, we use the same architecture as in Fig. 2 with \(F=32\), \(K=3\) and \(p=0.7\). Besides, we compare the performance of SVMIX with FIRL, QMIX and MGAN with the same detailed setting described earlier in Section V-A. Typical parameter settings are summarized in Table IV.
Fig. 6 compares the average system utility curves of different MARL methods and the optimal baseline over \(5\) independent simulations, while Table V provides the corresponding means and standard deviations of each MARL method. In particular, the utility is computed by averaging the results of another five testing episodes after every \(10\) training episodes in a simulation, when each agent directly takes the mean \(\mu_{i}(o_{t}^{(i)})\) as a deterministic action. Notably, MGAN achieves the worst and most unstable performance, as shown in Fig. 6, even worse than QMIX and FIRL. Especially over the last \(150\) episodes, the curve of MGAN fluctuates violently. This is possibly because the DRL-driven vehicles are irregularly allocated due to the traffic flow, which becomes degenerative feedback to the learning of the attention coefficients. Both QMIX and SVMIX generate relatively stable curves with higher average utilities than FIRL and MGAN. Moreover, benefiting from the stochastic graph filters, SVMIX outperforms QMIX with higher utility and lower variance according to Table V. It is noteworthy that SVMIX clearly outperforms MGAN in "Merge", which verifies the aforementioned idea that handling the dynamic topology is necessary for capturing topological features. Compared with FIRL, SVMIX has similar means of average utilities and seems more unstable, as its variances are larger in this scenario; more meaningfully, SVMIX is still superior in terms of communication overhead.
### _Hyperparameter Adjustment for SGNN in SVMIX_
To clarify the influence of the hyperparameters in SGNN, we further carry out supplementary experiments in "Figure Eight" by changing one hyperparameter and keeping the others identical. Fig. 7 provides the corresponding results, where we obtain the results from the testing episodes after
Fig. 6: The average system utility of different methods under the Scenario “Merge”. All the methods converge within about \(100\) episodes. FIRL still learns a stable policy and reaches a decent average utility. MGAN converges quickly but seems unstable, in terms of its undulating and low average utility. QMIX and SVMIX both hold a more stable performance, where the average utility of SVMIX is slightly better than that of QMIX.
\(100,110,\cdots,200\) training episodes (i.e., when SVMIX converges slowly) respectively through \(5\) independent simulations.
Probability \(p\)As shown in Fig. 7(a), keeping \(K=3\) and \(F=32\), we evaluate the performance of SVMIX for \(p\in\{0.1,0.3,0.5,0.7,0.9\}\). For a Bernoulli-sampled RES, the variance of the outputs from SGNN is proportional to \(p(1-p)\) and is maximized when \(p=0.5\). Obviously, the result for \(p=0.5\) is the most unstable, as there are many outliers. The settings of \(p=0.3\) and \(p=0.7\) yield similarly stable performance, while the performance for \(p=0.1\) and \(p=0.9\) seems more unstable. This is because when \(p=0.1\) or \(p=0.9\), SGNN is unable to make sufficient exploration due to the too small or too large sampling probability in RES. Thus a probability like \(p=0.7\) is more appropriate for SGNN.
Order of the filter \(K\)Fig. 7(b) demonstrates the influence of the filter order \(K\) by retaining \(p=0.7\) and \(F=32\) with \(K\) spanning from \(2\) to \(6\). Consistent with the analysis in [25], which states that the variance of the outputs from SGNN is proportional to \(K\), the variance of the system utility also becomes greater as \(K\) grows. Thus an order like \(K=2\) or \(K=3\) will be suitable for "Figure Eight".
Number of filters \(F\)Fig. 7(c) demonstrates the influence of the number of filters \(F\) by retaining \(p=0.7\) and \(K=3\) with \(F\in\{16,32,48,64,80\}\). For \(F=16\), the capability of SGNN for feature capturing decreases, leading to unstable performance. As \(F\) increases, SVMIX generally gives better performance, though it slightly suffers from a deficiency of training samples for training a larger neural network.
## VI Conclusion and Discussion
This paper aims to address the reward assignment problem in a dynamic environment via value decomposition, and puts forward an SGNN-based multi-agent actor-critic architecture called SVMIX with PPO as the individual agent. In particular, SGNN, which consists of parallel stochastic graph filters, is leveraged to enhance the resilience to environment volatility and thus capture dynamic underlying features more effectively. To demonstrate the feasibility of SVMIX, we further explain why SGNN works by theoretically clarifying its influence on exploring the optimal mapping from individual state values to the total state value. Moreover, through extensive simulations in two different scenarios, SVMIX manifests a superior capability in terms of convergence rate, the mean and variance of the average system utility, and communication overhead.
However, the assumption that each agent can only make decisions based on its own observation might be overly restrictive, as communication between nearby agents is allowed in practical scenarios during decentralized execution. Moreover, a reward-decomposition method like [46] is able to fully utilize the result of centralized training during decentralized execution and even conduct decentralized training for fine-tuning. In the future, it is meaningful to apply VD to the global reward in order to learn individual reward functions and to incorporate inter-agent communication more comprehensively.
Fig. 7: The average utility of SVMIX with different structure of SGNN under the corresponding hyperparameters by changing (a) the sampling probability \(p\), (b) the order of each filter \(K\) and (c) the number of filters \(F\) from the testing episodes after \(100,110,\cdots,200\) training episodes through \(5\) independent simulations. |
2303.12643 | Traffic Volume Prediction using Memory-Based Recurrent Neural Networks:
A comparative analysis of LSTM and GRU | Predicting traffic volume in real-time can improve both traffic flow and road
safety. A precise traffic volume forecast helps alert drivers to the flow of
traffic along their preferred routes, preventing potential deadlock situations.
Existing parametric models cannot reliably forecast traffic volume in dynamic
and complex traffic conditions. Therefore, in order to evaluate and forecast
the traffic volume for every given time step in a real-time manner, we develop
non-linear memory-based deep neural network models. Our extensive experiments
run on the Metro Interstate Traffic Volume dataset demonstrate the
effectiveness of the proposed models in predicting traffic volume in highly
dynamic and heterogeneous traffic environments. | Lokesh Chandra Das | 2023-03-22T15:25:07Z | http://arxiv.org/abs/2303.12643v1 | Traffic Volume Prediction using Memory-Based Recurrent Neural Networks: A comparative analysis of LSTM and GRU
###### Abstract
Predicting traffic volume in real-time can improve both traffic flow and road safety. A precise traffic volume forecast helps alert drivers to the flow of traffic along their preferred routes, preventing potential deadlock situations. Existing parametric models cannot reliably forecast traffic volume in dynamic and complex traffic conditions. Therefore, in order to evaluate and forecast the traffic volume for every given time step in a real-time manner, we develop non-linear memory-based deep neural network models. Our extensive experiments run on the Metro Interstate Traffic Volume dataset demonstrate the effectiveness of the proposed models in predicting traffic volume in highly dynamic and heterogeneous traffic environments.
## I Introduction
Rapid socioeconomic development aids the growth of large-scale, expanding smart cities with easy access to communication technologies. Modern mobile and vehicular communication technology promotes Intelligent Transportation Systems (ITS), resulting in an exponential increase in the number of vehicles on the road each year. Traffic congestion has become one of the most pressing issues to address [1], and it creates deadlock situations for large cities as well as for medium and small cities [2]. Traffic congestion can be mitigated using traditional methods by changing road and urban infrastructures. However, redesigning the city structure to improve traffic patterns and reduce congestion is very expensive, not to mention time-consuming. Therefore, dynamic route planning, optimizing road allocations, managing traffic on urban roads, and using modern technologies to better understand traffic patterns are crucial for successfully reducing traffic congestion. Forecasting the future traffic status based on past historical data is one way of reducing traffic congestion [3]. An accurate traffic volume prediction can alleviate traffic congestion and optimize traffic distributions.
Different techniques for traffic flow prediction have been studied lately. These methods can be generally classified as naive, parametric, and non-parametric. The naive model does not make any assumptions and is computationally fast; however, it has low accuracy. The parametric category includes different time-series methods. One of the most widely used parametric methods is the ARIMA (autoregressive integrated moving average) model [4]. The ARIMA model is practically effective in predicting traffic flow and has been a benchmark. However, parametric approaches achieve good performance only if the time-series data show a regular pattern; in particular, they fail to perform well when the traffic pattern varies in nature. Non-parametric methods address this issue, and various non-parametric methods such as non-parametric regression, support vector machines, Kalman filtering, and neural network predictors are being used in traffic volume prediction. Deep neural networks have been shown to be superior at traffic forecasting. Recurrent neural networks (RNNs), especially long short-term memory (LSTM) networks, have demonstrated their advantages in modeling and predicting traffic flow.
In this project, we develop traffic volume prediction using long-short term memory (LSTM) and gated recurrent units (GRU) and evaluate our model on Metro inter-state traffic dataset[5].
## II Related Work
Traffic forecasting is critically important, and it has been extensively studied lately. Traffic prediction methods can be divided into parametric and non-parametric methods. Previously, researchers have applied various parametric methods to predict traffic flow. The autoregressive integrated moving average is one of the most widely used parametric models in time-series prediction datasets. Chen et al.[6] predicted traffic flow using the autoregressive integrated moving average (ARIMA) model. However, other parametric models have also been used by researchers. Kumar et al.[7] developed Kalman filtering techniques to forecast the traffic flow. Dong et al.[8] applied the gradient-boosting decision tree algorithm to perform short-term traffic flow prediction.
Deep learning-based approaches have recently received a lot of attention from researchers, and they provide a more accurate estimation of traffic volume than traditional parametric methods. Long short-term memory (LSTM) and gated recurrent unit (GRU) networks are capable of holding long sequences of past observations and capturing correlations in sequence prediction, which makes them well suited for time-series datasets. Many researchers use LSTM and GRU for predicting traffic conditions. Zhao et al. [9] used LSTM to predict short-term traffic. Fu et al. [10] used LSTM and GRU for traffic flow prediction. Some researchers use a graph attention network to predict the traffic pattern [11]. In this project, we use LSTM and GRU to predict future traffic flow and evaluate the models using the Metro Interstate Traffic Volume dataset.
### _Problem Formulation_
We consider real-time traffic volume prediction as a multivariate time-series problem, where our model approximately estimates future traffic flow based on the current observation and several hours of historical observations. Specifically, our objective is to forecast the future traffic volume at time steps \(T_{t+1},T_{t+2},\ldots,T_{t+f}\), where \(f\) is the future prediction horizon, using past observations from time steps \(T_{t-l},T_{t-l+1},\ldots,T_{t-1},T_{t}\), where \(l\) is the length of the past observations used to predict the future traffic flow.
## III Method
We use memory-based recurrent neural network models, namely long short-term memory (LSTM) and gated recurrent units (GRU), to predict future traffic flow.
### _Long-Short Term Memory (LSTM)_
Long-Short-term Memory networks (LSTMs)[12] are designed to learn long-term dependencies and effectively deal with the vanishing gradient problem of recurrent neural networks(RNNs)[13, 14]. Because it can hold long sequences while predicting the current output, LSTM is well-suited for applications that make predictions based on time-series data. The memory block of the LSTM cell makes it easy to hold the sequence information. The memory block has memory cells and three gates: the forget gate, the input gate, and the output gate. The basic structure of an LSTM cell is depicted in Fig.1.
The forget gate determines which information should propagate for the next time sequence through the sigmoid activation function. The input gate decides which information is necessary for the current state, and the output gate regulates what to output to the next state, e.g., the current output and the value of the next hidden state \(h_{t}\). The equations to update the gates are as follows:
\[f_{t} =\sigma(W_{f}.[h_{t-1},x_{t}]+b_{f})\] \[i_{t} =\sigma(W_{i}.[h_{t-1},x_{t}]+b_{i})\] \[\tilde{C}_{t} =\tanh(W_{C}.[h_{t-1},x_{t}]+b_{C})\] \[C_{t} =f_{t}*C_{t-1}+i_{t}*\tilde{C}_{t}\] \[o_{t} =\sigma(W_{o}.[h_{t-1},x_{t}]+b_{o})\] \[h_{t} =o_{t}*\tanh(C_{t})\]
where \(f_{t}\) is the forget gate, \(i_{t}\) is the input gate, and \(o_{t}\) is the output gate, respectively; \(C_{t}\) is a memory cell to hold the sequences from the previous states, and \(h_{t}\) is the output for the next state; \(W^{*}\) is weight, and \(b^{*}\) is bias.
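For illustration, a single LSTM step implementing the equations above can be sketched in NumPy as follows; the dictionary-based parameter layout is an assumption made for readability.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x_t, h_prev, C_prev, W, b):
    # W['f'], W['i'], W['C'], W['o'] act on the concatenated [h_prev, x_t],
    # mirroring the gate equations above; b holds the corresponding biases.
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W['f'] @ z + b['f'])       # forget gate
    i_t = sigmoid(W['i'] @ z + b['i'])       # input gate
    C_tilde = np.tanh(W['C'] @ z + b['C'])   # candidate memory content
    C_t = f_t * C_prev + i_t * C_tilde       # new cell state
    o_t = sigmoid(W['o'] @ z + b['o'])       # output gate
    h_t = o_t * np.tanh(C_t)                 # new hidden state
    return h_t, C_t
```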
### _Gated Recurrent Unit (GRU)_
Gated Recurrent Unit (GRU) replaces the three gates of LSTM with two gates: the reset gate and the update gate. These gates use sigmoid activation functions, as in LSTMs, constraining their values between \(0\) and \(1\). Intuitively, the reset gate controls how much of the previous state we might still want to remember, while the update gate controls how much of the new state is just a copy of the old state. Fig. 2 illustrates the inputs for both the reset and update gates in a GRU, given the input of the current time step and the hidden state of the previous time step. The mathematical mechanism of the GRU is as follows:
\[r_{t} =\sigma(W_{r}.[h_{t-1},x_{t}]+b_{r})\] \[z_{t} =\sigma(W_{z}.[h_{t-1},x_{t}]+b_{z})\] \[\tilde{H}_{t} =\tanh(W_{h}.[(h_{t-1}\odot r_{t}),x_{t}]+b_{h})\] \[h_{t} =z_{t}\odot h_{t-1}+(1-z_{t})\odot\tilde{H}_{t}\]
where \(r_{t}\) is the reset gate, \(z_{t}\) is the update gate, and \(h_{t}\) is the current time step's output. The rest of the symbols contain the same meaning as described in the LSTM section.
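Analogously, one GRU step can be sketched as below; note how \(r_{t}\) only gates \(h_{t-1}\) inside the candidate state, while \(z_{t}\) interpolates between the old and candidate states.

```python
import numpy as np

def gru_step(x_t, h_prev, W, b):
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    z_in = np.concatenate([h_prev, x_t])
    r_t = sigmoid(W['r'] @ z_in + b['r'])    # reset gate
    z_t = sigmoid(W['z'] @ z_in + b['z'])    # update gate
    # the reset gate only modulates h_prev inside the candidate state
    h_cand = np.tanh(W['h'] @ np.concatenate([h_prev * r_t, x_t]) + b['h'])
    return z_t * h_prev + (1.0 - z_t) * h_cand   # interpolate old and candidate states
```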
### _Dataset_
We use the Metro Interstate Traffic Volume dataset [5]. This is a multivariate, sequential, time-series dataset collected from the westbound I-94 at Minneapolis-St. Paul, Minnesota. The dataset is relatively large, with 48,204 instances and nine features. It contains hourly traffic volume data from 2012 to 2018, together with weather features and holidays that impact the traffic volume.
### _Data Preprocessing_
Unlike other deep learning models such as CNNs, we cannot directly feed raw data into the LSTM or GRU models; we have to convert it into a specific format. The input format should have at least the time steps and the number
Fig. 1: A structure of an LSTM cell
Fig. 2: A structure of a GRU cell
of features. Generally, in time series prediction, we use \(t\) hours of observations as input to the network, and the model produces the output at hour \(t+1\). Here, we use the past \(t\) hours of data to predict the next \(n\) hours' traffic volume. The dataset also contains some categorical values that need to be converted into numerical values. Moreover, the attributes are on different scales; the statistics of the dataset are shown in Fig. 3. It is clear that the traffic_volume values are very large compared with the rain_1h values, so we need to scale them to reduce bias. We use the MinMaxScaler technique to normalize the feature values between 0 and 1. We split the dataset into training and testing sets: the data from the years 2012-2017 is used for training, among which 20% is used for validation purposes, and the last year's data is used for testing. From extensive data analysis, we also found that several features contain outliers (Fig. 4), which we removed before training (Fig. 5).
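A minimal sketch of this preprocessing is given below; the variable names, the feature count, and the position of traffic_volume as the last column are assumptions for illustration.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

def make_windows(values, l=24, f=1):
    # values: (num_hours, num_features) array; traffic_volume is assumed
    # to be the last column.
    X, y = [], []
    for t in range(l, len(values) - f + 1):
        X.append(values[t - l:t])        # past l hours, all features
        y.append(values[t:t + f, -1])    # traffic volume of the next f hours
    return np.array(X), np.array(y)

# stand-ins for the 2012-2017 rows and the held-out last year of data
train_rows = np.random.rand(1000, 9)
test_rows = np.random.rand(200, 9)

scaler = MinMaxScaler()                  # scale every feature to [0, 1]
X_train, y_train = make_windows(scaler.fit_transform(train_rows), l=24)
X_test, y_test = make_windows(scaler.transform(test_rows), l=24)
```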
## IV Experimental Setup
We run a total of 4 experiments with different settings, taking 6 to 24 hours of past observations to predict the next hour's traffic volume. We also run an experiment to see how the features impact traffic volume prediction: in one set of experiments, we select all features from the dataset, and in another set, we only consider four features, namely _temperature_, _rain_1h_, _clouds_all_, and _traffic volume_. We run experiments on two sets of neural networks. Table I shows the hyperparameters for our deep learning model. For experiment 2, we only change the numbers of hidden units to \([256,128,64,32]\) and the number of epochs to \(500\); the rest of the hyperparameters are the same as those shown in the table. The training is stopped if the validation error does not improve for at least five consecutive runs. The main evaluation metrics are the mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), as shown in equations 1, 2, and 3.
\[MSE(y,\hat{y})=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2} \tag{1}\]
\[MAE(y,\hat{y})=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y}_{i}| \tag{2}\]
\[MAPE(y,\hat{y})=\frac{1}{n}\sum_{i=1}^{n}\frac{|y_{i}-\hat{y}_{i}|}{\text{max }(\epsilon,|y_{i}|)} \tag{3}\]
where \(y_{i}\) is the actual traffic volume, \(\hat{y}_{i}\) is the predicted traffic volume, and \(\epsilon\ll 1\) is used to avoid undefined results when \(|y_{i}|\) is zero.
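For reference, these three metrics can be computed directly in NumPy, with `eps` playing the role of \(\epsilon\):

```python
import numpy as np

def evaluate(y_true, y_pred, eps=1e-8):
    # MSE, MAE and MAPE as defined in equations (1)-(3);
    # eps guards the MAPE denominator against |y_i| = 0.
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    mae = float(np.mean(np.abs(err)))
    mape = float(np.mean(np.abs(err) / np.maximum(eps, np.abs(y_true))))
    return mse, mae, mape
```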
## V Results
We implemented the traffic volume prediction problem in Python based on Keras and TensorFlow. To train and test the proposed models, a workstation equipped with an Intel Xeon Gold 5222 processor, an NVIDIA RTX A4000 graphics card, and 48GB of RAM running Windows 11 is used. Empirically, the autoregressive integrated moving average (ARIMA) model is not suitable when the dataset is complex and has long sequences, so we directly choose LSTM and GRU; ARIMA is out of the scope of this project. In our
Fig. 4: Features that contain outliers
Fig. 5: Features after removing outliers
Fig. 3: Statistics of the dataset
experiment, we compare the LSTM and GRU performances by varying the neural network settings and also by varying the length of historical observations. We compare the mean squared error (MSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) for both GRU and LSTM with different neural network settings, taking into account all features vs. a reduced number of features.
Fig. 6 shows the comparative results of the two models (i.e., LSTM and GRU) for different feature sets and neural network settings when the length of past observations is set to 24 hours. Although there is no significant pattern in the results, it can be seen that both the GRU and LSTM models produce good results in terms of MSE and MAE. On the other hand, when training does not require preserving a very long history, GRU performs better than LSTM. Fig. 7 shows the results when the length of past observations is set to 6 hours. It can be observed that when the past observation history is very limited, GRU performs better than LSTM in all three evaluation metrics for both neural network settings. Fig. 8 and Fig. 9 depict the predicted traffic volume at various times; the blue line indicates the actual traffic volume at a particular hour, and the orange line represents the traffic volume predicted for that hour by the proposed models. The results indicate that LSTM and GRU can provide traffic volume predictions close to the actual values. However, GRU outperforms LSTM when the past observation sequences are short, and LSTM performs better when the dataset is complex and requires very long sequences to predict future traffic volume. Fig. 10 shows the convergence of the LSTM and GRU models. Both models take more training time to converge when fewer features are used; moreover, GRU takes more time than LSTM, possibly because GRU does not use a memory unit to control the flow of information.
## VI Discussion
In this section, we discuss the challenges, limitations, and computational efficiency of the proposed models, as well as other potential methods for predicting traffic volume. The Traffic Volume Prediction repository contains the implementation, dataset, and trained models; instructions on how to run them can be found in the README.md file.
### _Challenges and Limitations_
Preparing the dataset to feed into the LSTM and GRU networks was one of the more challenging tasks, as it added another dimension (i.e., time) to the input shape. Moreover, finding a proper set of hyperparameters required a significant amount of time, which was challenging given the limited computational resources. Finally, we used MinMaxScaler to normalize the dataset, so whenever we compared the results with the ground truth, we had to convert them back to their original scale, which also required additional work.
### _Computational Time and Memory_
LSTM and GRU are computationally expensive and require a lot of memory, as they internally store historical observations. Given the above network structures and hyperparameter settings, a single training epoch took the LSTM and GRU approximately \(25\sim 28s\).
## VII Conclusion
In this project, we develop memory-based deep recurrent neural network models, namely LSTM and GRU, to predict traffic volume. The models are evaluated on the widely used
Fig. 6: MSE, MAE, and MAPE errors of LSTM and GRU by varying neural network settings. The network uses 24 hours of observations to predict the next hour’s traffic volume.
Fig. 7: MSE, MAE, and MAPE errors of LSTM and GRU by varying neural network settings. The network uses 6 hours of observations to predict the next hour’s traffic volume.
Metro Interstate Traffic Volume dataset. The experimental results demonstrate the effectiveness of the proposed LSTM and GRU models. Recently, many researchers have used graph-based attention modules to predict time-series data, and graph-based multi-attention networks could be applied to predict traffic volume as well. To obtain better predictions, however, complete hyperparameter tuning and extensive experiments are required, which we leave as future work.
|
2308.05059 | A Novel Method for improving accuracy in neural network by reinstating
traditional back propagation technique | Deep learning has revolutionized industries like computer vision, natural
language processing, and speech recognition. However, back propagation, the
main method for training deep neural networks, faces challenges like
computational overhead and vanishing gradients. In this paper, we propose a
novel instant parameter update methodology that eliminates the need for
computing gradients at each layer. Our approach accelerates learning, avoids
the vanishing gradient problem, and outperforms state-of-the-art methods on
benchmark data sets. This research presents a promising direction for efficient
and effective deep neural network training. | Gokulprasath R | 2023-08-09T16:41:00Z | http://arxiv.org/abs/2308.05059v1 | A Novel Method for Improving Accuracy in Neural Network by Reinstating Traditional Back Propagation Technique
###### Abstract
Deep learning has revolutionized industries like computer vision, natural language processing, and speech recognition. However, back propagation, the main method for training deep neural networks, faces challenges like computational overhead and vanishing gradients. In this paper, we propose a novel instant parameter update methodology that eliminates the need for computing gradients at each layer. Our approach accelerates learning, avoids the vanishing gradient problem, and outperforms state-of-the-art methods on benchmark data sets. This research presents a promising direction for efficient and effective deep neural network training.
## 1 Introduction
Deep learning has revolutionized the field of artificial intelligence by enabling machines to learn complex patterns and perform tasks that were previously deemed impossible. However, training deep neural networks is a challenging and computationally expensive task that requires optimizing millions or even billions of parameters. The backpropagation algorithm has been the go-to method for training deep neural networks for decades [5], but it suffers from some limitations, such as slow convergence and the vanishing gradient problem. To overcome these limitations, several alternative training methods have been proposed, such as Direct Feedback Alignment [2]. The core idea of our approach is to update the weights and biases in each layer of a neural network using the local error at that layer, rather than backpropagating the error from the output layer to the input layer. By doing so, the training process can be accelerated and the model's accuracy can be improved.
In this paper, we propose a novel approach for layer-wise error calculation and parameter update in neural networks and evaluate its performance on a benchmark data set. We compare our approach to other existing methods, such as back propagation, and demonstrate its effectiveness in terms of convergence speed and accuracy. The rest of the paper is organized as follows.[4] In Section 2, we provide a brief review of the related work on layer-wise error calculation and parameter update in neural networks. In Section 3, we present our proposed approach and its mathematical formulation. In Section 4, we describe the experimental setup and present the results of our evaluation. Finally, in Section 5, we conclude the paper and discuss potential avenues for future research.
## 2 Literature Review
Back propagation and Direct Feedback Alignment (DFA) are two widely used methods for training artificial neural networks. Both methods aim to adjust the weights of the network to minimize the difference between the predicted output and the actual output. In this literature review, we will compare and contrast these two methods, highlighting their strengths and weaknesses.[6] Backpropagation is a commonly used algorithm for training neural networks. The
basic idea behind back propagation is to calculate the error at the output layer and propagate it backward through the network, adjusting the weights of each neuron in the network to minimize the error. The back propagation algorithm is computationally efficient and can be used to train deep neural networks with many layers. However, back propagation has several limitations. One of the main limitations is that it requires the computation of the derivative of the activation function at each layer, which can be computationally expensive. Additionally, back propagation is prone to getting stuck in local minima, which can lead to suboptimal solutions.
Direct Feedback Alignment (DFA) is a newer method for training neural networks that does not rely on back propagation. Instead of using the gradient of the loss function to update the weights, DFA uses a fixed random matrix to propagate the error from the output layer back to the hidden layers.[1] This random matrix is fixed throughout the training process and is used to update the weights of the hidden layers. DFA has several advantages over back propagation. One of the main advantages is that it is less computationally expensive, as it does not require the computation of the derivative of the activation function at each layer. Additionally, DFA is less prone to getting stuck in local minima than back propagation. Several studies have compared the performance of back propagation and DFA on various tasks. A study by [3] found that DFA performed as well as back propagation on several benchmark datasets, including MNIST and CIFAR-10.
## 3 Proposed Approach
The approach involves four main steps: forward pass, error calculation, parameter update, and repetition. The forward pass computes the outputs of each layer using the current weights and biases and passing them into the activation function. The error calculation step computes the error at each layer using a layer-wise loss function that takes into account the local deviation between the predicted and target values of that layer. The parameter update step updates the weights and biases of each layer using the calculated error and a layer-wise learning rate that controls the magnitude of the update. Finally, the repetition step repeats the first three steps for multiple epochs or until convergence.
The proposed approach differs from the traditional backpropagation algorithm, which calculates the error at the output layer and backpropagates it to update the parameters of all the layers. The layer-wise approach allows for more localized and efficient updates that can potentially accelerate the training process and avoid the vanishing and exploding gradient problem. However, it requires careful tuning of the layer-wise loss function and learning rate, as well as a suitable initialization of the parameters.
Figure 1: Neural Network Life cycle
### Forward Pass
Compute the activations of each layer using the current weights and biases, starting from the input layer and propagating forward to the output layer.
**Algorithm:**
1. Initialize the input activation as the input data: \(h_{0}=x\)
2. For each layer \(l\) in the neural network: 1. Calculate the pre-activation value: \(z_{l}=W_{l}h_{l-1}+b_{l}\) 2. Calculate the activation value using an activation function: \(h_{l}=f(z_{l})\), where \(f\) is a non-linear function such as ReLU, sigmoid, or tanh.
3. Output the activations for each layer: \(h_{l}\)
The pre-activation value \(z_{l}\) is the weighted sum of the activations from the previous layer \(h_{l-1}\), plus the bias term \(b_{l}\). The activation function \(f\) transforms the pre-activation value \(z_{l}\) into the activation value \(h_{l}\), which is then passed on to the next layer. The choice of activation function depends on the specific task and architecture of the neural network, but common choices include ReLU, sigmoid, and tanh. The formulas for calculating the pre-activation value and activation value are:
\[\text{Pre-activation value: }z_{l}=W_{l}h_{l-1}+b_{l}\] \[\text{Activation value: }h_{l}=f(z_{l})\]
where \(W_{l}\) is the weight matrix for layer \(l\), \(b_{l}\) is the bias vector for layer \(l\), \(h_{l-1}\) is the activation vector from the previous layer, and \(f\) is the activation function.
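As a minimal sketch of this forward pass (the list-of-matrices parameter layout is an assumption), which also returns the quantities that the error-calculation stage below needs:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def forward_pass(x, weights, biases, activation=relu):
    # h_0 = x; for each layer l: z_l = W_l h_{l-1} + b_l, h_l = f(z_l).
    activations = [x]        # h_0, h_1, ..., h_L
    pre_activations = []     # z_1, ..., z_L
    for W_l, b_l in zip(weights, biases):
        z_l = W_l @ activations[-1] + b_l
        pre_activations.append(z_l)
        activations.append(activation(z_l))
    return pre_activations, activations
```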
### Error Calculation
Compute the error at each layer using a layer-wise loss function that takes into account the local deviation between the predicted and target values of that layer. The layer-wise loss function can be defined based on the specific task and architecture of the neural network, but it should capture the local errors that are relevant for updating the parameters of that layer.
**Algorithm:**
1. Initialize an empty list to store the error of each layer.
2. For each layer \(L\) in the neural network, do the following: 1. Compute the predicted values of layer \(L\) using its activations and the current weights and biases: \(Z_{L}=W_{L}\cdot A_{L-1}+b_{L}\), \(A_{L}=activation\_function(Z_{L})\) 2. Compute the target values of layer \(L\): \(T_{L}=targets\) of layer \(L\) 3. Compute the layer-wise loss function for layer \(L\): \(loss_{L}=layerwise\_loss\_function(A_{L},T_{L})\)
Figure 2: Forward Propagation
* Compute the error of layer \(L\) using the derivative of the activation function: \(\delta_{L}=derivative(Z_{L})\) (for the sigmoid activation, \(derivative(Z_{L})=A_{L}\cdot(1-A_{L})\)), and \(\delta_{errorL}=\delta_{L}\cdot transpose(W_{L+1})\)
* Add \(\delta_{errorL}\) to the list of errors.
* Return the list of errors.
In the above algorithm, \(W_{L}\) and \(b_{L}\) represent the weights and biases of layer \(L\), while \(A_{L-1}\) and \(A_{L}\) represent the activations of the previous layer and the current layer, respectively. \(Z_{L}\) is the weighted input to layer \(L\), \(T_{L}\) is the target values for layer \(L\), and \(\delta_{L}\) is the error signal for layer \(L\). The activation function and the layer-wise loss function are denoted as \(activation\_function\) and \(layerwise\_loss\_function\), respectively, and \(derivative\) denotes the derivative of the activation function. The \(transpose(W_{L+1})\) term in step 2d represents the transpose of the weights connecting layer \(L+1\) to layer \(L\).
### Parameter Update
Update the weights and biases of each layer using the calculated error and a layer-wise learning rate that controls the magnitude of the update. The update rule can be based on gradient descent or another optimization algorithm that can handle non-convex and high-dimensional spaces.
**Algorithm**
1. Initialize the weights and biases of each layer randomly or using a pre-defined scheme.
2. Set the learning rate, \(\alpha\), which controls the step size of the parameter update.
3. For each layer \(l\), compute the gradients of the layer-wise loss function with respect to the weights and biases, denoted by \(\frac{\partial L}{\partial w(l)}\) and \(\frac{\partial L}{\partial b(l)}\), respectively, using the error calculated in the previous stage.
4. Update the weights and biases of each layer using the gradients and the learning rate as follows: 1. Weight Update: \(w(l)=w(l)-\alpha\cdot\frac{\partial L}{\partial w(l)}\) 2. Bias Update: \(b(l)=b(l)-\alpha\cdot\frac{\partial L}{\partial b(l)}\) where \(w(l)\) and \(b(l)\) are the weights and biases of layer \(l\), respectively.
5. Repeat steps 3-4 for all layers in the neural network.
6. Repeat steps 1-5 for multiple epochs or until convergence, where the convergence criterion can be based on a validation set or other metrics that capture the generalization ability of the model.
7. The weight and bias updates are computed using the gradients of the layer-wise loss function with respect to the weights and biases, respectively. These gradients can be computed using the error calculated in the previous stage and the chain rule of calculus. For example, the weight update for a fully connected layer can be computed as \[w(l)=w(l)-\alpha\cdot\frac{\partial L}{\partial w(l)}\] where \(w(l)\) is the weight matrix of layer \(l\), \(\alpha\) is the learning rate, and \(\frac{\partial L}{\partial w(l)}\) is the gradient of the layer-wise loss function with respect to \(w(l)\), which can be computed as \[\frac{\partial L}{\partial w(l)}=\frac{\partial L}{\partial a(l)}\cdot\frac{\partial a(l)}{\partial w(l)}\] where \(\frac{\partial L}{\partial a(l)}\) is the gradient of the layer-wise loss function with respect to the activation of layer \(l\), and \(\frac{\partial a(l)}{\partial w(l)}\) is the gradient of the activation of layer \(l\) with respect to the weights of layer \(l\). A sketch of this layer-wise update is given after this list.
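Putting the error-calculation and parameter-update stages together, a minimal sketch could look as follows, assuming sigmoid activations (so `forward_pass` above would be called with a sigmoid activation) and given per-layer targets \(T_{l}\); how those local targets are produced is an assumption left open here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def layerwise_update(weights, biases, activations, layer_targets, lr):
    # weights[l]: (n_l, n_{l-1}) matrix; activations is the list returned by the
    # forward pass (activations[0] is the input x); layer_targets[l] is T_l.
    for l in range(len(weights)):
        A_prev = activations[l]
        A_l = activations[l + 1]
        local_err = A_l - layer_targets[l]            # derivative of a squared layer-wise loss
        delta_l = local_err * A_l * (1.0 - A_l)       # sigmoid derivative A_l * (1 - A_l)
        weights[l] -= lr * np.outer(delta_l, A_prev)  # w(l) <- w(l) - alpha * dL/dw(l)
        biases[l] -= lr * delta_l                     # b(l) <- b(l) - alpha * dL/db(l)
```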
### Repeat
Repeat steps 1-3 for multiple epochs or until convergence, where the convergence criterion can be based on a validation set or other metrics that capture the generalization ability of the model. The repeat stage of the proposed methodology involves iterating through the forward pass, error calculation, and parameter update steps for multiple epochs or until convergence. The structured algorithm for the repeat stage is as follows:
**Repeat until convergence or a maximum number of epochs:**
1. Perform a forward pass through the network to compute the activations of each layer using the current weights and biases.
2. Compute the error at each layer using the layer-wise loss function based on the local deviation between the predicted and target values of that layer.
3. Update the weights and biases of each layer using the calculated error and the layer-wise learning rate according to the update rule. The update rule can be based on gradient descent or another optimization algorithm that can handle non-convex and high-dimensional spaces.
4. Evaluate the performance of the model on a validation set or other metrics that capture the generalization ability of the model.
5. If the performance has improved, save the current set of weights and biases as the best model so far.
6. If the convergence criterion is met (e.g., the validation error has stopped decreasing), terminate the training and return the best model. Otherwise, continue to the next epoch.
The formulas for the parameter update stage can be based on various optimization algorithms, such as stochastic gradient descent (SGD) or Adam. For example, the SGD update rule for the weights and biases of a single layer can be expressed as:
\[W_{t+1} =W_{t}-\eta\cdot\frac{\partial L}{\partial W_{t}}\] \[b_{t+1} =b_{t}-\eta\cdot\frac{\partial L}{\partial b_{t}}\]
where \(W_{t}\) and \(b_{t}\) are the current weights and biases, \(\frac{\partial L}{\partial W_{t}}\) and \(\frac{\partial L}{\partial b_{t}}\) are the gradients of the layer-wise loss function with respect to the weights and biases, and \(\eta\) is the layer-wise learning rate that controls the magnitude of the update. The update rule can also include momentum or regularization terms to improve the stability and generalization of the model.
## 4 Experiment
To evaluate the effectiveness of the proposed methodology, we conducted experiments on two benchmark datasets: MNIST and CIFAR-10. For each dataset, we compared our approach with two baseline methods: standard backpropagation and direct feedback alignment. We implemented the proposed approach and the baseline methods using Python and TensorFlow.
### Dataset preparation
The MNIST and CIFAR-10 datasets are widely used benchmark datasets in the field of computer vision. These datasets contain images of handwritten digits (in the case of MNIST) and objects (in the case of CIFAR-10) and are used to train and evaluate machine learning models for image classification tasks. Once the datasets are downloaded, the next step is to split them into training and test sets. This is done to evaluate the performance of the model on unseen data. Typically, a split of 80% training and 20% test is used, meaning that 80% of the data is used for training the model and 20% is used for evaluating its performance.
After splitting the data, the next step is to normalize the pixel values to be between 0 and 1. Normalization is a technique used to rescale the values of input features to fall within a smaller and consistent range. In the case of image datasets, normalization is typically done to ensure that all pixel values are in the same range, which helps the machine learning model to learn more effectively.The pixel values in the MNIST and CIFAR-10 datasets are typically in the range of 0 to 255, where 0 represents the minimum intensity (black) and 255 represents the maximum intensity (white). To normalize the pixel values to be between 0 and 1, we divide each pixel value by the maximum pixel value in the dataset, which is 255.
\[\text{normalized}=\frac{\text{pixel value}}{255}\]
The resulting normalized pixel values will fall within the range of 0 to 1. Normalization is an important step in image processing and computer vision tasks because it helps to make the data more consistent and easier to work with. Normalization can also help to prevent certain issues that can arise during training, such as vanishing gradients, where the gradients become very small and hinder the training process.
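For MNIST, the loading and normalization step might look like this in Keras; note that `load_data()` returns the library's standard 60,000/10,000 train/test split rather than an explicit 80/20 split.

```python
import tensorflow as tf

# Load MNIST and normalize the pixel values to [0, 1] by dividing by 255.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
x_train = x_train[..., None]   # add the channel axis expected by Conv2D layers
x_test = x_test[..., None]
```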
### Model Architecture
Convolutional Neural Networks (CNNs) have been widely used in computer vision tasks, particularly in image classification tasks. In this context, CNNs work by using a series of layers to extract features from images, which are then used to make predictions about their class labels. In this research paper, we propose using a CNN architecture with the following layers: a convolution layer with 32 filters, a kernel size of 3x3 pixels, and ReLU activation, followed by a second convolution layer with 64 filters, a kernel size of 3x3 pixels, and ReLU activation. These two convolutional layers are followed by a max pooling layer with a pool size of 2x2 pixels. The output of the pooling layer is then flattened and fed into a dense layer with 512 units and ReLU activation. Finally, an output layer with 10 units and softmax activation is used to produce a probability distribution over the possible class labels.
The first convolutional layer applies 32 filters to the input image, which helps the network to learn low-level features such as edges and corners. The second convolutional layer applies 64 filters to the output of the first layer, which enables the network to learn more complex and high-level features. The ReLU activation function is used in both convolutional layers to introduce non-linearity and improve the model's ability to learn complex features. After the convolutional layers, a max pooling layer is used to downsample the feature maps produced by the convolutional layers. This reduces the spatial dimension of the feature maps, which helps to reduce the number of parameters and makes the model more efficient. The flattened output of the max pooling layer is then fed into a dense layer with 512 units and ReLU activation, which helps the model to learn even more complex patterns in the feature maps. Finally, an output layer with 10 units and softmax activation is used to produce a probability distribution over the possible class labels. The softmax activation function is used to ensure that the probabilities add up to one, which makes it easier to interpret the outputs of the model.
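A Keras sketch of this architecture might read as follows; the Adam optimizer and learning rate anticipate the hyperparameters of the next subsection, integer class labels are assumed (hence the sparse cross-entropy loss), and for CIFAR-10 the input shape would be (32, 32, 3) instead.

```python
from tensorflow import keras
from tensorflow.keras import layers

# The architecture described above, written for 28x28x1 MNIST inputs.
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```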
### Experiment setup
In machine learning, hyperparameters are parameters that need to be set before training the model, and they control how the model learns. Hyperparameters can significantly impact the performance of the model, and selecting the appropriate hyperparameters is an essential part of the training process.The Adam optimizer is a popular optimization algorithm that has been shown to be effective in deep learning applications. It uses adaptive learning rates and momentum to speed up convergence during training. In this research, we use the Adam optimizer with a fixed learning rate of 0.001 for both models. A learning rate of 0.001 is a common choice for many deep learning applications and has been found to work well in practice.
The batch size is a hyperparameter that determines the number of training examples used to compute the gradients of the loss function during each iteration of training. A batch size of 128 is a moderate choice that balances the tradeoff between faster convergence and efficient use of memory. Using larger batch sizes can lead to faster convergence, but it also requires more memory to store the gradients, while using smaller batch sizes can lead to slower convergence due to noisy gradient estimates. The number of epochs is a hyperparameter that determines the number of times the entire training dataset is presented to the model during training. In this research, we use 100 epochs for both models.
Figure 3: Architecture of the CNN model
The number of epochs can impact the final performance of the model, as training for too few epochs can result in underfitting, while training for too many epochs can result in overfitting.
### Comparison metrics
In the field of machine learning, evaluation metrics are used to measure the performance of a trained model. One common evaluation metric for classification tasks is accuracy, which measures the percentage of correctly classified examples in the test set. However, accuracy alone may not provide a complete picture of the model's performance, especially when dealing with imbalanced datasets. In addition to accuracy, precision, recall, and F1 score are also commonly used metrics to evaluate a model's performance on a classification task. Precision measures the proportion of true positives (correctly classified positive examples) among all predicted positives, while recall measures the proportion of true positives among all actual positives. The F1 score is the harmonic mean of precision and recall and provides a single score that balances both metrics.
To evaluate the performance of the trained models in this research, we report their accuracy, precision, recall, and F1 score on the test set. These metrics provide a comprehensive picture of the model's performance and can help us determine whether the model is performing well on all classes or if it's biased towards one or more classes. We compute these metrics by comparing the predicted labels of the models to the true labels in the test set. For example, accuracy is computed as the number of correctly classified examples divided by the total number of examples in the test set. Precision, recall, and F1 score are computed using the true positives, false positives, and false negatives for each class.
By reporting these metrics, we can compare the performance of the two trained models and determine which one performs better on the classification task. Additionally, we can identify areas where the models may be performing poorly and explore ways to improve their performance.
### Comparison models
#### 4.5.1 Standard Backpropagation
The standard backpropagation algorithm is the most common training method used in deep learning to update the weights and biases of a neural network. It is an algorithm that computes the gradient of the loss function with respect to the weights and biases of the network, and then uses this gradient to update the weights and biases to minimize the loss. The standard backpropagation algorithm consists of two phases: forward propagation and backward propagation. In the forward propagation phase, the input data is fed into the network, and the activations and outputs of each layer are computed. In the backward propagation phase, the error between the predicted outputs and the true outputs is propagated backward through the network to compute the gradient of the loss function with respect to the weights and biases.
**Algorithm:**
1. Initialize the weights and biases of the network with random values.
2. Feed the input data into the network to compute the activations and outputs of each layer.
3. Compute the error between the predicted outputs and the true outputs.
4. Compute the gradient of the loss function with respect to the weights and biases of the network using the chain rule of calculus.
5. Use the gradient to update the weights and biases of the network, typically using an optimization algorithm such as gradient descent or its variants.
6. Repeat steps 2-5 for a certain number of epochs or until the desired accuracy is achieved.
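The following is a minimal NumPy sketch of steps 1-5 for a one-hidden-layer network; the layer sizes, tanh activation, and squared-error loss are illustrative choices and not the models used in this research.

```python
# Minimal NumPy sketch of standard backpropagation (steps 1-6 above).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 20))          # a batch of inputs
t = rng.normal(size=(32, 5))           # target outputs
W1, b1 = rng.normal(size=(20, 64)) * 0.1, np.zeros(64)   # step 1: init
W2, b2 = rng.normal(size=(64, 5)) * 0.1, np.zeros(5)
lr = 0.01

for epoch in range(100):               # step 6: repeat
    h = np.tanh(x @ W1 + b1)           # step 2: forward propagation
    y = h @ W2 + b2
    e = y - t                          # step 3: output error (MSE gradient)
    # step 4: chain rule -- propagate the error backward through W2
    dW2, db2 = h.T @ e, e.sum(0)
    dh = (e @ W2.T) * (1 - h**2)       # tanh derivative
    dW1, db1 = x.T @ dh, dh.sum(0)
    # step 5: gradient descent update
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
```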
#### 4.5.2 Direct Feedback Alignment
Direct feedback alignment (DFA) is an alternative training method that avoids propagating the error backward through the network layer by layer. Instead of using the transposed forward weights and the chain rule, DFA projects the error measured at the output layer directly to each hidden layer through a fixed random feedback matrix, so each layer's update can be computed from the output error and its own local activations.
**Algorithm:**
1. Initialize the weights and biases of the network with random values, and fix a random feedback matrix for each hidden layer.
2. Feed the input data into the network to compute the activations and outputs of each layer.
3. Compute the error between the predicted outputs and the true outputs.
4. Project the output error directly to each hidden layer through that layer's fixed random feedback matrix, and combine it with the layer's local activation derivative to obtain the update direction for its weights and biases.
5. Use these per-layer signals to update the weights and biases of the network, typically using an optimization algorithm such as gradient descent or its variants.
6. Repeat steps 2-5 for a certain number of epochs or until the desired accuracy is achieved.
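For comparison with the backpropagation sketch above, the following NumPy sketch trains the same illustrative one-hidden-layer network with DFA; the only substantive change is that the hidden-layer signal is computed through a fixed random matrix `B` instead of `W2.T`.

```python
# Minimal NumPy sketch of direct feedback alignment (DFA).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 20))
t = rng.normal(size=(32, 5))
W1, b1 = rng.normal(size=(20, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 5)) * 0.1, np.zeros(5)
B = rng.normal(size=(5, 64))           # fixed random feedback matrix (step 1)
lr = 0.01

for epoch in range(100):
    h = np.tanh(x @ W1 + b1)           # forward pass (step 2)
    y = h @ W2 + b2
    e = y - t                          # output error (step 3)
    dh = (e @ B) * (1 - h**2)          # step 4: error projected through B
    W2 -= lr * (h.T @ e); b2 -= lr * e.sum(0)    # step 5
    W1 -= lr * (x.T @ dh); b1 -= lr * dh.sum(0)
```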
### Experiment procedure
One critical aspect of this methodology is to train each model using the same dataset and hyperparameters and to evaluate the models on a test set using comparison metrics. By training each model on the same dataset, we ensure that all models have access to the same information and that any observed differences in performance are due to the underlying design choices of the models, rather than differences in the training process. Similarly, using the same hyperparameters ensures that each model is optimized using the same criteria and that we are comparing models with equivalent levels of complexity. Once the models are trained, they should be evaluated on a test set using comparison metrics such as accuracy, precision, recall, or F1 score. These metrics provide a quantitative measure of how well each model performs on the same task, and by using the same test set and comparison metrics, we can compare the models in a fair and consistent manner.
## 5 Result
Our experimental results demonstrate that the proposed Instant parameter update approach can lead to improved performance of neural networks on benchmark datasets.
On the MNIST dataset (Table 1), the proposed method achieves the highest accuracy of 97.84%, with the standard backpropagation method coming in second at 96.91%. The direct feedback alignment method achieves the lowest accuracy of 94.95%. In terms of precision and recall, the proposed method outperforms the other two methods by a significant margin, achieving scores of 0.9885 and 0.9883, respectively. The standard backpropagation method also performs relatively well, with precision and recall of 0.9624 and 0.9626, while the direct feedback alignment method has a noticeably lower precision of 0.9492 and recall of 0.9412.
Overall, the results suggest that the proposed method is the best option for this particular task, based on the high accuracy, precision, recall, and F1 score. The standard backpropagation method also performs relatively well, but the direct feedback alignment method lags behind in all metrics.
On the CIFAR-10 dataset (Table 2), all three methods achieve relatively high accuracy, with the proposed method achieving the highest accuracy of 96.58%, together with the highest precision and recall scores of 0.9609 and 0.9658, respectively.
The standard backpropagation method achieves a slightly lower precision score of 0.9483, but its recall score of 0.9522 is similar to the other two methods. The direct feedback alignment method achieves the lowest precision score of 0.9385, but its recall score of 0.9422 is still relatively high.
Overall, the results suggest that the proposed method may be the best option for this particular task, based on the highest accuracy, precision, recall, and F1 score. However, the differences in performance between the three methods are relatively small, so further evaluation may be necessary to determine the best option for different scenarios or datasets.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Method & Accuracy & Precision & Recall & F1 Score \\ \hline Standard Backpropagation & 96.91\% & 0.9624 & 0.9626 & 0.9687 \\ Direct Feedback Alignment & 94.95\% & 0.9492 & 0.9412 & 0.9432 \\ Proposed Method & 97.84\% & 0.9885 & 0.9883 & 0.9893 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Experimental results obtained from the MNIST dataset
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Method & Accuracy & Precision & Recall & F1 Score \\ \hline Standard Backpropagation & 95.22\% & 0.9483 & 0.9522 & 0.9469 \\ Direct Feedback Alignment & 94.22\% & 0.9385 & 0.9422 & 0.9367 \\ Proposed Method & 96.58\% & 0.9609 & 0.9658 & 0.9595 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimental results obtained from CIFAR-10 dataset
## 6 Conclusion
In this paper, we proposed a novel approach for neural network training that updates the trainable parameters at each hidden layer by calculating the error at that layer. Our experimental results demonstrate that this approach can lead to improved performance of neural networks on benchmark datasets, outperforming traditional backpropagation and existing algorithms.
Future work can explore the application of this approach to more complex architectures and tasks, such as natural language processing and image recognition. Additionally, the performance of our approach can be compared with other gradient-based optimization methods, such as second-order methods and stochastic gradient descent with momentum. Overall, the proposed approach has the potential to improve the training of neural networks and advance the field of deep learning.
|
2301.13330 | Efficient and Effective Methods for Mixed Precision Neural Network
Quantization for Faster, Energy-efficient Inference | For efficient neural network inference, it is desirable to achieve
state-of-the-art accuracy with the simplest networks requiring the least
computation, memory, and power. Quantizing networks to lower precision is a
powerful technique for simplifying networks. As each layer of a network may
have different sensitivity to quantization, mixed precision quantization
methods selectively tune the precision of individual layers to achieve a
minimum drop in task performance (e.g., accuracy). To estimate the impact of
layer precision choice on task performance, two methods are introduced: i)
Entropy Approximation Guided Layer selection (EAGL) is fast and uses the
entropy of the weight distribution, and ii) Accuracy-aware Layer Precision
Selection (ALPS) is straightforward and relies on single epoch fine-tuning
after layer precision reduction. Using EAGL and ALPS for layer precision
selection, full-precision accuracy is recovered with a mix of 4-bit and 2-bit
layers for ResNet-50, ResNet-101 and BERT-base transformer networks,
demonstrating enhanced performance across the entire accuracy-throughput
frontier. The techniques demonstrate better performance than existing
techniques in several commensurate comparisons. Notably, this is accomplished
with significantly lesser computational time required to reach a solution. | Deepika Bablani, Jeffrey L. Mckinstry, Steven K. Esser, Rathinakumar Appuswamy, Dharmendra S. Modha | 2023-01-30T23:26:33Z | http://arxiv.org/abs/2301.13330v2 | # Efficient and Effective Methods for Mixed Precision Neural Network
###### Abstract
For effective and efficient deep neural network inference, it is desirable to achieve state-of-the-art accuracy with the simplest networks requiring the least computation, memory, and power. Quantizing networks to lower precision is a powerful technique for simplifying networks. It is generally desirable to quantize as aggressively as possible without incurring significant accuracy degradation. As each layer of a network may have different sensitivity to quantization, mixed precision quantization methods selectively tune the precision of individual layers of a network to achieve a minimum drop in task performance (e.g., accuracy). To estimate the impact of layer precision choice on task performance two methods are introduced: i) Entropy Approximation Guided Layer selection (EAGL) is fast and uses the entropy of the weight distribution, and ii) Accuracy-aware Layer Precision Selection (ALPS) is straightforward and relies on single epoch fine-tuning after layer precision reduction. Using EAGL and ALPS for layer precision selection, full-precision accuracy is recovered with a mix of 4-bit and 2-bit layers for ResNet-50 and ResNet-101 classification networks, demonstrating improved performance across the entire accuracy-throughput frontier, and equivalent performance for the PSPNet segmentation network in our own commensurate comparison over leading mixed precision layer selection techniques, while requiring orders of magnitude less compute time to reach a solution.
Machine Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep 
Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep 
Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, 
Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep 
Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Deep Learning, Learning
Mixed precision quantization is challenging as the search space is prohibitively large. For a network with \(L\) layers and \(n\) different precision choices per layer, \(n^{L}\) different network configurations exist, making brute force precision selection for each layer impractical. Various methods have been proposed in the literature to mitigate this by using metrics that try to estimate the impact of quantization of individual layers on overall task performance (Dong et al., 2019, 2020; Yao et al., 2021; Chen et al., 2021; Liu et al., 2021b), but they often target different criteria to optimize for, e.g., memory footprint or throughput, and fine-tune the resulting network with diverse algorithms. Hence, establishing a principled, direct comparison across quantization metrics based on reported results is not straightforward. The computational cost of many of these methods, combined with their reported inability to recover full precision accuracy at lower precisions (see Table 1 for results on the ResNet-50 benchmark for image classification (Deng et al., 2009)), underscores the need for i) new algorithms to solve this problem, and ii) a unifying framework for commensurate comparison.
Here, we introduce _Accuracy-aware Layer Precision Selection (ALPS)_ and _Entropy Approximation Guided Layer selection (EAGL)_, two efficient and effective new methods for choosing the bit-width configuration for network layers.
The most straightforward way to estimate the contribution of each layer to overall network accuracy, assuming independent contributions, is to lower the precision of each layer one at a time, fine-tune briefly, e.g. for one epoch, and record the accuracy difference as \(G_{l}\), the accuracy gained when keeping layer \(l\) at a higher precision. Intuitively, the network accuracy can be optimized by keeping the layers which show the largest gain in accuracy, \(G_{l}\) at higher precision, and choosing the layers with low \(G_{l}\) for further quantization. This method is referred to as Accuracy-aware Layer Precision Selection (ALPS). Given a fixed dataset, the computational complexity is proportional to the number of layers. For example, the time taken for ResNet-50 is approximately the same as training the network for 50 epochs. ALPS outperforms HAWQv3, a state-of-the-art mixed precision layer selection technique, while being relatively efficient to compute.
EAGL, on the other hand, is built upon the insight that the complexity required by a given layer to achieve good performance can be estimated using the entropy of the empirical distribution of its parameters after training, assuming appropriate regularization. This measure captures the relative frequency of trained layer parameters falling in each quantized bin. If many quantized weights fall in a few of the available bins, the layer is a good candidate for further quantization; whereas if the distribution of quantized weights is more uniform across bins, further compression by quantization can be detrimental to task performance. So, the gain in accuracy, \(G_{l}\), of keeping a layer at higher precision is expected to be directly proportional to the entropy of the empirical weight distribution post training. Our formulation relies on empirical estimates for the marginal distributions of layer parameters, and thus, given a trained network, EAGL is a computationally inexpensive technique that does not require access to the training dataset, and can perform as well as the more direct accuracy aware method described above.
Once the layer-wise accuracy gain estimates described above (obtained using ALPS, EAGL, or another mixed precision technique from the literature), the computational cost of each
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multirow{2}{*}{Method} & Top 1 & \multirow{2}{*}{Compression Ratio} & \multirow{2}{*}{BOPS} \\ & Accuracy & & \\ & Drop & & \\ \hline ALPS & **-0.22** & & \\ (Ours) & **(76.13-76.35)** & & \\ \hline EAGL & **-0.17** & & \\ (Ours) & **(76.13-76.30)** & & \\ \hline HAQ & \begin{tabular}{c} 0.67 \\ (76.15 - 75.48) \\ \end{tabular} & 10.57\(\times\) & 520 \\ \hline \multirow{2}{*}{\begin{tabular}{c} Chen et al. \\ \end{tabular} } & \begin{tabular}{c} 0.85 \\ (76.13-75.28) \\ \end{tabular} & 12.24\(\times\) & - \\ \hline HAWQ-v3 & \begin{tabular}{c} 0.99 \\ (77.72-76.73) \\ \end{tabular} & - & 154 \\ \hline HAWQ-v2 & \begin{tabular}{c} 1.63 \\ (77.39-75.76) \\ \end{tabular} & 12.24\(\times\) & - \\ \hline HAWQ & \begin{tabular}{c} 1.91 \\ (77.39-75.48) \\ \end{tabular} & 12.28\(\times\) & - \\ \hline AutoQ & \begin{tabular}{c} 2.29 \\ (74.80-72.51) \\ \end{tabular} & 10.26\(\times\) & - \\ \hline Sun et al. & \begin{tabular}{c} 0.93 \\ (77.15-76.22) \\ \end{tabular} & 8.00\(\times\) & - \\ \hline Yu et al. &
\begin{tabular}{c} 1.85 \\ (77.56-75.71) \\ \end{tabular} & 11.03\(\times\) & - \\ \hline \hline \end{tabular}
\end{table}
Table 1: **EAGL and ALPS demonstrate state-of-the-art results for classification (Deng et al., 2009) with ResNet-50 compared to techniques from the literature**. Top-1 Accuracy Drop is the accuracy gap between the full precision (FP32) baseline and the mixed precision (MP) network. Negative values indicate that the MP network performs better than the FP32 network. Numbers in brackets indicate FP32 and MP scores, respectively. Compression Ratio is model compression w.r.t. FP32 weights. BOPS is Giga-Bit Operations (Yao et al., 2021). HAQ keeps activations at FP32. “-” indicates data not published. FP32 accuracy is different for each method as authors use different checkpoints as the starting point. This highlights the need for a commensurate comparison framework, as an apples-to-apples comparison is challenging.
layer, and a maximum computational budget are determined, the next step is to select the precision of each layer from two or more precision choices in order to maximize the estimated task performance of the resulting network without exceeding the computational budget. In the experiments for this paper, a choice is made between two precisions, 4-bit and 2-bit, for each layer. Both weights and activations in a layer are quantized to the chosen precision. With the assumption that the sum of the layer-wise accuracy contributions reflects the total task accuracy of the network1, this problem of precision selection of the layers of a model is formulated formally as follows:
Footnote 1: See Appendix B for experiments demonstrating the additive nature of layer-wise accuracy contributions
Maximize \(\sum_{l=1}^{L}G_{l}P_{l}\), i.e., the total accuracy gained while keeping \(\sum_{l=1}^{L}C_{l}\leq B\), where \(L\) is the number of layers, \(G_{l}\) is the accuracy gain per layer, \(P_{l}\) is 1 if the precision of layer \(l\) is kept at the higher precision, and 0 otherwise, \(C_{l}\) is the computational cost of layer \(l\) in cycles, and \(B\) is the overall computational budget in cycles.
We solve this optimization problem efficiently by formulating it as a 0-1 Integer Knapsack Problem from combinatorial optimization (Martello and Toth, 1990). Finally, the layer precision choices from the optimization step are used to create a mixed precision network which is fine-tuned until convergence to get the final task performance. This process is depicted in Figure 1.
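As an illustration of the optimization step, the following sketch solves the binary precision-selection problem with a standard 0-1 knapsack dynamic program. The reformulation (every layer pays its 2-bit cost unconditionally, and "selecting" a layer spends the remaining budget on its 4-bit upgrade), the helper name, the toy numbers, and the assumption that costs are expressed in small integer units are all illustrative, not taken from the paper.

```python
# Hedged sketch of layer precision selection as a 0-1 knapsack.

def select_layers(gains, extra_costs, spare_budget):
    """gains[l]: accuracy gained by keeping layer l at 4-bit (G_l).
    extra_costs[l]: cost of layer l at 4-bit minus its cost at 2-bit.
    spare_budget: total budget B minus the cost of an all-2-bit network.
    Returns the set of layers to keep at 4-bit."""
    L = len(gains)
    # best[c] = (total gain, chosen layers) achievable with capacity c.
    best = [(0.0, frozenset())] * (spare_budget + 1)
    for l in range(L):
        # Descending capacity loop ensures each layer is chosen at most once.
        for c in range(spare_budget, extra_costs[l] - 1, -1):
            cand = best[c - extra_costs[l]][0] + gains[l]
            if cand > best[c][0]:
                best[c] = (cand, best[c - extra_costs[l]][1] | {l})
    return best[spare_budget][1]

# Toy example: 4 layers, spare budget of 5 cost units.
print(select_layers([3.0, 1.0, 2.5, 2.0], [4, 1, 3, 2], 5))
```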
Using this evaluation framework, it is demonstrated that ALPS and EAGL outperform leading mixed precision layer selection techniques like HAWQ-v3 (Yao et al., 2021) and achieve state-of-the-art accuracy using ResNet-50 and ResNet-101 networks (He et al., 2016) for image classification (Deng et al., 2009) and PSPNet (Zhao et al., 2017) for semantic segmentation (Cordts et al., 2016) using 2-bit and 4-bit layer mixed precision networks.
The main contributions of this paper are (i) two elegant measures for estimating the impact of layer precision on task performance that achieve state-of-the-art results for mixed precision neural networks, and (ii) a framework for fair comparison across varying approaches that decomposes the bit-width configuration selection problem into three steps - accuracy gain estimation, layer-wise precision selection for a given budget using 0-1 Integer Knapsack optimization algorithm, and fine-tuning of resulting mixed precision network.
## 2 Related Work
Methods for optimizing the layer-wise precision of deep neural networks can be divided into several categories. Network Architecture Search methods can naturally incorporate precision as an additional variable but are generally computationally expensive (Wu et al., 2018; Chen et al., 2018; Yu et al., 2020; Sun et al., 2021). HAQ (Wang et al., 2019) and AutoQ (Lou et al., 2019), use reinforcement learning (RL) in the search process. A recent technique (Liu et al., 2021) uses RL in a more efficient manner and employs a separate network trained with RL which learns the distribution of precisions which work well as the network being optimized is trained with random, layer-wise precisions. There are also methods which learn the precision of each layer using stochastic gradient descent (SGD) with a modified loss function (Nikolic et al., 2020; Yang and Jin, 2020). Hessian Aware methods, like HAWQ (Dong et al., 2019) and its successors, HAWQ-v2 (Dong et al., 2020), HAWQ-v3 (Yao et al., 2021), and Chen et al. (Chen et al., 2021) evaluate the curvature of the loss function for each layer to predict, given the quantization error of that layer, how much quantization will increase the loss. HAWQ-v3 achieves state-of-the-art in network compression and throughput. A recent accuracy aware method (Liu et al., 2021) tries to estimate the accuracy gain for each layer using imprinting, which uses one epoch of fine-tuning per layer. Although the authors use insights from few-shot learning to speed up the layer-wise training, a drawback of this method is that it requires network surgery, removing downstream layers and replacing them with a single linear layer, thus evaluating each layer out of context.
## 3 Methods
Mixed precision quantization is based on the intuition that different layers in a network have different sensitivity to quantization and hence some layers are better candidates for more aggressive quantization than others. In particular, we are interested in solving the following problem.
Problem formulationGiven a choice between two precisions, \(b_{1}\) and \(b_{2}\) for each of the layers in a network, find the bit-width configuration for all the layers that achieves the best task performance while remaining within a given computational budget for the network.
The budget can be any target metric evaluating performance on a hardware inference platform, for instance, the memory footprint, latency, power, or a combination of multiple metrics. While the problem is formulated as a binary choice problem above, where the precision of a layer can be one of two precision choices, \(b_{1}\) or \(b_{2}\), this can be extended to the case of more than two precision choices as well. A model is considered for which a layer's computational cost is linear in its bit-width, but the techniques are equally applicable to other cost models such as quadratic.
The evaluation methodology, outlined in the next subsection, provides a unifying framework for commensurate
comparison between various popular mixed precision techniques from the literature, as well as the novel layer selection methods introduced in this work. As emphasized earlier, a unifying evaluation framework is needed: the methods proposed in the literature optimize different objectives and lack an apples-to-apples comparison, each with its own merits and limitations, which makes it very challenging for a practitioner to select the candidate most suitable for their problem of interest. It is hoped that the framework for comparison proposed here serves as a benchmark for the evaluation of future methods as well.
### Evaluation Methodology
The proposed evaluation framework (Figure 1) is parameterized by a task, a network architecture, a target computational budget, and a fine-tuning recipe. It takes as input a per layer scalar measure of the expected relative task performance improvement when layer \(l\) is quantized at precision \(b_{1}\) instead of a lower precision \(b_{2}\), which is referred to as accuracy gain, \(G_{l}\). As an example, this is the accuracy gained by quantizing a layer in a classification network at 4-bit instead of 2-bit. The higher the gain, the more desirable it is to keep this layer precision at 4-bit instead of 2-bit. Different methods for mixed precision quantization propose different ways of quantifying this cost per layer. This is the key distinction between techniques evaluated in this framework.
Once the layer-wise gain estimates are obtained, the optimization problem described in the introduction is then solved in order to determine the precision setting for each layer. The details of the optimization step are described in the Appendix A. With the choice of precision for each layer from the optimizer, a mixed precision network is created with this configuration, which is then fine-tuned, to get the task performance of the mixed precision network chosen by the technique. The resulting task performance is the final output of the evaluation.
For comparison across different mixed precision techniques, each method provides an accuracy gain, \(G_{l}\) for each layer of a network, and then follows the above standardized evaluation protocol. For all of the experiments in this paper, 4-bit and 2-bit mixed precision choices are used in the problem setup because either previous published works have already demonstrated full precision accuracy for the networks under consideration at 4-bit (McKinstry et al., 2019; Esser et al., 2020), or fine-tuning with the proposed training recipe was sufficient to recover the task performance of a full precision network using a 4-bit network.
In all experiments, by sampling budgets in between the throughput of a 4-bit and a 2-bit network, multiple points along the throughput-accuracy frontiers were sampled for each technique, thereby demonstrating a very compelling test framework to compare different techniques across a wide variety of budgets in an unbiased manner.
The next two subsections describe the two methods introduced here for the problem of mixed precision layer selection described above - specifically the key sub-problem of accurately estimating accuracy gains, \(G_{l}\) for each network layer \(l\) - followed by implementation details specific to the evaluation framework. In the following section, the techniques are compared with state-of-the-art mixed precision techniques from literature and demonstrate superior performance, not only for a fixed budget, but along the entire frontiers.
### Accuracy-aware Layer Precision Selection for quantization (ALPS)
Most published approaches that propose solutions to the mixed precision layer selection problem are indirect proxies for what is really needed - an estimate for the accuracy contribution of each layer _after_ quantization aware training (Yao et al., 2021). In (Liu et al., 2021), the authors do measure the accuracy of a fine-tuned network, but indirectly using a second network. Here, we propose an accuracy aware method which directly measures how well each layer can be fine-tuned. The intuition is that layers that have the highest accuracy gain when quantized to a higher precision \(b_{1}\) (here, 4-bit) instead of a lower precision \(b_{2}\) (here, 2-bit) are good candidates to be kept at \(b_{1}\) in a mixed precision network. This approach is an intuitive, simpler variant of (Liu et al., 2021).
Algorithm 1 describes ALPS. For a given network, the default training parameters used for training the higher precision (here, 4-bit) model are used. Starting from a fully
trained 4-bit model, the precision of each convolution layer is lowered from 4-bit to 2-bit, one at a time, and the resulting network is fine-tuned (with all layers at 4-bit and a single layer at 2-bit) for the equivalent of one training epoch (1 epoch for ResNet-50 and ResNet-101 and 250 iterations for PSPNet). The average training set performance (accuracy and loss for the ResNet and PSPNet models, respectively) over the training period is then used to estimate the accuracy gained, \(G_{l}\), if the given layer remained at 4-bit. The assumption made here, that total network accuracy is the sum of the layer-wise accuracies, is justified theoretically (Zhou et al., 2018), and empirically in the Appendix B. To the extent that this linearity holds, ALPS avoids the need to try all combinations of layer precision settings. These measures are provided to the optimizer, along with the desired computational cost target, or budget, in order to choose a list of precisions for the network layers which maximizes the overall task performance of the network while satisfying the budget constraint.
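A hedged sketch of this gain-estimation loop is shown below; the expensive one-epoch fine-tuning step is abstracted behind a callable, so the function names and the toy accuracies are illustrative rather than an actual training API.

```python
# Hedged sketch of ALPS gain estimation (Algorithm 1).

def alps_gains(layers, acc_with_layer_at_2bit, reference_acc):
    """For each layer l, drop only l from 4-bit to 2-bit, fine-tune for
    one epoch, and record the accuracy lost relative to the all-4-bit
    reference. That loss is the gain G_l of keeping l at 4-bit."""
    return {l: reference_acc - acc_with_layer_at_2bit(l) for l in layers}

# Toy usage with a fake evaluator standing in for one-epoch fine-tuning.
fake_acc = {0: 0.74, 1: 0.76, 2: 0.70}
gains = alps_gains([0, 1, 2], fake_acc.__getitem__, reference_acc=0.765)
print(gains)   # layers with large G_l should stay at 4-bit
```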
### Entropy Approximation Guided Layer selection for quantization (EAGL)
EAGL uses empirical entropy to find a solution for the problem formulation above. The key insight in the development of this metric is that the entropy of the empirical distribution (Eq. (1)) of parameters of a layer in a network represents a measure of the required complexity to achieve the desired performance. With this insight, we develop an entropy-based metric for quantifying the advantage of keeping a layer at a higher precision.
Consider a layer \(l\) in a network at precision \(b\). Post quantization, a parameter in \(l\) can have a value equal to one of \(2^{b}\) distinct values, i.e., the weight can occupy one of \(2^{b}\) bins. When parameters of a layer are spread out evenly in the available bins, and the distribution approaches a uniform distribution, its entropy is high. However, if many of the weights are concentrated in a small subset of bins, the entropy of the distribution of quantized weights is low. Intuitively, a layer with a lower entropy is a better candidate for further quantization as the representation of the transformation learned by the layer parameters can be compressed further more easily. This is illustrated in Fig. 2 using 3 convolutional layers from a trained 4-bit ResNet-101 network.
Let \(\hat{p}^{b}\) be the empirical distribution of the layer's quantized parameters at precision \(b\). EAGL is based on the hypothesis that \(H(\hat{p}^{b})\)2 provides a good measure which can serve as the accuracy gain metric in the mixed precision problem formulation described earlier. Intuitively, \(H(\hat{p}^{b})\) represents the number of bits needed to represent the parameters in that layer according to the empirical distribution, as opposed to the actual allocated bits \(b\). If \(H(\hat{p}^{b})\) is close to the allocated bit-width \(b\), the layer is a bad choice for further quantization; however, if \(H(\hat{p}^{b})\) is much lower than \(b\), the layer is a better candidate for further quantization to a lower precision. This metric can be calculated practically as follows.
Footnote 2: \(H\) is used everywhere to denote the discrete entropy function.
Let the vector \(\mathbf{w}\) represent \(n\) trained parameters for layer \(l\), and denote \(\mathbf{w}=(w_{1},w_{2},\dots,w_{n})\) where \(w_{i}\) are quantized with bit-width \(b\). If \(w_{i}\) takes values from a discrete set \(\mathcal{A}\), then \(|\mathcal{A}|\leq 2^{b}\). For \(c\in\mathcal{A}\), the empirical probability distribution is computed by
\[\hat{p}^{b}_{l}(c)=\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}_{c}(w_{i}), \tag{1}\]
where
\[\mathbbm{1}_{c}(w_{i}):=\begin{cases}1&\text{if }w_{i}=c\\ 0&\text{if otherwise.}\end{cases} \tag{2}\]
Using \(\hat{p}^{b}_{l}\) from equation 1, we compute its entropy as
\[H(\hat{p}^{b}_{l})=-\sum_{c}\hat{p}^{b}_{l}(c)\;\log\hat{p}^{b}_{l}(c). \tag{3}\]
The value of keeping layer \(l\) at higher precision \(b\) is quantified as the value of this entropy measure \(H(\hat{p}^{b}_{l})\)(Eq. (3)).
Algorithm 2 describes EAGL. EAGL requires only one pre-trained model - one where all layers are quantized at precision \(b\). The empirical distribution of parameters is estimated for each of the layers in the network, and then \(H(\hat{p}^{b}_{l})\) is calculated for each layer. These layer-wise measures are
Figure 2: **Histogram of normalized counts of quantized weights in each bin for 3 layers of a trained 4-bit ResNet-101 network.** EAGL predicts that layers with lower entropy are better candidates for further quantization. For example, between the three layers shown above, EAGL predicts that quantizing the first layer (entropy \(=1.3977\) bits) to 2 bits has lower impact on task accuracy than quantizing the third layer (entropy \(=3.7368\) bits).
provided to the optimizer in the context of our full evaluation framework. EAGL is (i) easy to approximate, and (ii) does not need access to the training dataset to compute, making it faster and more generally applicable to other problem domains than the other metrics compared against.
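The following NumPy sketch computes the EAGL metric of Eqs. (1)-(3) for a single layer. The quantized weights are simulated here with random bin indices, and measuring entropy in bits (base-2 logarithm) is an assumption consistent with the bit-width interpretation above.

```python
# Hedged sketch of the EAGL metric: entropy of the empirical distribution
# of a layer's quantized weights.
import numpy as np

def eagl_entropy(quantized_weights):
    """Entropy (in bits) of the empirical distribution over quantized bins."""
    _, counts = np.unique(quantized_weights, return_counts=True)
    p = counts / counts.sum()                  # Eq. (1): empirical p-hat
    return float(-(p * np.log2(p)).sum())      # Eq. (3), base-2 logarithm

rng = np.random.default_rng(0)
# Simulated 4-bit layers: one peaked distribution, one near-uniform.
w_peaked = rng.choice(16, size=10000, p=np.r_[0.85, [0.01] * 15])
w_uniform = rng.integers(0, 16, size=10000)
# A peaked layer (low entropy) is a better candidate for 2-bit quantization
# than a near-uniform layer, whose entropy approaches the allocated 4 bits.
print(eagl_entropy(w_peaked), eagl_entropy(w_uniform))
```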
This work makes use of the Minimal Description Length (MDL) principle (Rissanen, 1978), and its application, in particular, to learning the required precision for representing network parameters (Wallace, 1990; Hinton & van Camp, 1993; Hochreiter & Schmidhuber, 1997; Ullrich et al., 2017; Blier & Ollivier, 2018; Wiedemann et al., 2019). One approach is related to attempts to build MDL principle into the optimization function (Wallace, 1990; Hinton & van Camp, 1993), and derive a learning rule to minimize the number of bits needed to represent the weights (also see (Wiedemann et al., 2019) and references therein). Here, on the other hand, a slightly different approach is taken, and simple insights from information theoretic principles are used to propose a measure that uses the entropy of a layer's learned parameters as the metric to choose between layers to achieve the most profitable trade-off between bits allocated to represent layer parameters and network performance.
A probability distribution on model parameters \(W\) is assumed, and \(\hat{p}_{l}^{b}\) is used as an estimate of the true marginal distribution. This estimation is valid when the layer parameters are assumed to be independent and identically distributed (iid). Even if the iid assumption is not satisfied, it may be assumed that the parameters in each of the layers are identically distributed and that they satisfy a form of the strong-mixing (Pollicott & Yuri, 1998) condition, or equivalently that the parameters that are sufficiently far apart are independent. This practical assumption allows the use of a single instance of parameters learned using SGD and binning to estimate the distribution of these parameters and allows the efficient computation of the entropy terms. It is emphasized that no change to the loss function was made to minimize the measured entropy and that SGD with regularization is relied upon entirely to trade-off between entropy of the empirical distribution and network performance for a given precision.
### Implementation details
A layer's precision is used to determine the representation of its input activations and weights, which thus must match for a given layer. If an activation tensor provides input to multiple layers, all such layers must therefore have the same weight precision as well. Such layers are considered linked, and are treated as a single layer with a computational cost and accuracy gain measure that is the summation of its member values. All other data is represented with, and operations occur at, full precision. A unit of Bit Multiply-Accumulate operations (BMACs) is used for computational cost calculation, which is defined as BMAC = \(b\)*MAC, where \(b\) is the precision of the layer (weights and activations) in bits. Two tasks are evaluated here, image classification (Deng et al., 2009) using ResNet-50 and ResNet-101 (He et al., 2016), and semantic segmentation (Cordts et al., 2016) using PSPNet (Zhao et al., 2017). For both tasks, instead of choosing a fixed budget apriori, a sweep of computational budgets is used as evaluation points. This ensures that the evaluation framework is fair, as some techniques might demonstrate stronger performance for tighter budget constraints while others might be better for less constrained budgets; and hence evaluating only on a single budget can potentially unfairly benefit one technique over another. Eight equally spaced computational budgets are used for ResNet networks and 4 equally spaced computational budgets for PSPNet between the computational cost of a 4-bit and a 2-bit network.
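The cost bookkeeping can be sketched as follows; the per-layer MAC counts are hypothetical, and only the BMAC = \(b\)*MAC definition and the budget fractions used in the experiments below are taken from the text.

```python
# Hedged sketch of the BMAC cost model and budget sweep described above.

def network_bmacs(layer_macs, layer_bits):
    # BMAC = b * MAC, summed over the configurable layers.
    return sum(b * m for m, b in zip(layer_macs, layer_bits))

macs = [10e6, 25e6, 25e6, 10e6]                  # hypothetical per-layer MACs
cost_4bit = network_bmacs(macs, [4] * len(macs))  # all configurable layers 4-bit
cost_2bit = network_bmacs(macs, [2] * len(macs))  # all configurable layers 2-bit
# Budgets sampled as fractions of the 4-bit cost (the ResNet experiments
# use 95% down to 60%), spanning toward the 2-bit cost.
budgets = [f * cost_4bit
           for f in (0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60)]
print(cost_2bit, budgets)
```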
The first and last layers are fixed at 8-bit, following a common practice for quantized networks (Zhou et al., 2016; Zhu et al., 2016), and intermediate layers with less than 128 input features are fixed at 4-bit. Layers with fixed precision do not count towards the computational budget. All models are fine-tuned with weights and activations quantized using LSQ (Esser et al., 2020). The ResNet networks are adapted from the PyTorch Model Zoo, and trained for image classification (Deng et al., 2009) with weight decay of 1e-4, initial learning rate of 0.01 with cosine learning rate decay (Loshchilov & Hutter, 2016), batch size 128, and using knowledge distillation (Hinton et al., 2015). \(160\times 160\) images are used to speed up training, while \(224\times 224\) images are used for testing. The initial quantization step-size for all layers being reduced from 4- to 2-bit is set to \(4s\), where \(s\) is the learned step-size loaded from the 4-bit checkpoint. Models at 4-bit, used as the initial checkpoint to fine-tune mixed precision models, are trained by fine-tuning the full precision checkpoint for 90 epochs. Each mixed precision method was then fine-tuned from the 4-bit trained model for 90 epochs for comparison. PSPNet is adapted from the PyTorch Segmentation Toolbox (Huang et al., 2018) and trained on semantic segmentation (Cordts et al., 2016) with a learning rate of 0.015, weight decay of 5e-5, and batch size 4. Models at 2-bit and 4-bit are trained by fine-tuning a full precision checkpoint for 80,000 and 10,000 iterations respectively, and mixed precision methods are fine-tuned from the 4-bit trained model for 40,000 iterations.
## 4 Results
The performance of ALPS and EAGL is compared against leading mixed precision techniques from the literature in Table 1 using the most commonly used benchmark, ResNet-50 for image classification (Deng et al., 2009). Both techniques have the highest accuracy (no accuracy drop) for the most efficient networks (lowest BOPS). As seen in the table, different techniques are quantized starting from different full precision accuracy checkpoints, trained using different training recipes, and report on different metrics. To mitigate these issues, HAWQ-v3, a leading mixed precision layer selection technique, was re-implemented and results are presented below from an apples-to-apples comparison with HAWQ-v3. Other techniques could not be re-implemented due to the significant effort required to reproduce results without released code. Results are presented using three different networks on two large scale vision datasets below across many computational budgets.
### Evaluation using ResNet for image classification
The performance of ALPS and EAGL for layer selection is evaluated on image classification (Deng et al., 2009) using ResNet-50 and ResNet-101, using the methodology described in Section 3.1. The methods are compared with a re-implementation of HAWQ-v3 (Yao et al., 2021)3 and three baselines - a uniform cost baseline, where every layer is given the same value for staying at 4 bits, a method which ranks layers from first to last, and drops the precision of the first \(n\) layers greedily until the resulting network just meets the budget, and a third method which ranks layers from last to first. Instead of choosing a fixed budget apriori, eight different target computational budgets are sampled for evaluation, roughly at 95%, 90%, 85%, 80%, 75%, 70%, 65% and 60% of the computational cost of a 4-bit network. For each budget, networks created using each technique compared are fine-tuned for 90 epochs using 5 seeds.
Footnote 3: See Appendix D for details of the re-implementation of HAWQ-v3 for these experiments.
Results are shown in Fig. 3. ALPS and EAGL outperform the baselines and HAWQ-v3 at all budgets along the entire throughput-accuracy frontier for ResNet-50, demonstrating very strong performance (\(p=0.0079\) for all compute budgets except for EAGL at 60% where \(p=0.0238\), Wilcoxon rank-sum, \(N=5\)). For ResNet-101, EAGL matches or outperforms HAWQ-v3 at all 8 budgets. ALPS does so at 7 out of 8 budgets. ALPS provides solutions significantly better than HAWQ-v3 and EAGL for 4 of the budgets under consideration. For ResNet-50, layer-wise precision selection choices between the five techniques at the 70% budget are compared in the Appendix E. Even with a throughput budget significantly lower than a 4-bit network, with over half of the computation happening at 2-bit instead of 4-bit, EAGL and ALPS match full precision accuracy. For ALPS, one epoch of finetuning for each layer was chosen to minimize computational cost; rerunning the ResNet-50 experiment with two epochs per layer did not improve the result.
The actual computational cost to get the layer-wise estimates for ALPS, EAGL and HAWQ-v3 are listed in Table 2. EAGL's superior performance given the orders of magnitude saving in terms of computational cost makes it the most suitable candidate for layer selection in practice for getting a fast solution across a range of architectures and budgets. Python code used for computation of the EAGL metric is provided in the Appendix F.
### Evaluation using PSPNet on semantic segmentation
Next, ALPS and EAGL are evaluated on the semantic segmentation task (Cordts et al., 2016) using PSPNet. The methods are compared against HAWQ-v3 and a first to last baseline. Four different target throughput budgets are evaluated, roughly at 95%, 85%, 75%, and 65% of the computational cost of a 4-bit PSPNet network. For each budget, each technique is fine-tuned for 40,000 iterations using 5 different seeds. Results are shown in Fig. 4.
The task performance, i.e., mean Intersection over Union (IoU) for networks produced by ALPS and EAGL is not significantly different from HAWQ-v3 (\(p>0.1\) for all compute budgets, Wilcoxon rank-sum test, \(N=5\)), while all methods outperformed the first-to-last baseline (\(p=0.0079\) for all budgets, Wilcoxon rank-sum, \(N=5\)). The computational cost to get the layer-wise estimates for ALPS, EAGL and HAWQ-v3 are listed in Table 2. EAGL and ALPS are able to find good solutions to the mixed precision layer selection problem for this network, while being orders of magnitude faster to compute and easier to implement in practice. The results provide further evidence that both methods are scalable, task agnostic approaches that generalize to other networks and datasets as well.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & ResNet-50 & PSPNet \\ \hline
**EAGL (Ours)** & **3.15 CPU seconds** & **<1 CPU minute** \\ \hline ALPS (Ours) & 166 GPU hours & 67 GPU hours \\ \hline HAWQ-v3 & 2 GPU hours & 1032 GPU hours \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Metric computational cost comparison for ResNet-50 and PSPNet.** The cost compared here includes only the computational resources required for the layer-wise metric estimation for each method, and not the cost for fine-tuning the resulting mixed precision network, nor the cost for training the initial 4-bit checkpoint, as these are the same for all techniques.
## 5 Conclusion and Discussion
Two new metrics for optimizing precision settings in a mixed precision network are introduced. Both outperform leading techniques from the literature, require fewer computational resources to compute, and generalize across different tasks and architectures. Further evidence for the quality of the proposed metrics is provided in Appendix C.
The ALPS metric introduced here is intuitive, easy to implement, and yet outperforms leading techniques from the literature that ignore fine-tuning and rely on complex proxies that scale poorly in practice, both in the task performance of the proposed solution and in the time needed to compute the metric.
The entropy-based EAGL metric is elegant, and applicable even when only the model parameters are known but the training data is inaccessible, making it task agnostic and generalizable to other domains like unsupervised learning. Since it only requires access to a trained checkpoint, EAGL is remarkably fast, and is shown to select more accurate networks than several leading state-of-the-art methods on several datasets and networks in a fair comparison.
Given its strong performance, extremely low computational cost and ease of implementation, EAGL provides effective, practically usable solutions for network compression and faster inference, making it the first choice for obtaining good solutions extremely fast. The performance can sometimes be further improved by using ALPS. Although this paper considers only networks consisting of 2- and 4-bit layers, both methods can be used with more than two precision choices by changing the optimizer (e.g. (Chen et al., 2021)). Given the strong results across the entire throughput-accuracy frontier, and as the number of hardware platforms supporting mixed precision networks grows, these simple, efficient, and effective approaches should be of great utility to practitioners, directly enabling the deployment of lower-power, higher-throughput solutions.
Figure 4: **ALPS and EAGL meet or exceed the mean IoU of leading techniques on PSPNet across computational budgets.** Mean +/- standard deviation across 5 seeds at each budget.
Figure 3: **ALPS and EAGL perform better than leading mixed precision techniques on ResNet-50 for all computational budgets, and meet or exceed their accuracy on ResNet-101 for 7 out of 8 computational budgets on (Deng et al., 2009).** Mean +/- standard deviation across 5 seeds for each technique at each budget. A network with all configurable layers at 4 bits has a computational budget of 100%, and a network with all configurable layers at 2 bits has a computational budget of 50% in this plot. The first and last layers are 8-bit, and intermediate layers with fewer than 128 input features are fixed at 4-bit.
## Acknowledgement
This material is based upon work supported by the United States Air Force under Contract No. FA8750-19-C-1518.
|
2308.02050 | FuNToM: Functional Modeling of RF Circuits Using a Neural Network
Assisted Two-Port Analysis Method | Automatic synthesis of analog and Radio Frequency (RF) circuits is a trending
approach that requires an efficient circuit modeling method. This is due to the
expensive cost of running a large number of simulations at each synthesis
cycle. Artificial intelligence methods are promising approaches for circuit
modeling due to their speed and relative accuracy. However, existing approaches
require a large amount of training data, which is still collected using
simulation runs. In addition, such approaches collect a whole separate dataset
for each circuit topology even if a single element is added or removed. These
matters are only exacerbated by the need for post-layout modeling simulations,
which take even longer. To alleviate these drawbacks, in this paper, we present
FuNToM, a functional modeling method for RF circuits. FuNToM leverages the
two-port analysis method for modeling multiple topologies using a single main
dataset and multiple small datasets. It also leverages neural networks which
have shown promising results in predicting the behavior of circuits. Our
results show that for multiple RF circuits, in comparison to the
state-of-the-art works, while maintaining the same accuracy, the required
training data is reduced by 2.8x - 10.9x. In addition, FuNToM needs 176.8x -
188.6x less time for collecting the training set in post-layout modeling. | Morteza Fayazi, Morteza Tavakoli Taba, Amirata Tabatabavakili, Ehsan Afshari, Ronald Dreslinski | 2023-08-03T21:08:16Z | http://arxiv.org/abs/2308.02050v1 | # FuNToM: Functional Modeling of RF Circuits Using a Neural Network Assisted Two-Port Analysis Method
###### Abstract
Automatic synthesis of analog and Radio Frequency (RF) circuits is a trending approach that requires an efficient circuit modeling method. This is due to the expensive cost of running a large number of simulations at each synthesis cycle. Artificial intelligence methods are promising approaches for circuit modeling due to their speed and relative accuracy. However, existing approaches require a large amount of training data, which is still collected using simulation runs. In addition, such approaches collect a whole separate dataset for each circuit topology even if a single element is added or removed. These matters are only exacerbated by the need for post-layout modeling simulations, which take even longer. To alleviate these drawbacks, in this paper, we present FuNToM, a functional modeling method for RF circuits. FuNToM leverages the two-port analysis method for modeling multiple topologies using a single main dataset and multiple small datasets. It also leverages neural networks which have shown promising results in predicting the behavior of circuits. Our results show that for multiple RF circuits, in comparison to the state-of-the-art works, while maintaining the same accuracy, the required training data is reduced by 2.8x - 10.9x. In addition, FuNToM needs 176.8x - 188.6x less time for collecting the training set in post-layout modeling.
Functional modeling, RF circuits, two-port analysis, neural network, post-layout.
## I Introduction
The growing demand for analog and Radio Frequency (RF) circuits, due to their broad applications, has led to a crucial need for automated circuit synthesis [1]. There are two classical automated synthesis approaches for analog and RF circuits, _i.e._ simulation- and model-based, which both need a circuit modeling tool. In the simulation-based approaches, the circuit modeling tool is invoked multiple times to evaluate the circuit and generate new parameter candidates to meet the desired specifications [2]. In addition, the modeling tool is used to generate the training set of the model-based approaches. Since both methods invoke the modeling tool frequently (especially the simulation-based), using the common modeling tool, _i.e._ SPICE simulation, is drastically time-consuming. Furthermore, technology scaling has caused much longer run times in both schematic and post-layout simulations [3].
Artificial Intelligence (AI) methods are an alternative to SPICE because of their speed and relative accuracy [4]. However, current approaches require a large amount of training data, which are still collected using simulation runs. So, they face the same time-consuming problem during training. Bayesian Model Fusion (BMF) is one of the common analog circuits functional modeling methods [5, 6, 7]. Neural Networks (NNs) also have shown promising results in all circuit design levels such as single-board computer, sizing, layout, System-on-Chip (SoC) design, and performance modeling [8, 9, 11, 12, 13].
Despite all these advances in functional modeling of circuits, current works [2, 5, 6, 7, 13, 14, 15, 16, 17] train directly from circuit design parameters (capacitance value of capacitors, size of transistors, etc.) to Performance of Interests (PoI). Therefore, such conventional modeling approaches require a unique training set for each topology, because design parameters change if the topology changes. This means their whole training process needs to be redone even if a single element is added or removed from the circuit. Moreover, they assign an individual feature to each design parameter, which makes their models high-dimensional for modern complex analog and RF circuits. The matter is only exacerbated by the post-layout modeling, where collecting data takes longer.
To combat all the aforementioned challenges in circuit functional modeling, we propose an NN-assisted two-port analysis-based method, FuNToM. The goals of FuNToM are threefold: (a) decreasing the required number of new training sets after modifying a circuit topology; (b) further reducing the number of training sets for each topology; (c) achieving a fast, accurate RF functional modeling method.
In order to decrease the required number of new training sets after modifying a circuit topology, we propose a method to properly model multiple topologies while using a single main dataset and multiple small datasets. For this purpose, we divide the circuit into multiple Electrical-networks (E-networks) and analyze them modularly via NNs. An E-network is a collection of electrical components, _e.g._ capacitors, resistors, etc., that are interconnected [18]. To modularly analyze E-networks, we leverage the Scattering parameters (S-parameters) concept. The S-parameter is a well-known powerful concept in RF circuits that is defined using two-port analysis and describes the circuit behavior of E-networks. In other words, in two-port analysis, the S-parameter is a \(2\times 2\) matrix that summarizes the circuit behavior of the associated E-network, such as input/output return losses, and insertion loss/gain [19]. Therefore, the S-parameter can be replaced with the associated E-network in the circuit analysis.
The main advantage of modularly modeling the circuit is its generality. In our case, by passing the S-parameters to the NN, the NN learns the relationships between the general format of E-networks' S-parameters and the PoI. Since the S-parameter works as a wrapper around the E-networks, the NN accurately determines the functionality of the circuit with any E-networks' topology. In other words, training via S-parameters is a one-time process which works properly for many circuit topologies because of its modularity.
There are two types of NN models in our approach: 1) one main circuit NN for determining the PoI using the S-parameters of E-networks; 2) multiple sub-NNs, each of which determines the S-parameters of the associated E-network using the circuit design parameters within that E-network. Creating new datasets for the sub-NNs is the only modification required in the dataset of the proposed approach when the circuit topology changes, as the training set of the main NN works for all topologies. However, since the circuit is divided into multiple E-networks and each E-network has a separate dataset, such datasets are small because each deals with only a few design parameters. This division of the circuit is another advantage of the proposed method, one that is especially valuable when going from schematic to post-layout modeling. Instead of collecting a new large dataset by running time-consuming post-layout simulations for the whole circuit with many design parameters, it is only necessary to gather small datasets for the E-networks, each with only a few design parameters. Fig. 1(a) demonstrates the conventional modeling method using NNs, while Fig. 1(b) shows the big picture of the proposed approach. As shown, our approach requires much less training data than the conventional method to achieve the same accuracy in modeling four different topologies. Note that circuit elements (design parameters) that are not part of E-networks, _e.g._ the transistor in Fig. 1(b), are also considered in the proposed model, as explained in more detail in Section III-A. FuNToM supports all RF circuits, including active and passive components, several of which are evaluated in Section IV.
We use the Circuit-Connectivity-Inspired-NN (CCI-NN) structure [2] in our main model to further reduce the number of training sets for each topology. As stated by Hassanpourghadi _et al._ [2], the CCI-NN structure requires less training data than Fully-Connected NNs (FC-NNs). Moreover, by condensing the design parameters using S-parameters, we may obtain a smaller design space than the conventional approach. If the number of design parameters in each E-network exceeds the number of entries in the S-parameter matrix, the NN model of the proposed approach will have fewer features than the conventional method and hence will require fewer training samples to achieve the same accuracy. This mainly has an impact on large circuits.
To validate our proposed method, FuNToM is tested by modeling the functionality of multiple phase shifters at the schematic level as well as multiple two-stage Low-Noise-Amplifiers (LNAs) at both schematic and post-layout levels. Our results show that using FuNToM, the required training set is reduced by a factor of 2.8x - 10.9x while maintaining the same accuracy in comparison to the state-of-the-art works. Also, the time for collecting the training set in the post-layout modeling using FuNToM is 176.8x - 188.6x faster. The results illustrate that FuNToM achieves an average R2-score of 0.95 when tested on 3,200 samples.
The main contributions of this paper can be summarized as follows:
* Proposing FuNToM, an efficient RF functional modeling method which achieves an average R2-score of 0.95 when it is tested on 3,200 samples with different topologies and specifications.
* Reducing the required training data by 2.8x - 10.9x by using two-port analysis and dividing the circuit into multiple E-networks.
* Reducing the time for collecting the training set in post-layout modeling by 176.8x - 188.6x compared to state-of-the-art works.
## II Background & Related Work
### _Analog and RF Circuits Functional Modeling_
The goal in analog and RF circuits functional modeling is to approximate a PoI vector (\(y\)) of a given circuit by a function (\(\hat{f}\)) of the design parameters vector (\(\mathbf{x}\)), while the modeling error (\(e\)) is minimized. For this purpose, the SPICE is invoked as the ground truth function (\(f\)). In other words,
\[y=f(\mathbf{x},t) \tag{1}\] \[\left\{\begin{array}{l}y=\hat{f}(\mathbf{x},t)+e,\\ \text{subject to: minimize }e.\end{array}\right.\]
Note that \(t\) is used to account for time/frequency-variant PoIs. Power gain is an example of a PoI for an LNA, while the resistance values of resistors are examples of entries in the design parameters vector.
Fig. 1: An overview, concept, and the impact on the training set of (a) the conventional analog and RF circuits functional modeling method vs (b) the proposed modular S-parameters-based approach. The conventional method needs a whole separate dataset and model for each circuit topology while the proposed approach works properly for many circuit topologies with a single general format E-networks’ S-parameters dataset.
The advantage of functional modeling methods over SPICE is that once they are trained, they can determine the functionality of circuits, even when the design parameters change, in near-zero time. This is because the circuit behavior is stored as weights in the functional models.
For minimizing the modeling error, different models, such as posynomial regression, genetic programming, BMF, and NNs, have been studied [5, 13, 14, 15]. The fusion in BMF refers to passing information (prior knowledge) from the schematic model to the post-layout model to reduce the expensive post-layout performance modeling cost [1]. This passed information is then combined with a few post-layout training data points to solve for the model coefficients via Bayesian inference [20].
The other goal of functional modeling methods is to minimize the number of times SPICE is invoked, since it is time-consuming. Several studies have attempted to reduce the number of training samples while maintaining the same error, such as sparse regression, Co-Learning BMF (CL-BMF), hierarchical CL-BMF, and Circuit-Connectivity-Inspired NN (CCI-NN) [2, 16, 17]. In hierarchical CL-BMF, the circuit is partitioned into multiple stages and CL-BMF is applied to each stage [17]. In this way, lower-dimensional models are considered and, as a result, the training set size is decreased. Among all the aforementioned works, NNs have shown the lowest modeling error; however, they need a large training set [2]. Similar to [17], Hassanpourghadi _et al._ [2] break down a circuit into multiple stages, but they apply an NN instead of CL-BMF for modeling. Zhao _et al._ [21] first model the transistor using NNs and, leveraging such a model, perform DC and AC modeling for the circuit. Despite its accuracy and efficiency, that work does not address post-layout modeling.
### _Two-port Analysis & S-parameters_
The two-port analysis is used to determine the response of a two-port E-network against the applied signals to its terminals. The circuit behavior of E-networks is determined using two-port analysis [19]. Using two-port analysis, an S-parameter matrix is derived which is used for characterizing circuits. Fig. 2 shows the procedure of determining S-parameters for a two-port E-network: each port is excited by incident waves, \(a_{1}\) and \(a_{2}\), and the reflected voltage waves, \(b_{1}\) and \(b_{2}\), are measured. The reflected voltage waves are a function of the incident waves:
\[\begin{split}& b_{1}=S_{11}a_{1}+S_{12}a_{2},\\ & b_{2}=S_{21}a_{1}+S_{22}a_{2},\end{split} \tag{2}\]
where \(S=\begin{bmatrix}S_{11}&S_{12}\\ S_{21}&S_{22}\end{bmatrix}\) is called the S-parameter matrix.
S-parameters are calculated by setting \(a_{1}\) and \(a_{2}\) in Equation (2) to zero one by one and getting the ratios of the reflected waves over the incident waves as follows:
\[\begin{split}& S_{11}=\frac{b_{1}}{a_{1}}|_{a_{2}=0},S_{12}= \frac{b_{1}}{a_{2}}|_{a_{1}=0},\\ & S_{21}=\frac{b_{2}}{a_{1}}|_{a_{2}=0},S_{22}=\frac{b_{2}}{a_{2}}| _{a_{1}=0}.\end{split} \tag{3}\]
It is noteworthy that setting \(a_{i}=0\) means that the \(i^{th}\) port is terminated with a \(Z_{0}\) impedance so that the reflection from that port (\(a_{i}\)) becomes zero. \(Z_{0}\) equals 50 \(\Omega\) in most cases. Fig. 2(b) demonstrates an example of S-parameters extraction of a simple L-C E-network. Based on the reciprocity theorem, \(S_{12}=S_{21}\) for almost all passive E-networks such as the network mentioned in Fig. 2(b). Moreover, for the symmetric E-networks (see Fig. 6(b)), \(S_{11}=S_{22}\).
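As an illustration of Equations (2)-(3), the sketch below computes the S-parameters of a cascaded two-port numerically using the textbook ABCD-to-S conversion (cascading two-ports multiplies their ABCD matrices). The component values and frequency are hypothetical, and this conversion is a standard microwave-engineering result rather than part of FuNToM itself.

```python
import numpy as np

Z0 = 50.0  # reference impedance in ohms

def abcd_to_s(abcd, z0=Z0):
    """Standard ABCD -> S conversion for a two-port with reference impedance z0."""
    (A, B), (C, D) = abcd
    den = A + B / z0 + C * z0 + D
    s11 = (A + B / z0 - C * z0 - D) / den
    s12 = 2 * (A * D - B * C) / den
    s21 = 2 / den
    s22 = (-A + B / z0 - C * z0 + D) / den
    return np.array([[s11, s12], [s21, s22]])

# Series inductor followed by a shunt capacitor (hypothetical values), at 2 GHz.
f = 2e9
s = 1j * 2 * np.pi * f          # s = j*2*pi*f
L, C = 4e-9, 1e-12
series_L = np.array([[1, s * L], [0, 1]])   # ABCD of a series impedance sL
shunt_C = np.array([[1, 0], [s * C, 1]])    # ABCD of a shunt admittance sC
print(abcd_to_s(series_L @ shunt_C))
```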
Fig. 3: (a) General circuit division into E-networks. Each rectangle is either a single element, parallel/series elements, or a combination of parallel and series elements. (b) Main NN in the proposed approach for determining the PoI from S-parameters of E-networks (Equation (4a)) with CCI-NN structure. (c) Main NN in the proposed approach for determining the PoI from S-parameters of E-networks (Equation (4a)) with FC-NN structure. (d) Sub-NNs in the proposed approach for determining S-parameters from the design parameters of E-networks (Equation (4b)). (e) The original CCI-NN model structure [2] with the circuit design parameters as inputs. (f) The conventional FC-NN model in analog and RF functional modeling with the circuit design parameters as inputs.
\(s_{i}\): S-parameters of the \(i^{th}\) E-network.
\(\mathbf{x_{i}}\): Design parameters vector of the \(i^{th}\) E-network.
Fig. 2: (a) Excited and incident waves in a two-port E-network, which are used for calculating S-parameters. (b) An example of a simple L-C network with the calculated S-parameters. \(s=j2\pi f\), where \(f\) is the frequency. \(Z_{0}\) is the reference impedance (the impedance of the excitation ports), which equals 50 \(\Omega\) in most cases.
## III Proposed Approach
### _Neural Network Models_
Breaking down the top-level circuit into multiple stages is used for simplifying \(f\) in Equation (1) and decreasing the number of training sets [2]. In our approach, we have gone further and we break down the circuit into multiple E-networks. There are two types of models in our approach: 1) One main model for determining the PoI using S-parameters of E-networks and design parameters that are not included in any E-networks; 2) multiple sub-models that each determines the S-parameters of the associated E-network using the circuit design parameters within such E-network. In other words:
\[y =\hat{g}([s_{1},\ldots,s_{N},\mathbf{x_{R}}],t)+e_{1}, \tag{4a}\] \[s_{i} =\hat{h_{i}}(\mathbf{x_{i}},t)+e_{2,i}\quad i=1,\ldots,N, \tag{4b}\]
where \(s_{i}\) and \(\mathbf{x_{i}}\) are the S-parameters and design parameters vector of the \(i^{th}\) E-network (\(s_{i}=[S_{11,i},\ldots,S_{22,i}]\)), respectively assuming there are \(N\) E-networks. Indeed, \(\mathbf{x_{i}}\) represents the value of circuit elements _e.g._ capacitors, inductors, etc., inside the \(i^{th}\) E-network. \(\mathbf{x_{R}}\) is a vector of the circuit's design parameters that are not included in any E-networks _e.g._ the transistor in Fig. 1(b).
Fig. 3(a) shows a general circuit division into multiple E-networks while Fig. 3(b) and (c) illustrate two structures for our proposed main NN model. Both of them depict the NN representation of Equation (4a). Fig. 3(b) and Fig. 3(c) have CCI-NN and FC-NN structures, respectively. As it is shown, the CCI-NN structure [2] breaks a giant NN into smaller chunks while considering the sequential paths and concatenating all at a final 1-layer NN. It should be mentioned that the CCI-NN is trained with the same dataset as the FC-NN _i.e._ [\((s_{1},\ldots,s_{N},\mathbf{x_{R}})\); \(y\)]. We analyze both FC-NN and CCI-NN structures for our main NN and select the one which gives the best results. As stated by [2], CCI-NN requires less training data. Fig. 3(d) depicts our sub-NN structures which are representations of Equation (4b).
Our approach can be summarized as follows. The E-networks are identified and a sub-NN is assigned to each. Then, the design parameters of each E-network are fed as inputs to the corresponding sub-NN to determine its S-parameters. In parallel, general S-parameters of E-networks are fed as inputs to the main NN to determine the PoI. By combining these models (Equations (4a) and (4b)), _i.e._, \(\hat{g}([\hat{h_{1}}(\mathbf{x_{1}},t),\ldots,\hat{h_{N}}(\mathbf{x_{N}},t), \mathbf{x_{R}}],t)\), the PoI is determined from the circuit design parameters. It should be noted that \(\hat{g}\) in Equation (4a) is a function of \(s_{1},\ldots,s_{N}\) in general, not specifically of the \(s_{i}\) in Equation (4b). In other words, Equation (4a) finds the relationship between the PoI and general S-parameters, while in parallel Equation (4b) maps the design parameters to S-parameters in the desired circuit.
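A minimal sketch of this composition is shown below, using the FC-NN variant of the main NN (Fig. 3(c)) for brevity. The layer widths, input sizes, and the single real value per S-parameter entry are illustrative assumptions; in FuNToM the main NN and the sub-NNs are trained on their own datasets before being concatenated as in Fig. 4.

```python
import tensorflow as tf

def make_sub_nn(n_params, name):
    """Sub-NN (Eq. 4b): design parameters of one E-network -> its S-parameters.
    Real/imaginary parts would double the output width; kept real here for brevity."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_params,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(4),  # [S11, S12, S21, S22]
    ], name=name)

# Two E-networks with 3 and 5 design parameters, plus 2 remaining parameters x_R
# (all sizes hypothetical).
sub1, sub2 = make_sub_nn(3, "sub_nn_1"), make_sub_nn(5, "sub_nn_2")
x1, x2, x_r = (tf.keras.Input(shape=(3,)), tf.keras.Input(shape=(5,)),
               tf.keras.Input(shape=(2,)))

# Main NN (Eq. 4a): [s_1, ..., s_N, x_R] -> PoI.
features = tf.keras.layers.Concatenate()([sub1(x1), sub2(x2), x_r])
h = tf.keras.layers.Dense(32, activation="relu")(features)
h = tf.keras.layers.Dense(32, activation="relu")(h)
poi = tf.keras.layers.Dense(1)(h)
model = tf.keras.Model(inputs=[x1, x2, x_r], outputs=poi)
model.summary()
```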
Equation (4a) is independent of the design parameters of E-networks. Therefore, it does not change if E-networks change, proving the generality and re-usability of our model for different E-networks. Furthermore, since each of the sub-NNs (Equation (4b)) takes a subset of design parameters as inputs, they are simpler than the conventional NN model (Fig. 3(e)). This significantly eases the collection of datasets for sub-NNs if E-networks change or during the post-layout modeling. Moreover, even in analyzing one specific circuit (without considering different E-networks), \(\hat{g}\) may be simpler than \(\hat{f}\) as S-parameters condense design parameters. This attribute is more pronounced in large RF circuits with many design parameters in each E-network. In addition, as mentioned in
Fig. 4: Final FuNToM model obtained by concatenating the sub-NNs and the main NN. \(s_{i}\): S-parameters of the \(i^{th}\) E-network.
Fig. 5: Different ways for partitioning 8 topologies. (a) First scenario: each is divided into two E-networks. (b) Second scenario: there is only one E-network.
Section II-B, \(S_{12}=S_{21}\) for almost all passive E-networks and \(S_{11}=S_{22}\) for the symmetric ones. These properties yield an even simpler \(\hat{g}\). So, in such cases, the required training set is further reduced compared to the conventional method.
To summarize, in our proposed approach by concatenating the sub-NNs and the main NN, we determine the PoI by getting the circuit design parameters as inputs as shown in Fig. 4. As illustrated, sub-NNs determine the S-parameters of the associated E-network by getting the circuit design parameters within such E-network as inputs while the main NN determines the PoI by getting S-parameters of E-networks as inputs. Two structures (CCI-NN in Fig. 3(b) and FC-NN in Fig. 3(c)) are analyzed for the main NN. As explained and our evaluations show, the CCI-NN structure gives the best results. The original CCI-NN structure proposed by [2] is shown in Fig. 3(e) where the circuit design parameters are given as inputs and the PoI is the output. The conventional FC-NN model in the analog and RF functional modeling is also illustrated in Fig. 3(f).
### _Circuit Partitioning into E-networks_
There are multiple ways of partitioning a circuit into E-networks, each with pros and cons. As the number of E-networks decreases, the number of circuit elements in each increases, since the total number of circuit elements is constant. The number of sub-NNs therefore decreases while each becomes more complicated. Usually, a large, complicated NN requires more training data than multiple simple NNs. Moreover, with fewer E-networks there are fewer chunks in the CCI-NN structure of the main NN, and since the number of inputs per chunk (\(s_{i}=[S_{11,i},\ldots,S_{22,i}]\)) is almost fixed, the width of the main NN intuitively shrinks. However, in order not to lose accuracy, it may need to get deeper.
Similar to all machine learning techniques, to find the best results, different scenarios need to be tested. As an example to show different ways for partitioning a circuit, we consider two scenarios. To this end, we consider 8 different topologies. In the first scenario, each of these topologies is divided into two E-networks as shown in Fig. 5(a). In the second scenario, the whole circuit is considered as one E-network as illustrated in Fig. 5(b). Table I compares the required number of training data between these two scenarios for the main NN, sub-NNs, and total to have the same accuracy. As it is summarized, the first scenario (Fig. 5(a)) requires more data for the main NN while it needs less training data for sub-NNs and in total. Note that using such analysis and trying different scenarios, currently the circuit partitioning is done manually, but it can be replaced with an automated approach going forward.
## IV Evaluation
In this section, we model the functionality of two different RF circuit types using FuNToM and compare the results with state-of-the-art modeling methods. We model multiple phase shifters at the schematic level, and multiple two-stage LNAs at both schematic and post-layout levels. The design parameters include all transistor widths as well as capacitor and inductor values. The inductors and capacitors use the PDK models of a 55nm BiCMOS library, _i.e._, the quality factor and self-resonance of the inductors and capacitors are taken into account. All the SPICE simulations are run on a server with an NVIDIA TITAN V GPU, and the frequency range for the simulations is 1 Hz-15 GHz. All the NN models are built on the TensorFlow platform with the Adam optimizer, a learning rate of 0.001, and mean absolute error as the loss function. Moreover, all hidden layers use the ReLU activation function. In order to avoid overfitting, early stopping [22] with a patience parameter of 125 is implemented. For this purpose, 10\(\%\) of the data are used for validation during the training phase.
Fig. 6: Phase shifters tested on FuNToM. (a) General structure. (b) Network\({}_{1}\) and network\({}_{2}\) topologies.
Fig. 7: Main model results for phase shifters. (a) R2-score comparison with different hyperparameters used for training. Three hidden layers with 32 neurons at each layer is the simplest model that gives a high R2-score of 0.99. (b) Loss vs. epoch diagram while three hidden layers with 32 neurons at each layer model is used for training.
In order to validate the results properly, a random separate test set with a size of 10\(\%\) of the training set is used. By default, all FuNToM results are based on the CCI-NN structure for the main NN unless otherwise mentioned.
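The sketch below mirrors this training configuration in Keras on a stand-in model and random data; the epoch cap and the `restore_best_weights` flag are assumptions not stated above.

```python
import numpy as np
import tensorflow as tf

# Stand-in model and data; any of the main or sub-NNs would be trained the same way.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
x = np.random.rand(1000, 8).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mean_absolute_error")
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=125,
                                              restore_best_weights=True)
model.fit(x, y, validation_split=0.1,  # 10% of the data held out for validation
          epochs=10_000,               # early stopping terminates training first
          callbacks=[early_stop], verbose=0)
```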
### _Phase Shifter_
Fig. 6(a) depicts the general structure of the phase shifters tested on FuNToM. All combinations of E-networks shown in Fig. 6(b) are used for both networks\({}_{1,2}\). Therefore, in total, FuNToM is trained and tested on 6\(\times\)6=36 different phase shifter topologies. There are on average 8 design parameters at each topology. The tested specification statistics are listed in Table II. In total, 6,600 and 400 simulations are performed for the training set of the main NN and sub-NNs, respectively, per frequency. Since most of the specifications change with frequency, different datasets are needed to properly model the specifications over the frequency.
The number of hidden layers and neurons per layer for the main model varies between 2-4 and 16-64, respectively. Fig. 7(a) demonstrates a comparison between the R2-score achieved by different hyperparameters used for training the main model. Fig. 7(b) also shows the loss vs. epoch of training and validation data of the main model. As it is shown, the training stops before overfitting happens. All sub-NNs have two hidden layers with 32 nodes at each layer.
Table III summarizes the average R2-score of sub-NNs and main NNs tested on different specifications of phase shifters. The average R2-score of all models is around 0.97. Moreover, Fig. 8(a) depicts an example of a phase shifter tested on FuNToM. Fig. 8(b) shows the predicted insertion phase as an example modeling specification by FuNToM over frequency compared to the simulated values.
The required number of training data per frequency is compared with the state-of-the-art works [2, 13], with all methods achieving the same average R2-score of 0.95. We also consider both CCI-NN (Fig. 3(b)) and FC-NN (Fig. 3(c)) structures for the main NN of FuNToM. 1600 different phase shifters with the general structure of Fig. 6, spanning the 36 topologies, are given to all approaches for testing. Moreover, a wide range of design parameters is given to cover Table II. As summarized in Table IV, [13] and [2] need 8.2x and 2.8x more training data than FuNToM, respectively. Moreover, FuNToM with the FC-NN structure requires 2.4x more training data than FuNToM with the CCI-NN structure.
In order to further demonstrate the functionality of FuNToM, we apply it as the circuit simulator in a circuit sizing method. For this purpose, we employ the well-known multi-objective genetic algorithm NSGA-II [23] to optimize the phase shifter shown in Fig. 9(a).
Fig. 8: An example of a phase shifter tested on FuNToM. (a) Topology. (b) The simulated (ground truth) and the predicted insertion phase by FuNToM over frequency. Left: when both switches are open; right: when both switches are closed.
Fig. 10: Two-stage LNAs tested on FuNToM. (a) General structure; 1\({}^{st}\) stage is a common-source LNA while the 2\({}^{nd}\) stage is a cascode LNA. (b) Input and output matching networks for each stage.
Fig. 9: Using FuNToM as the simulator in circuit sizing by NSGA-II algorithm to meet the desired specification in Table V. (a) The given phase shifter topology. (b) Insertion phase results over frequency for different switch conditions.
An example of the desired specifications at 2 GHz is listed in Table V. In the NSGA-II algorithm, the population size and number of generations are both set to 30. The results when FuNToM and SPICE are used as the simulator in the sizing are summarized in Table V. It should be mentioned that the FuNToM results are obtained after the circuit generated using FuNToM is verified by SPICE. FuNToM is 20x faster than SPICE, while both meet the desired specifications. The insertion phase over frequency of the sized circuit using FuNToM is shown in Fig. 9(b) for different switch conditions. Network\({}_{1}\) and network\({}_{2}\) are designed to have insertion phases of -45\({}^{\circ}\) and -90\({}^{\circ}\), respectively, at 2 GHz based on Table V. As expected, the phase shifter insertion phase is \(0^{\circ}\) when both switches are ON (meaning input and output are short-circuited). On the other hand, when both switches are OFF, the insertion phase is -45\({}^{\circ}\) + (-90\({}^{\circ}\)) = -135\({}^{\circ}\).
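A minimal sketch of this sizing loop is given below, assuming the `pymoo` implementation of NSGA-II and substituting a stand-in surrogate for a trained FuNToM model; the design-variable bounds, objectives, and targets are hypothetical.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class PhaseShifterSizing(ElementwiseProblem):
    """Size 4 hypothetical design parameters so that the surrogate-predicted
    insertion phase and loss approach their targets."""

    def __init__(self, surrogate, targets):
        super().__init__(n_var=4, n_obj=2, xl=0.1, xu=10.0)
        self.surrogate = surrogate   # trained model: params -> (phase, loss)
        self.targets = targets

    def _evaluate(self, x, out, *args, **kwargs):
        phase, loss = self.surrogate(x)            # near-zero-cost inference
        out["F"] = [abs(phase - self.targets[0]), abs(loss - self.targets[1])]

# Stand-in surrogate used only to make the sketch runnable.
fake_funtom = lambda x: (-30.0 * x[0], 0.5 * x[1])

res = minimize(PhaseShifterSizing(fake_funtom, targets=(-45.0, 1.0)),
               NSGA2(pop_size=30), ("n_gen", 30), seed=1, verbose=False)
print(res.X[0], res.F[0])   # one Pareto-optimal sizing and its objectives
```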
### _Two-stage LNA_
Fig. 10(a) demonstrates the general structure of the two-stage LNAs tested on FuNToM. All combinations of E-networks shown in Fig. 10(b) are used for input and output matching. So, in total, FuNToM is trained and tested on 7\(\times\)7=49 different LNA topologies at each stage which results in 49\(\times\)49=2401 total topologies. There are, on average, 14 design parameters at each topology. The tested specification statistics for the two-stage LNAs are summarized in Table VI. In total, 14,000 and 600 simulations are performed for the training set of the main NN and sub-NNs, respectively, per frequency.
The number of hidden layers and neurons per layer for the main model varies between 2-4 and 32-128, respectively. Our results show that three hidden layers with 32 neurons at each layer is the simplest model that gives a high R2-score of 0.97. Table VII summarizes the average R2-score of sub-NNs and main NNs while tested on two-stage LNAs. Fig. 11(a) shows an example of a two-stage LNA tested on FuNToM. Fig. 11(b) illustrates the predicted transducer gain (\(G_{T}\)) as a sample modeling specification by FuNToM in the schematic modeling over frequency compared to the simulated values for each stage as well as the whole circuit.
Fig. 12 demonstrates the layout of Fig. 11(a) which is used for post-layout modeling. As mentioned earlier, for the post-layout modeling we only need to gather training sets for sub-NNs, while the training set of the main NN is the same as what has been gathered for the schematic analysis. Moreover, the idea of transfer learning [3] can be used to further reduce the number of training samples when going from schematic to the post-layout modeling. Fig. 13 shows the predicted transducer gain (\(G_{T}\)) in the post-layout modeling by FuNToM over frequency compared to the simulated values for each stage as well as the whole circuit.
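A sketch of such a schematic-to-post-layout warm start is shown below; the frozen layers, dataset sizes, and training settings are illustrative assumptions rather than the exact procedure of [3].

```python
import numpy as np
import tensorflow as tf

def make_sub_nn(n_params):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_params,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(4),   # [S11, S12, S21, S22]
    ])

schematic_sub_nn = make_sub_nn(3)       # assume already trained on schematic data
postlayout_sub_nn = make_sub_nn(3)
postlayout_sub_nn.set_weights(schematic_sub_nn.get_weights())   # warm start
for layer in postlayout_sub_nn.layers[:-1]:
    layer.trainable = False             # fine-tune only the output layer

# Small (dummy) post-layout dataset for this E-network.
x_pl = np.random.rand(100, 3).astype("float32")
y_pl = np.random.rand(100, 4).astype("float32")
postlayout_sub_nn.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                          loss="mean_absolute_error")
postlayout_sub_nn.fit(x_pl, y_pl, validation_split=0.1, epochs=200, verbose=0)
```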
The required number of training data as well
Fig. 11: An example of a two-stage LNA tested on FuNToM. (a) Topology; size of transistors are shown as \(\frac{W}{L}(\mu)\). (b) Schematic modeling: The simulated (ground truth) and the predicted transducer gain (\(G_{T}\)) by FuNToM over frequency. Left: Common-source (1\({}^{st}\)) stage; middle: Cascade (2\({}^{nd}\)) stage; right: the whole LNA (two-stage).
Fig. 12: The layout of Fig. 11(a).
as the post-layout modeling training time of FuNToM are also compared with [2, 13], with all methods achieving the same average R2-score of 0.95. 1600 different two-stage LNAs with the general structure of Fig. 10, selected from the 2401 available topologies, are given to all approaches for testing. Moreover, a wide range of design parameters is given to cover Table VI. As summarized in Table VIII, [13] and [2] need 493.1x and 10.9x more training data than FuNToM, respectively. Moreover, FuNToM with the FC-NN structure requires 3.3x more training data than FuNToM with the CCI-NN structure. Furthermore, the average time for collecting the training set in the post-layout modeling of FuNToM is 188.6x and 176.8x lower in comparison with [13] and [2], respectively.
To calculate this average time, we measure the average post-layout simulation time for generating each training sample needed by each approach. In [2, 13], the post-layout simulation is run on the whole and half of the circuit, respectively, for generating each sample, which is very time-consuming considering that 4-8 inductors are included. However, in FuNToM, 14,000 of the required data points are for the main NN, whose post-layout simulations are very fast as they include only transistors and modular S-parameters. Moreover, the 600 training data points for the sub-NNs are gathered by simulating E-networks that include only one or two inductors, which is relatively fast. So, as the results show, the average time for running each simulation in FuNToM is much less than in the other approaches. We save around 240,000 hours (160,000 \(\times\) 5,430s \(-\) 14,600 \(\times\) 30.7s \(\approx\) 240,000h) by using FuNToM instead of [13] in such post-layout training.
## V Conclusion
This work presents a novel RF functional modeling method, FuNToM. FuNToM divides circuits into multiple E-networks and analyzes them modularly via NNs. Leveraging such a modular analysis through the concept of S-parameters makes the NN models used in FuNToM general enough to work for multiple circuit topologies. To validate our work, FuNToM is tested on more than 1,600 phase shifters and 1,600 two-stage LNAs. The results indicate that FuNToM reduces the size of the required training data by 2.8x - 10.9x in comparison with state-of-the-art works while maintaining the same accuracy. Moreover, FuNToM needs 176.8x - 188.6x less time for collecting the training set in post-layout modeling.
|
2302.05823 | Data efficiency and extrapolation trends in neural network interatomic
potentials | Over the last few years, key architectural advances have been proposed for
neural network interatomic potentials (NNIPs), such as incorporating
message-passing networks, equivariance, or many-body expansion terms. Although
modern NNIP models exhibit small differences in energy/forces errors,
improvements in accuracy are still considered the main target when developing
new NNIP architectures. In this work, we show how architectural and
optimization choices influence the generalization of NNIPs, revealing trends in
molecular dynamics (MD) stability, data efficiency, and loss landscapes. Using
the 3BPA dataset, we show that test errors in NNIP follow a scaling relation
and can be robust to noise, but cannot predict MD stability in the
high-accuracy regime. To circumvent this problem, we propose the use of loss
landscape visualizations and a metric of loss entropy for predicting the
generalization power of NNIPs. With a large-scale study on NequIP and MACE, we
show that the loss entropy predicts out-of-distribution error and MD stability
despite being computed only on the training set. Using this probe, we
demonstrate how the choice of optimizers, loss function weighting, data
normalization, and other architectural decisions influence the extrapolation
behavior of NNIPs. Finally, we relate loss entropy to data efficiency,
demonstrating that flatter landscapes also predict learning curve slopes. Our
work provides a deep learning justification for the extrapolation performance
of many common NNIPs, and introduces tools beyond accuracy metrics that can be
used to inform the development of next-generation models. | Joshua A. Vita, Daniel Schwalbe-Koda | 2023-02-12T00:34:05Z | http://arxiv.org/abs/2302.05823v2 | # Data efficiency and extrapolation trends in neural network interatomic potentials
###### Abstract
Over the last few years, key architectural advances have been proposed for neural network interatomic potentials (NNIPs), such as incorporating message-passing networks, equivariance, or many-body expansion terms. Although modern NNIP models exhibit small differences in energy/forces errors, improvements in accuracy are still considered the main target when developing new NNIP architectures. In this work, we show how architectural and optimization choices influence the generalization of NNIPs, revealing trends in molecular dynamics (MD) stability, data efficiency, and loss landscapes. Using the 3BPA dataset, we show that test errors in NNIP follow a scaling relation and can be robust to noise, but cannot predict MD stability in the high-accuracy regime. To circumvent this problem, we propose the use of loss landscape visualizations and a metric of loss entropy for predicting the generalization power of NNIPs. With a large-scale study on NequIP and MACE, we show that the loss entropy predicts out-of-distribution error and MD stability despite being computed only on the training set. Using this probe, we demonstrate how the choice of optimizers, loss function weighting, data normalization, and other architectural decisions influence the extrapolation behavior of NNIPs. Finally, we relate loss entropy to data efficiency, demonstrating that flatter landscapes also predict learning curve slopes. Our work provides a deep learning justification for the extrapolation performance of many common NNIPs, and introduces tools beyond accuracy metrics that can be used to inform the development of next-generation models.
## 1 Introduction
Machine learning (ML) has proven extremely valuable in the materials and chemical sciences as a tool for data analysis and generation [1, 2, 3, 4]. Particularly in atomistic simulations, ML-based models offer a compelling balance between high-accuracy, high-cost quantum chemistry calculations and low-accuracy, low-cost classical force fields [5, 6, 7]. Whereas several models based on kernel regression or Gaussian processes have been proposed [8, 9, 10, 11, 12, 13, 6], recent developments in neural network (NN) interatomic potentials (IPs) have shown promise due to their low inference time, scalability to large datasets, and high accuracy in predicting potential energy surfaces (PESes) [14, 7, 9]. These methods have been used for a variety of applications, including molecular simulation, excited-state dynamics, phase transitions, chemical reactions, and more [15, 16, 17, 18, 9].
Despite their successes, NNIPs still struggle with data efficiency and robust generalization. Over the last few years, several different model architectures were proposed to reduce errors in PES fitting, decrease the amount of data required to train the models, and improve predictions for configurations beyond the training domain. In particular, NN architectures incorporating physics concepts such as directional representations and equivariance [19, 20, 21, 22, 23] or many-body interactions [24, 25, 26] have gained popularity due to higher accuracy and data efficiency. Nevertheless, recent works show that accuracy metrics over datasets are insufficient to quantify the models' quality in production simulations and motivate the use of alternative metrics such as computational speed or simulation stability [27, 28, 29, 30, 31, 32]. Different NNIP models may have similar test accuracy, but completely different extrapolation ability [33, 34]. This begs the question: **which metrics can distinguish between NNIPs with similar test error but different extrapolation behavior?**
In this work, we investigate trends in robustness and extrapolation behavior of NNIPs, and propose a metric to predict their stability in production simulations and data efficiency. In particular, we provide the following contributions:
* Using literature data and two state-of-the-art NNIP models (NequIP and MACE), we show that extrapolation trends can be obtained from error metrics in the 3BPA dataset. For example, the NNIPs are able to recover the underlying PES despite being trained to noisy labels, and have extrapolation errors that follow scaling relations against in-domain accuracy metrics. Although these trends hold across different architectures, we show that this scaling relation is not recovered in the high-accuracy regime, and thus cannot be used to downselect robust models.
* To circumvent the limitations above, we propose that loss landscapes (LLs) can provide evaluation strategies for NNIPs beyond accuracy metrics. Qualitative inspection of LLs explains some heuristic training regimes for NNIPs, such as the use of higher weights for forces loss, weight cycling, or separating learning rates for different parts of the architecture, thus providing theoretical justification for certain hyperparameters.
* Using a large-scale study with NequIP and MACE, we show that flatness of LLs correlates with model robustness. In particular, we quantify the loss entropy around the optimized models and relate them to errors in the extrapolation regime. Furthermore, we show that molecular dynamics simulations of models with flatter LLs exhibit less unphysical behavior than their sharper counterparts.
This combination of theoretical justification, benchmarking, and a new metric can aid the development of newer NNIP architectures with higher accuracy, trainability, and robustness.
## 2 Background
**NNIPs:** NN-based force fields have been first proposed using feedforward NNs and symmetry-based representations [14]. In these systems, improvements are achieved by designing new representations to better capture the atomic environment [35, 36, 37, 38, 39, 40, 41]. More recently, message-passing neural networks (MPNNs) [42] showed remarkable ability to fit PESes using learned representations. In this area, most works compare models according to their accuracy with respect to standardized datasets, such as QM9 [43, 44], MD17 [8], and others. MPNN-based potentials often vary according to their interaction blocks, handling of symmetry operations, and general architectural choices.
**NN representation capacity and generalization:** although NNs have a large number of parameters, obtaining NN-based models with low generalization error is not uncommon. Preventing overfitting may require regularization techniques such as changing the loss function, e.g. with weight regularization or decay, augmenting the dataset, or using training protocols such as adaptive learning rates, dropout, and more [45, 46, 47]. However, these are not requirements to controlling the generalization error [48]. For example, NNs with good generalization capacity have been shown to overfit to random labels and input data [48]. This suggests that, given enough parameters, even architecturally regularized NNs exhibit wide representation capacity for arbitrary data sets. In some cases, however, training to noisy labels can only be overcome by adapting architectures, regularization techniques, and correcting the loss function [45].
**Loss landscapes (LLs):** the shape of the LL is strongly correlated with the trainability of NN architectures. The minimization of the empirical risk is easier when the (non-convex) LL is smoother, as it yields more predictive gradients [49, 50]. Furthermore, LLs with several local minima exhibit lower trainability than their smoother counterparts [51, 47]. Some works proposed that flatter LLs are also related to lower NN generalization error [51, 52, 53]. Although this relationship is often complicated by factors such as batch size [46], normalization [54], or weight decay, the sharpness of the LL is the most predictive of generalization error in NNs [55]. Whereas this sharpness/flatness can be quantified using the Hessian of the loss [46, 47] or assumptions of a prior on the weights [56], visualizations of the LL have proven useful to illustrate these minima with respect to the weight space [57, 58, 59, 60] without the cost associated to calculating the full Hessian. Analogies with potential energy landscapes have also been performed to explore the cost functions of other machine learning methods and explain the shape of these optimization landscapes [61, 62].
## 3 Methods
### Visualizing loss landscapes
The loss landscape \(\ell\) of a neural network can be plotted by evaluating the loss function \(\mathcal{L}\) along a trajectory between two parameter sets \(\mathbf{\theta}\) and \(\mathbf{\theta}^{\prime}\). The simplest approach is to linearly interpolate the weights [57], choosing a scalar \(t\in[0,1]\) such that \(\mathbf{\theta}(t)=(1-t)\mathbf{\theta}+t\mathbf{\theta}^{\prime}\). Then, the loss landscape \(\ell\) for a model becomes
\[\ell(t)=\mathcal{L}\left(\mathbf{\theta}(t)\right)=\mathcal{L}\left((1-t)\mathbf{ \theta}+t\mathbf{\theta}^{\prime}\right). \tag{1}\]
In this work, the loss \(\mathcal{L}\) is evaluated on the training set of the model with parameters \(\mathbf{\theta}\).
In the absence of the reference weights \(\mathbf{\theta}^{\prime}\), the LL can be constructed by sampling a random vector \(\mathbf{\delta}\) in the parameter space and plotting the LL around \(\mathbf{\theta}\) as
\[\ell(t;\mathbf{\delta})=\mathcal{L}\left(\mathbf{\theta}+t\mathbf{\delta}\right), \tag{2}\]
where the domain of \(t\) is appropriately chosen to span a neighborhood of \(\mathbf{\theta}\). This notion can be extended to 2D LLs by taking two orthogonal vectors \(\mathbf{\delta}_{1},\mathbf{\delta}_{2}\) such that
\[\ell(t_{1},t_{2};\mathbf{\delta}_{1},\mathbf{\delta}_{2})=\mathcal{L}\left(\mathbf{\theta} +t_{1}\mathbf{\delta}_{1}+t_{2}\mathbf{\delta}_{2}\right), \tag{3}\]
with scalars \(t_{1},t_{2}\) chosen to span a (two-dimensional) neighborhood of \(\mathbf{\theta}\). These approaches have been used to study the LLs of NN classifiers in image datasets, interpolate between sets of classifiers, and explore the loss function around degenerate minima [63, 64, 65, 46].
One challenge when analyzing LLs is comparing different models according to their parameters. Because activation functions such as ReLU allow for scale-invariance of NN weights, especially when coupled with batch normalization techniques, the magnitude of the vector \(\mathbf{\delta}\) is not transferable from model to model. This prevents a fair comparison of LLs, curvatures, and sharpness metrics. To account for this, we use the filter normalization technique proposed by Li et al. [58]. Therein, each random vector \(\mathbf{\delta}\) is normalized by the scale of each filter \(i\), in each layer \(j\) of the NN, i.e.
\[\mathbf{\delta}^{(i,j)}=\frac{\mathbf{\delta}^{(i,j)}}{\|\mathbf{\delta}^{(i,j)}\|}\|\mathbf{ \theta}^{(i,j)}\|, \tag{4}\]
where \(\|.\|\) is the Frobenius norm. Then, the LL is plotted according to the filter-normalized vector \(\mathbf{\bar{\delta}}\),
\[\ell(t;\mathbf{\bar{\delta}})=\mathcal{L}\left(\mathbf{\theta}+t\mathbf{\bar{\delta}} \right), \tag{5}\]
and analogously for 2D LLs.
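A minimal PyTorch sketch of Equations (4)-(5) is given below; `loss_fn` stands for a full evaluation of the training-set loss and is left abstract, since it depends on the NNIP at hand.

```python
import torch

def filter_normalized_direction(model):
    """Random direction with each filter rescaled to the model's filter norms (Eq. 4)."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if d.dim() > 1:                  # weights: normalize per filter (first dim)
            for di, pi in zip(d, p):
                di.mul_(pi.norm() / (di.norm() + 1e-10))
        else:                            # biases/scalars: match the overall norm
            d.mul_(p.norm() / (d.norm() + 1e-10))
        direction.append(d)
    return direction

@torch.no_grad()
def loss_profile(model, loss_fn, ts, direction):
    """Evaluate Eq. 5 along one filter-normalized direction."""
    theta = [p.detach().clone() for p in model.parameters()]
    profile = []
    for t in ts:
        for p, p0, d in zip(model.parameters(), theta, direction):
            p.copy_(p0 + t * d)
        profile.append(float(loss_fn(model)))
    for p, p0 in zip(model.parameters(), theta):   # restore original weights
        p.copy_(p0)
    return profile
```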
Although informative, sampling LLs can have cost comparable to training the NN, since evaluating the loss for each interpolated weight in each direction is equivalent to one training epoch. Depending on the number of parameters and directions \(\mathbf{\delta}_{n}\) under analysis, the loss may be evaluated over the entire training dataset a large number of times.
### Quantitative comparison of loss landscapes
Despite the usefulness of visualizing loss landscapes, comparing them beyond qualitative insights requires metrics to differentiate them. The most commonly used metric in the field is the curvature of the LL [55], which is related to the magnitude of the eigenvalues of the loss Hessian. Although informative, the Hessian is a local property and cannot fully capture "valley-like" degeneracies in loss landscapes. Furthermore, computing the full Hessian for a system with millions of training parameters would be intractable. Thus, alternative metrics to compare LLs are required.
In the case of NNIPs, which are often trained to both energies and forces, two LLs are obtained per model, one for each target value being trained to. Although they cannot be completely disentangled, both LLs should be compared simultaneously to assess model performance. To derive one such metric, we propose the use of "loss entropy" [66, 51] to quantify loss flatness around optimized minima. Although variations of this quantity have been proposed, we use the formula
\[S(T)=k\log\left[\sum_{t}\exp\left(\frac{-\bar{\ell}\left(t\right)}{kT}\right) \right], \tag{6}\]
where \(S\) is the entropy of the loss landscape \(\bar{\ell}(t)\) computed with respect to energy or forces and averaged over \(N\) orthogonal, filter-normalized weight displacements,
\[\bar{\ell}\left(t\right)=\frac{1}{N}\sum_{n}\ell(t;\mathbf{\bar{\delta}}_{n}). \tag{7}\]
and \(kT\) is a weighting parameter that quantifies the "flatness" of the LL with respect to a certain acceptable threshold of training loss. This is similar to distributions of microstates accessible in a given "temperature," and ensures that low-loss states contribute to a much larger entropy than high-loss states.
As the entropy of the LL of a NNIP takes into account both energy and forces losses, we adopt a strategy similar to the one used during training of the NNs by balancing energy and forces losses with a weighted sum,
\[S=\alpha S_{E}(T_{E})+(1-\alpha)S_{F}(T_{F}), \tag{8}\]
where \(\alpha\) is a dimensionless parameter between 0 and 1 that weights the entropy of the energy loss (\(S_{E}\)) and forces loss (\(S_{F}\)). As these losses have different units, they have to be computed with different normalization parameters \(k_{E}T_{E}\) and \(k_{F}T_{F}\), each of which takes into account "thermal randomness" of the training loss. For simplicity, in this work, we adopt \(k=1\) as a dimensionless parameter, and assign the adequate units to the "thermal error" for better interpretability of the loss entropy.
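A small numerical sketch of Equations (6)-(8) is shown below; the loss profiles, units, and threshold values are hypothetical.

```python
import numpy as np

def loss_entropy(profile, kT, k=1.0):
    """Eq. 6: log-sum-exp of the direction-averaged loss profile ell_bar(t)."""
    profile = np.asarray(profile)
    return k * np.log(np.sum(np.exp(-profile / (k * kT))))

# Direction-averaged energy/forces loss profiles over a few displacements t
# (hypothetical values and units), and the corresponding "thermal errors".
energy_profile = [0.02, 0.05, 0.30]
forces_profile = [0.04, 0.06, 0.50]
alpha, kT_E, kT_F = 0.5, 0.05, 0.10

S = (alpha * loss_entropy(energy_profile, kT_E)
     + (1 - alpha) * loss_entropy(forces_profile, kT_F))
print(S)   # flatter (lower-loss) profiles yield a larger entropy
```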
### NNIPs and dataset
**NequIP**[23] is an equivariant NNIP that uses Clebsch-Gordon transformations and spherical harmonics to incorporate equivariance in the model. NequIP demonstrates state-of-the-art accuracy in several datasets, data efficiency, and has been employed to simulate a variety of organic and inorganic systems.
**MACE**[25] is an equivariant NNIP that uses higher-order messages to efficiently embed information beyond two-body interactions in traditional MPNNs. The model demonstrates state-of-the-art performance in a variety of benchmarks, faster learning, and competitive computational cost.
**Other NNIPs** proposed recently and not benchmarked in this study include Allegro [26], GemNet [67], DimeNet [68], HIP-NN [69], NewtonNet [70], BOTNet [24], SchNet [71], PaiNN [72], and others [21, 37, 39, 73].
Training details are provided in Appendix A.
**The 3BPA dataset**[28] under study in this work was chosen due to its previous use in the literature for benchmarking the extrapolation behavior of NNIP models [24, 25, 26]. The benchmark involves training models on low-temperature samples and evaluating their performance on held-out samples from high-temperature simulations. Distributions of energies and forces of the 3BPA dataset are shown in Appendix D.1, Fig. S2. Additional results using the ANI-Al dataset [74] can be found in Appendix D.2.
## 4 Results and Discussion
### Robustness to noise and extrapolation trends in NNIPs
When comparing NNIPs, metrics of interest typically include errors in predicting forces and energies of a test dataset, and are appropriately used as baselines for assessing model quality. However, accuracy metrics are not necessarily predictive of extrapolation power [29]. A first hypothesis to consider when analyzing NNIP extrapolation is whether NNIPs can learn a PES despite being trained on noisy data. Although this test does not measure extrapolation to out-of-distribution data, it verifies whether models are expected to overfit to corrupted training data, which would lower their robust generalization power. For example, NN-based classifiers can overfit to random labels in image datasets or to completely random inputs [48], even in architectures with good generalization error that are designed to prevent overfitting. Most NNIPs have enough parameters to memorize the training data, but standard regularization and architectural choices can curb overfitting in NNIPs, leading to lower generalization errors.
To test this hypothesis, we trained four different NNIPs on the 3BPA dataset and analyzed their _training_ error. Following the lead of the deep learning literature [48], we then gradually corrupted the labels of the training set by adding a random sample from \(\mathcal{N}(0,\sigma\cdot\sigma_{\text{DFT}})\) to the true forces, where \(\sigma_{\text{DFT}}\) is the standard deviation of the forces predicted by density functional theory (DFT), and \(\sigma\) is a scalar ranging from 0.0 to 0.1 (see Appendix D.1). In principle, NN regressors with arbitrary levels of expressivity (or absent regularization) could achieve low training error even on these noisy PESes. Figure 1a shows the error for NNIPs trained on the 3BPA dataset with corrupted forces and tested on the original, uncorrupted data. When the forces are not corrupted, models exhibit reasonable training errors lower than 40 meV/Å, as expected from their nominal performances in energy prediction [23, 25]. However, even small amounts of noise in the forces prevent the noisy dataset from being memorized with high accuracy, with the training loss plateauing instead of tending to zero. The ability of the models to predict the noisy forces saturates at the limit of the noise, indicating that these NNIPs do not memorize the high-frequency labels. On the other hand, when the test error of the
models trained with corrupted labels is computed with respect to the uncorrupted dataset, the error is substantially smaller than the noise baseline (see the "original" panel of Fig. 1a). Thus, the NNIPs under analysis are able to learn the underlying PES in the 3BPA dataset despite the added noise.
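For concreteness, the corruption protocol above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the array layout and the use of a single global \(\sigma_{\text{DFT}}\) computed from the force components are assumptions.

```python
import numpy as np

def corrupt_forces(forces, sigma, rng=None):
    """Add noise drawn from N(0, sigma * sigma_DFT) to a force array.

    forces: array of shape (n_frames, n_atoms, 3); sigma is the scalar
    in [0.0, 0.1] from the text; sigma_DFT is taken here as the standard
    deviation of the DFT force components (an assumption).
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma_dft = forces.std()
    return forces + rng.normal(0.0, sigma * sigma_dft, size=forces.shape)
```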
Contrary to the overfitting hypothesis, these results suggest that data redundancy in the training set may help NNIPs to "denoise" the data. To illustrate this effect, we show in Appendix C how data redundancy downplays the effect of external noise in a toy system. Fig. S1 shows how a large number of training points can counterbalance the effect of external noise when predicting the original, non-noisy data for the case of linear regressor models. In this case, the model averages out the noise and recovers the true function even at high levels of added noise. On the other hand, at the low-data regime, the regression model is unable to recover the true function, and its error quickly grows. Although the results from the linear model may not directly translate to the case of NNs, Fig. 1a shows that, to an extent, NNIPs are able to "denoise" forces from the dataset due to architectural, training, or implicit data regularization.
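The data-redundancy effect can be reproduced with a tiny stand-in for the Appendix C toy system (which is not reproduced in this excerpt, so the setup below is illustrative only): a linear regressor fit to heavily noise-corrupted samples of a linear function recovers the noise-free function once enough points are available.

```python
import numpy as np

rng = np.random.default_rng(0)
w_true, b_true = 2.0, -1.0      # noise-free "PES" y = w*x + b
sigma = 5.0                      # large label noise

for n in (10, 100, 10_000):
    x = rng.uniform(-1.0, 1.0, n)
    y = w_true * x + b_true + rng.normal(0.0, sigma, n)
    w_fit, b_fit = np.polyfit(x, y, 1)          # least-squares fit
    x_test = np.linspace(-1.0, 1.0, 1000)
    rmse = np.sqrt(np.mean((w_fit * x_test + b_fit
                            - (w_true * x_test + b_true)) ** 2))
    # error w.r.t. the *noise-free* function shrinks as n grows
    print(f"n={n:>6d}  RMSE vs. noise-free function: {rmse:.3f}")
```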
To verify if this behavior was specific to the 3BPA dataset or could generalize to other datasets, we corrupted the energies or forces of the ANI-Al dataset and trained different models on the noisy values (see Appendix D.2), obtaining similar results even when only energies were considered. As the ANI-Al example exhibited even better results than the 3BPA, we tested whether a drastic increase in the injected noise could still be denoised by the NNIPs under study. Following the results of Fig. S1, we trained the two NNIP architectures under study to PESes with energy noises up to twenty times higher than those in Fig. 1 (see Fig. S12), and up to twice the standard deviation of the original dataset distribution of the ANI-Al dataset. Although the distribution of per-atom energies shows that the noisy PES is completely different than the original one (Fig. S12b,c), all models succeeded in modeling the underlying PES below the error baseline (Fig. S12). As also illustrated by the toy example, the performance of the models degrades as extremely large amounts of noise are added. Nonetheless, errors with respect to the non-noisy dataset are remarkably low considering the corruption baselines.
The toy example from Appendix C can be considered an upper bound of the "dataset denoising" ability, given that a functional form of the inputs is known and the model could, in principle, fit perfectly to the data. As the trends of NNIPs trained on the 3BPA or ANI-Al potential approach this behavior, it can be concluded that the two NNIP architectures are able to "denoise" the external noise added to the datasets, possibly due to data redundancy. As generalization tests
Figure 1: **a**, Force RMSE values (eV/Å) for models trained to only the forces of increasingly noisy versions of the 3BPA dataset. The models trained on noisy data are then evaluated on their noisy training set (“noisy” panel) or on the original, uncorrupted dataset (“original”). The dashed lines correspond to the amount of noise that was added to the DFT forces in units of eV/Å. The legend denotes the maximum body order of the MACE models (\(v\)), the number of interaction layers in the NequIP models (\(n\)), or the maximum order of the model irreps (\(L\)). All MACE models in **a** used only 2 interaction layers. **b**, Relationship between extrapolation ability, quantified using the slope of the test errors on the 3BPA dataset with increasing temperature (“extrapolation slope”), and the test error at 300 K. Slopes were computed by performing a linear fit to values taken from the literature for MACE [25], BOTNet [24], and others [26] (see Fig. S3). The dashed line is a visual guide. **c**, Forces RMSE on the 300 K test set of several NequIP and MACE models with different architectures, but similar testing errors on the 300 K split. (top) Correlation between test errors on all splits (300, 600, and 1200 K) and test error on the 300 K split for models with similar test errors. (bottom) Average time-to-failure of MD simulation for each of the models with similar test errors.
assume the model is being tested on unseen data, it is not clear whether the accuracy reflects the quality of the model predictions or simply their ability to reproduce local environments existing in the training data. This is particularly important in the high-data regime, when the test dataset may be correlated to the train dataset in non-obvious ways.
To bypass this problem, alternative strategies were proposed to measure generalization power, including separating train-test splits according to sampling temperature, as is the case for the 3BPA dataset [28]. While testing a model on held-out samples from high-temperature simulations is typically considered an independent evaluation of its performance, we have found that extrapolation errors are correlated with low-temperature test errors across various model architectures. This is performed by fitting a linear model to the errors on the 3BPA testing sets at 300, 600, and 1200 K from the literature [24, 25, 26] (see Fig. S3), then using the slope of the fitted line as the associated metric. Fig. 1b shows that all the models that were trained only to 3BPA frames at 300 K follow an approximate linear scaling relation between the extrapolation slope and the log of the low-temperature errors. This correlation between the low- and high-temperature data does not strictly preclude the use of the 3BPA dataset for assessing a fitted model's extrapolation abilities, as the extrapolation slope represents how much the accuracy of a given model degrades as the sampling temperature increases. Nevertheless, it suggests that data and model regularization effects may be enforcing extrapolation trends in wide error ranges. For example, a model like ACE [24, 41] is known to provide functional forms that aid extrapolation beyond the training data [28]. Despite this, the ACE model for 3BPA follows the same trends as more complex message-passing NNs. In fact, the generalization slopes are correlated to their low-temperature error regardless of significant architectural differences. This indicates that, for this benchmark, the root mean squared errors (RMSEs) at higher temperatures can be estimated from the test error at 300 K even without evaluating the model at the other test sets.
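The extrapolation-slope metric reduces to a one-line fit; a sketch follows, where fitting the raw (rather than log) errors against temperature is an assumption about the exact protocol.

```python
import numpy as np

def extrapolation_slope(rmse_by_temp):
    """Slope of a linear fit to test errors vs. sampling temperature,
    e.g. rmse_by_temp = {300: 0.02, 600: 0.04, 1200: 0.09} (eV/A)."""
    temps = np.array(sorted(rmse_by_temp), dtype=float)
    errs = np.array([rmse_by_temp[t] for t in temps])
    slope, _intercept = np.polyfit(temps, errs, 1)
    return slope
```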
The exceptions to this rule are the two ANI models ("ANI-2x" and "ANI-pretrained"), which were pre-trained on the 8.9 million configurations from the ANI-2x dataset [75]. As seen in other fields of deep learning [76], the pre-trained models extrapolate better than all other models, though fine-tuning on 3BPA ("ANI-pretrained") leads to a slightly worse extrapolation slope. These results suggest that: (1) more diverse datasets may be required for assessing the extrapolation and generalization capacity of a model; and (2) pre-training on large datasets may be required to create universal NNIPs [77], given that pre-trained ANI models were able to escape the scaling relation seen in Fig. 1b.
Although the scaling relation can estimate the extrapolation slope within reasonable bounds, it is unable to recover trends within the same architecture in the low-error regime. As best-performing models often have differences of force errors smaller than 5 meV/Å in the 300 K test set, it is not clear whether their extrapolation behavior can be accurately recovered from the scaling relation. Indeed, Fig. 1c shows the correlation between the forces RMSE computed on the 300 K split and all splits (300, 600, and 1200 K, "high T") for several NequIP and MACE models with different hyperparameters and training methods (see Tables S3 and S6), but similar testing error at the 300 K split. Although there is a positive correlation between the two RMSE metrics, the dispersion of points indicates that models with nearly the same RMSE for the 300 K data split show discrepancies in error above 10 meV/Å when all splits are taken into account.
This scenario is aggravated when molecular dynamics (MD) simulations are used to test the extrapolation power of the NNIPs. When measuring the average simulation times for each of the models (see Appendix B for details on the MD simulations), no clear correlation is obtained between the error on the 300 K split of 3BPA and the average (physically meaningful) simulation length (Fig. 1c). This motivates the creation of a metric that captures robustness trends in NNIPs.
### Training insights derived from loss landscapes
Beyond data regularization, architectural and training choices strongly influence the generalization ability of NN models. Relevant aspects include model initialization, hyperparameter optimization, choice of optimizer, batch sizes, and many others. Although the process of training NNs strongly affects their extrapolation behavior, good NN models systematically outperform their counterparts by optimizing to better local minima in the LL [54]. Thus, using these insights, we propose that _loss landscapes of NNIP models can predict their generalization ability towards unseen data despite using only the training data_. Correlations between robust generalization and loss sharpness have been observed for other NN models in the literature [55, 51], but have not yet been explored in the context of NNIPs.
To first verify whether qualitative insights could be derived from LLs in the context of NNIPs, we investigated the behavior of the loss function around the optimized minima of the NNIP models trained to the non-noisy 3BPA dataset. To ensure the LL visualizations were statistically meaningful, we sampled 20 different orthogonal directions for each set of parameters and models, and interpolated them using the filter-normalized method described in Sec. 3.1 (see also Fig. 2a). Then, we compared the LLs of the NNIP models using 2D visualizations, as often done for NN classifiers [58]. Qualitative inspection of the 2D LLs in Fig. 2b reveals the presence of weight degeneracies in the models' energy predictions (see also Fig. S14 for another example on the ANI-Al dataset). This "valley-like" landscape represents a
subspace of weights leading to similar accuracy in energy [78], and reflects the interplay between energy and forces during training. These results agree with the literature regarding LLs of over-parameterized models [79], as well as the notion that physical systems often result in so-called "sloppy" models [80; 81]; such degeneracies can improve trainability and interpolation [82].
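The sampling procedure behind these visualizations (Sec. 3.1) can be sketched as follows. This is a simplified illustration: `loss_fn`, the handling of 1-D parameters, and the grid of `alphas` are assumptions, and the real pipeline averages over \(n=20\) directions.

```python
import torch

def filter_normalized_direction(model):
    """Random direction in parameter space, rescaled filter-wise so each
    filter of the direction matches the norm of the corresponding trained
    filter (Li et al.'s filter normalization [58])."""
    direction = []
    for p in model.parameters():
        d = torch.randn_like(p)
        if p.dim() > 1:                        # conv/linear weights: per-filter
            for dd, pp in zip(d, p):
                dd.mul_(pp.norm() / (dd.norm() + 1e-10))
        else:                                   # biases/1-D params: per-tensor
            d.mul_(p.norm() / (d.norm() + 1e-10))
        direction.append(d)
    return direction

def loss_along_direction(model, loss_fn, direction, alphas):
    """Evaluate loss_fn(model) at weights + alpha * direction for each alpha."""
    base = [p.detach().clone() for p in model.parameters()]
    losses = []
    with torch.no_grad():
        for a in alphas:
            for p, p0, d in zip(model.parameters(), base, direction):
                p.copy_(p0 + a * d)
            losses.append(float(loss_fn(model)))
        for p, p0 in zip(model.parameters(), base):  # restore weights
            p.copy_(p0)
    return losses
```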
Qualitative analysis of LLs also explains other factors typically found as heuristics of NNIP training. For example, the energy/forces coefficients in the final loss are often defined from hyperparameter optimization [23; 83] and have ratios varying from 1:10 to 1:100 for energy:forces RMSE. Nevertheless, the success of these higher ratios can be justified from the perspective of the LL. In Fig. 2b, we show how a higher weight on force losses leads to LLs with fewer saddle points around the optimized minima, thus favoring training. Although energy and forces are related and completely disentangling their effects is not achievable, the interpolation in Fig. 2b shows how mixing both can lead to better optimization landscapes for NNIPs. Based on these results, an effective training regimen would start with a relatively large weight for the force loss for a fast (and smoother) optimization of forces. Once the force errors are reasonably converged, the weight can be decreased until a desired threshold in energy errors is achieved. Similar strategies with weight cycling and scheduling -- e.g., starting with an energy:force loss weighting of 1:10, but then switching to 1000:1 in the later stages of training -- can also be effective.
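As an illustration of this scheduling idea (the fixed switch point below is a placeholder, not a recommendation from the paper):

```python
def loss_weights(epoch, switch_epoch=500):
    """Energy/force loss weights (E_w, F_w): force-dominated 1:10 early on,
    then energy-dominated 1000:1 once forces are reasonably converged.
    Switching at a fixed epoch is a simplification; in practice the switch
    would follow a convergence criterion on the force error."""
    return (1.0, 10.0) if epoch < switch_epoch else (1000.0, 1.0)

# total_loss = E_w * energy_loss + F_w * force_loss
```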
One limitation of visualizations of the high-dimensional LL is that different directions in weight perturbation may lead to similar losses. This results in only slight variations in landscape topography with different random directions, as noted in Fig. 2. This may be related to the dominance of specific layers of the model in the filter normalization technique. Figure S13 shows how the distribution of weights is non-uniform, suggesting some layers have higher sensitivity to weight perturbation than others. As the filter normalization displaces the model weights along random directions with magnitude proportional to the norm of each filter and layer, parameters with higher weights may influence certain regions of the loss landscapes. An exception to this point is when the parameters are intertwined with functions embedded in the architecture, such as the trainable Bessel functions in NequIP. As shown in Fig. S6, freezing certain high-magnitude weights when generating the LLs can help flatten the landscape and remove spurious minima, emphasizing the importance of proper regularization and of a training regimen that takes these effects into account (e.g., separate learning rates for certain layers in the model).
### Loss landscapes predict extrapolation trends in NequIP
In the context of NNIPs, robust generalization has important ramifications in two distinct areas: the stability of the model in production (e.g., MD simulations), and its data efficiency during training. To probe the first of these features, we trained NequIP models using various choices of model architecture and optimization techniques (see Table S1) on the low-temperature split of the 3BPA dataset. Then, we performed MD simulations in the NVT ensemble with a high temperature of 1600 K (Appendix B) to ensure all models were extrapolating well beyond their training split. Furthermore, as the 3BPA benchmark already provides geometries sampled from simulations up to 1200 K, the MD
Figure 2: **a**, A schematic describing the process of generating loss landscapes. Starting at the origin with a trained model, landscapes are constructed by performing a grid-sampling along a random filter-normalized direction [58] in parameter space up to a chosen maximum distance. In order to ensure that that results were consistent regardless of the choice of random sampling direction, landscapes were averaged over multiple random directions, \(\delta_{1}\dots\delta_{n}\), with \(n=20\) usually showing only slight variations in landscape topography. **b**, 2D LLs for a MACE model trained to the 3BPA dataset, generated by sampling a plane defined by two random directions. The model was trained using \(E_{w}=1\) and \(F_{w}=1000.0\), but the final landscape was re-weighted for the purpose of this figure using a fixed value of \(E_{w}=1.0\) and \(F_{w}\) increasing from 0.0 to 10.0 to demonstrate the effects of different weights on optimization.
trajectories at 1600 K could provide information beyond the 1200 K splits provided by the dataset. For each model in the study, 30 MD trajectories were simulated to obtain reasonable statistics on the model behavior, with simulation lengths of up to 6 ps and a timestep of 1 fs. In principle, a model's ability to extrapolate beyond the training set can be assessed by the trajectories that behave in a physical way [34], as well as the average trajectory length until the model fails to extrapolate (Appendix B).
Using the results from MD simulations, we investigate whether LLs can predict the extrapolation behavior and trajectory stability of models. While the test performance on low-temperature samples is unable to make such predictions (Sec. 4.1), the test error on high-temperature samples is still expected to be a reasonable predictor of such stability. Nevertheless, whereas high-temperature samples are available in the 3BPA dataset, out-of-domain data is often not available for an arbitrary material system. In addition to being expensive to generate using ground-truth methods, sampling new configurations is rarely performed in an exhaustive way. Thus, the major advantage of extrapolation tests based on LLs is their reliance only on the training dataset. To quantify the sharpness of the LLs [55], we compute the loss entropy as described in Sec. 3.2, with \(T_{E}\) and \(T_{F}\) taken to be reasonable "room temperature" values of the energy/force RMSEs, respectively (\(T_{E}=4\) meV/atom, \(T_{F}=40\) meV/Å). The weight \(\alpha\) was set to \(0.2\) to resemble the higher force weighting used during training without ignoring the contributions of energy LLs in model stability. Nevertheless, a sensitivity analysis of the loss entropy with respect to these parameters showed that results remained consistent around these error ranges, as long as unreasonable temperature values were not used (see Fig. S4).
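Since the exact definition from Sec. 3.2 is not reproduced in this excerpt, the sketch below only conveys the flavor of the metric: a Boltzmann-style aggregation of losses sampled along filter-normalized directions. The functional form itself is an assumption; only the temperatures and the weight \(\alpha\) are taken from the text.

```python
import numpy as np

def loss_entropy(energy_losses, force_losses,
                 T_E=0.004, T_F=0.040, alpha=0.2):
    """Hypothetical flatness measure over grid-sampled loss landscapes.

    energy_losses (eV/atom) and force_losses (eV/A) are losses sampled
    along filter-normalized directions; T_E, T_F, and alpha match the
    values quoted in the text. Flatter landscapes keep the sampled
    losses small, yielding a larger entropy.
    """
    w = (alpha * np.asarray(energy_losses) / T_E
         + (1.0 - alpha) * np.asarray(force_losses) / T_F)
    return float(np.log(np.mean(np.exp(-w))))
```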
The relationship between the model/training parameters, the MD stability, and the LL entropy is shown in Fig. 3 (see also Table S2). The analysis is grouped into four experiments (columns in Fig. 3) to isolate the effects of specific portions of the training procedure and model architecture, thus uncovering useful trends for NNIP practitioners. For example, considering only the distributions of time to failure from MD simulations in Fig. 3b, it can be seen that rescaling the energies predicted by the model (models "no rescaling" vs. "rescaling") greatly improves the stability of the model, especially when the Bessel weights used by the radial basis are trainable. This behavior is reflected in the energy and forces LLs (Fig. 3a), with the LL of the model without rescaling showing considerable sharpness compared to the models using rescaled energies. Quantitative trends are also obtained when the loss entropy and test RMSE at all splits in the 3BPA dataset are computed (Fig. 3c, first column, and Table S3). Models with rescaling showed higher average simulation time with physical trajectories, as well as increased entropy and lower forces RMSE.
When comparing how the training regime can change the extrapolation behavior of NequIP, the results show that the AMSGrad variant of the Adam optimizer [84] leads to consistent improvements in the model extrapolation both for the 2-layer and the 5-layer model (Fig. 3b, second and third column) compared to the baseline, which does not use AMSGrad. The improvements of simulation quality are particularly pronounced in the 5-layer models, where the model trained using AMSGrad shows significant improvements in simulation stability compared to the baseline despite not using trainable Bessel functions (Table S1). On the other hand, the exponential moving average (EMA) appears to degrade the extrapolation performance of the NNIPs, often leading to sharper LLs if used in isolation and with constant learning rates. These trends are reflected in the extrapolation metrics (Fig. 3c). Although the loss entropy underperforms compared to the forces RMSE in the case of the 2-layer models, the trend is correctly captured for the 5-layer model, where the model trained with AMSGrad showed both higher loss entropy and higher MD stability.
Finally, NequIP models can be compared according to the number of message-passing layers. Using a fixed training regime, a higher number of layers showed pronounced improvement both in the stable simulation time (Fig. 3b), as well as in the forces RMSE and loss entropy (Fig. 3c), with the trend remaining consistent across these models. Interestingly, the improvement in loss entropy for the 5-layer model is reflected mostly in the energy LL rather than the force LL (Fig. 3a). Although the energy is not required when integrating the equations of motion in the NVT ensemble during an MD simulation, this improvement in LL may reflect the model's higher generalization capacity, and thus wider minima. This phenomenon can be explained by recognizing that the out-of-domain data often results in a horizontally shifted version of the loss landscape [46]. To explain this effect, we computed the LL of NequIP models with respect to the test set instead of the training set (see Fig. S5). As expected, the test LL shows higher errors compared to the train LL, as seen in the vertical shift of the curves. However, while the forces LL only undergoes a vertical shift, test energy LLs also undergo a horizontal shift, indicating that the optimal model for the training set is not necessarily optimal for predicting energies of the testing set. Furthermore, as the forces loss is often derived from the energy in NNIPs, the propagation of these mismatches may be responsible for degradation in extrapolation performance. Thus, in general, the results in Fig. 3 show that architectures and optimization strategies which result in loss landscapes with higher entropy (i.e., flatter landscapes) tend to demonstrate improved stability in MD simulations that sample out-of-domain configurations.
### Loss landscapes predict extrapolation trends in MACE
To confirm that the results from Sec. 4.3 could be extended beyond NequIP, we performed a similar study using the MACE framework. Given the differences in architecture and available code, MACE has different hyperparameters and
optimizer choices than NequIP (see Table S4). Nevertheless, a similar study for MACE reproduces the trends seen for NequIP, as shown in Fig. 4.
As seen in Sec. 4.3, model stability can be greatly improved by using rescaling and AMSGrad. Also for MACE, EMA appears to lower the time-to-failure if used in isolation, with similar trends in out-of-domain RMSE, loss entropy, and MD stability seen in the NequIP case (Fig. 4). In addition, we also explored the effects of stochastic weight averaging (SWA) [85] and weight cycling (WC) (column 2 of Fig. 4) implemented in the MACE code. Consistent with results
Figure 3: Analysis of LLs and MD simulation stability results for NequIP models while varying: (column 1) rescaling and Bessel weights, (column 2) optimizer settings using a model with two interaction blocks or (column 3) five interaction blocks, and (column 4) the number of interaction blocks. **a**, LLs for each model and **b**, distributions of time to failure computed over 30 MD simulations. **c**, Time to failure compared to (top) the entropy of the loss landscape (as defined in Sec. 3.2) or the forces RMSE on the high-temperature test sets. The horizontal line, box, and whiskers in **b** depict the median, interquartile range, and points within 150% of the interquartile range of the distribution. Isolated diamonds are points outside of this range.
from the use of SWA for image classification tasks [85], SWA + WC is shown to improve model generalizability, leading to higher simulation times and flatter loss landscapes. Interestingly, the model trained with SWA + WC exhibits similar force RMSE in high-temperature samples compared to the baseline, but a higher loss entropy and higher MD stability (Fig. 4c). As SWA flattens the loss landscape by design [85], implementing this strategy in other NNIP models may be a low-cost modification for improving the generalization capacity of different architectures. The use of WC can also help the optimization process (Sec. 4.2) and lead to better energy landscapes overall.
Figure 4: Analysis of loss landscapes and MD simulation stability results for MACE models while varying: (column 1) the use of rescaling; (column 2) optimizer settings using a model with a fixed body order, \(v=2\), and symmetry order, \(L=3\); (column 3) the symmetry order of the edge features with a fixed body order of 2; and (column 4) the body order with a fixed symmetry order of 3. **a**, Loss landscapes for each model. **b**, distributions of time-to-failure computed over 30 MD simulations. **c**, Time-to-failure compared to (top) the entropy of the loss landscape (as defined in Sec. 3.2) or the forces RMSE on all test sets of the 3BPA dataset. The horizontal line, box, and whiskers in **b** depict the median, interquartile range, and points within 150% of the interquartile range of the distribution. Isolated diamonds are points outside of this range.
For the MACE study, we also analyzed the relationship between the body order of the models (column 4) and their stability in production MD simulations. In this case, the loss entropy recovers the trend in MD stability better than the forces RMSE (Fig. 4c, column 4). Interestingly, despite the higher errors of the \(v=1,L=3\) model compared to its \(v=2,L=3\) counterpart, the former still exhibits a higher average time to failure. This observation can be traced back to the flatter energy LL (Fig. 4a, column 4), as also seen in the case of NequIP.
Finally, the models were compared according to the symmetry order of the edge features (column 3). In general, increasing the value of \(L\) from \(L=0\) to \(L=2\) led to more stable simulations, although the model with \(L=3\) shows a degradation of the production behavior. Nevertheless, the LLs do not follow the expected trends explained so far in this work. We believe the filter normalization technique discussed in Sec. 3.1 cannot provide comparable LLs upon changes of \(L\) in MACE. As this parameter affects the tensor order of some model components [25], the number of parameters in these particular filters grows with \(\mathcal{O}(N^{L})\), which may be affecting the filter normalization. In contrast, adding message-passing layers with similar numbers of parameters, as in NequIP, allows the filter normalization technique to create comparable loss landscapes. This result shows some limitations of the loss entropy when comparing models with different architectures or hyperparameters, and may require different strategies for computing them in the future.
### Loss landscapes and data efficiency
As extrapolation power and data efficiency are conceptually related, a natural extension of Secs. 4.3 and 4.4 is to verify whether the loss landscape entropy can also be used to predict the data efficiency of a model during training. As with many other applications of deep learning, generating training data is often one of the most costly steps in the development process. Especially considering the "denoising" effects of NNIPs demonstrated in Sec. 4.1, identifying training techniques and model architectures that lead to more data-efficient training has the potential to greatly reduce the computational cost of building new NNIPs.
To test whether loss entropy can predict the data efficiency of models, we computed the learning curves for NequIP and MACE models with the hyperparameters selected in Sections 4.3 and 4.4. This is achieved by training models with the specified training parameters and architectures on the same subsets of the 3BPA training set using only 25, 125, 250, or 500 samples from the 300 K split (Tables S3 and S6). Following previous work [23], the slopes of the learning curves were then computed by fitting the line \(\log n=m\log\varepsilon+b\) to the number of training samples \(n\) and the force RMSEs \(\varepsilon\) calculated using all test splits (300, 600, and 1200 K), and comparing the slopes \(m\). Consistent with the results from Sections 4.3 and 4.4 (combined in Figures 5a,b), the high-temperature force RMSEs (i.e., the points from the learning curves using \(n=500\)) are good predictors of the learning curve slope. In addition, Figs. 5c,d show that the loss entropy also has a high correlation with the slope of the learning curve despite being computed using only the training set. As discussed in Sec. 4.4, these correlations are less explicit for the case of MACE models, and thus show that none of the metrics is truly universal for predicting stability in simulations and learning curve slopes. Nevertheless, these results further demonstrate that loss landscapes provide information on the extrapolation behavior of NNIPs despite being derived only from the training set.
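The learning-curve slope described above amounts to a log-log fit:

```python
import numpy as np

def learning_curve_slope(n_train, force_rmse):
    """Fit log n = m log(eps) + b and return the slope m, following the
    convention in the text; n_train and force_rmse are matched sequences,
    e.g. n_train = [25, 125, 250, 500]."""
    m, _b = np.polyfit(np.log(force_rmse), np.log(n_train), 1)
    return m
```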
## 5 Conclusion
In this work, we motivate the need for additional metrics beyond RMSE by showing that in-domain errors fail to predict model stability in the high-accuracy regime despite large-scale trends in extrapolation behavior and NNIP robustness to noise. We propose the use of loss entropy as a metric for quantifying the extrapolation behavior, and demonstrate that it correlates well with out-of-distribution error and stability in production simulations. Using large studies with NequIP and MACE, we show how models containing flatter loss landscapes exhibit better extrapolation behavior, and how different training parameters can be used to achieve them. For example, rescaling, AMSGrad, and SWA were shown to increase the loss entropy and MD stability, and may be important tools when training NNIP models. Similarly, models with similar test error can be distinguished by their energy loss landscape, with models displaying broader minima performing better in extrapolation tasks. Future studies can address shortcomings of loss landscape visualizations in NNIPs by analyzing how filter-normalization can be made more suitable for NNIP architectures, or how to better isolate the effects of key architectural changes in the loss. Nevertheless, these results can better inform the development of new model architectures and optimization strategies in NNIPs, facilitating their use in general materials simulation.
## Code Availability
The package ip_explorer and the additional code/data used to reproduce the results of this paper will be made available after internal review at LLNL. This preprint will be updated with the relevant links. Loss landscape calculations were performed using the code from the public package [https://github.com/marcellodebernardi/loss-landscapes](https://github.com/marcellodebernardi/loss-landscapes). Training codes for NequIP and MACE are available from their original authors as described in Appendix A.
## Data Availability
The datasets used to train the models in this work were obtained directly from their original sources. For convenience, we provide the links to each source here: [https://github.com/davkovacs/BOTNet-datasets/tree/main/dataset_3BPA](https://github.com/davkovacs/BOTNet-datasets/tree/main/dataset_3BPA) (3BPA) and [https://github.com/atomistic-ml/ani-al](https://github.com/atomistic-ml/ani-al) (ANI-Al).
## Author Contributions
**Joshua Vita:** Methodology, Software, Validation, Investigation, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization. **Daniel Schwalbe-Koda:** Conceptualization, Methodology, Software, Validation, Investigation, Data Curation, Writing - Original Draft, Writing - Review & Editing, Visualization, Supervision.
## Acknowledgments
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, funded by the Laboratory Directed Research and Development (LDRD) Program at LLNL under project tracking code 22-ERD-055. The authors thank Vincenzo Lordi, the Quantum Simulations Group at LLNL, and Simon Batzner for the discussions. We also thank Ilyes Batatia for the support with the MACE code.
Manuscript released as LLNL-JRNL-845001-DRAFT.
Figure 5: Loss landscape entropy and high-temperature force test RMSEs compared to **a,b** MD time-to-failure or **c,d** learning curve slope for **a,c** NequIP and **b,d** MACE models. Learning curve slopes were computed as described in Sec. 4.5. Colors for points in **a** and **b** were chosen to match the colors in Fig. 3 and Fig. 4 respectively. Lines in **c** and **d** represent linear fits to the data. |
2303.15919 | Fully Hyperbolic Convolutional Neural Networks for Computer Vision | Real-world visual data exhibit intrinsic hierarchical structures that can be represented effectively in hyperbolic spaces. Hyperbolic neural networks (HNNs) are a promising approach for learning feature representations in such spaces. However, current HNNs in computer vision rely on Euclidean backbones and only project features to the hyperbolic space in the task heads, limiting their ability to fully leverage the benefits of hyperbolic geometry. To address this, we present HCNN, a fully hyperbolic convolutional neural network (CNN) designed for computer vision tasks. Based on the Lorentz model, we generalize fundamental components of CNNs and propose novel formulations of the convolutional layer, batch normalization, and multinomial logistic regression. Experiments on standard vision tasks demonstrate the promising performance of our HCNN framework in both hybrid and fully hyperbolic settings. Overall, we believe our contributions provide a foundation for developing more powerful HNNs that can better represent complex structures found in image data. Our code is publicly available at https://github.com/kschwethelm/HyperbolicCV. | Ahmad Bdeir, Kristian Schwethelm, Niels Landwehr | 2023-03-28T12:20:52Z | http://arxiv.org/abs/2303.15919v3 |
# Fully Hyperbolic Convolutional Neural Networks for Computer Vision
###### Abstract
Real-world visual data exhibit intrinsic hierarchical structures that can be represented effectively in hyperbolic spaces. Hyperbolic neural networks (HNNs) are a promising approach for learning feature representations in such spaces. However, current HNNs in computer vision rely on Euclidean backbones and only project features to the hyperbolic space in the task heads, limiting their ability to fully leverage the benefits of hyperbolic geometry. To address this, we present HCNN, the first fully hyperbolic convolutional neural network (CNN) designed for computer vision tasks. Based on the Lorentz model, we generalize fundamental components of CNNs and propose novel formulations of the convolutional layer, batch normalization, and multinomial logistic regression. Experimentation on standard vision tasks demonstrates the superiority of our HCNN framework and the Lorentz model in both hybrid and fully hyperbolic settings. Overall, we believe our contributions provide a foundation for developing more powerful HNNs that can better represent complex structures found in image data. Our code is publicly available at [https://github.com/kschwethelm/HyperbolicCV](https://github.com/kschwethelm/HyperbolicCV).
## 1 Introduction
Representation learning is a fundamental aspect of deep neural networks, as obtaining an optimal representation of the input data is crucial. While Euclidean geometry has been the traditional choice for representing data due to its intuitive properties and well-defined vector space, recent research has highlighted the advantages of hyperbolic geometry in describing hierarchical structures found in many datasets. Specifically, the volume of hyperbolic space grows exponentially with respect to the radius, which matches the exponential growth of the number of nodes with depth in tree-like hierarchies, preventing spatial distortion and information loss [47]. In addition, research has found that even the natural spatial representations in the human brain exhibit a hyperbolic geometry [53].
Leveraging this better representative capacity, hyperbolic neural networks (HNNs) have demonstrated superior performance compared to Euclidean models in many natural language processing (NLP) and graph embedding tasks [42]. However, hierarchical structures are not limited to textual and graph-structured data but can also be found in images. The notion of hierarchy in images is particularly established through the concept of part-whole relationships within object representations and classes. In addition, Khrulkov et al. [22] have found high hyperbolicity in image datasets, where the hyperbolicity measure represents the degree of innate hierarchy between semantic embeddings and can highlight the potential benefits of using HNNs for representation learning.
In light of these findings, recent works have begun integrating hyperbolic geometry into vision architectures. Specifically, they rely on the Poincare ball and the Lorentz model as descriptors of hyperbolic space and formalize hyperbolic translations of neural network layers. This is challenging due to ill-defined hyperbolic analogs of, e.g., addition, multiplication, and statistical measures.
Currently, most HNN components are only available in the Poincare ball as it supports the gyrovector space with basic vector operations. However, due to its hard numerical constraint, the Poincare ball is more susceptible to numerical instability than the Lorentz model [36], which motivates introducing the missing layers for the Lorentz model. Moreover, HNNs in computer vision have been limited to hybrid architectures that cannot fully leverage the advantages of hyperbolic geometry as they rely on Euclidean backbones to learn hyperbolic representations. Until now, fully hyperbolic architectures are missing in computer vision, although prevalent in NLP and graph applications [42].
In this work, we present HCNN, the first fully hyperbolic framework for vision tasks. We generalize the ubiquitous convolutional neural network (CNN) architecture to the Lorentz model and present novel hyperbolic formulations of the convolutional layer, batch normalization, and multinomial logistic regression. Our methodology is general, and we show that our components can be easily integrated into existing architectures. We demonstrate this through experiments based on direct translations of Euclidean architectures. Our contributions are three-fold:
1. We propose the first fully hyperbolic CNN for image data, dubbed HCNN, introducing the fully hyperbolic setting in computer vision.
2. We provide missing Lorentzian formulations of the convolutional layer, batch normalization, and multinomial logistic regression, further filling the gap between HNNs in the Lorentz model and the Poincare ball.
3. We empirically demonstrate the superior performance of HCNNs in experiments on standard vision tasks, including image classification and generation.
## 2 Related work
**Hyperbolic image embeddings.** Previous research on HNNs in computer vision has mainly focused on combining Euclidean backbones and hyperbolic embeddings. This approach involves projecting Euclidean embeddings onto the hyperbolic space in the task heads and designing task-related objective functions based on hyperbolic geometry. Such simple hybrid architectures have been proven effective in various vision tasks like recognition [16; 22; 31], segmentation [1; 20], reconstruction/generation [35; 38; 40; 44], and metric learning [10; 51]. However, Guo et al. [16] have shown that learning a mixture of Euclidean and hyperbolic features can exacerbate gradient vanishing, and it remains unclear whether these hybrid models can fully exploit the properties of hyperbolic geometry. In contrast, our HCNN naturally learns latent hyperbolic feature representations in every layer, mitigating these issues. We also forgo the typically used Poincare ball in favor of the Lorentz model, as it offers better stability and optimization properties [36].
**Fully hyperbolic neural networks.** Designing fully hyperbolic HNNs requires generalizing all Euclidean network components to hyperbolic geometry. Notably, Ganea et al. [12] and Shimizu et al. [49] utilized the Poincare ball and the gyrovector space to generalize various layers, including fully-connected, convolutional, and attention layers, as well as operations like split, concatenation, and multinomial logistic regression (MLR). Researchers have also designed components in the Lorentz model [5; 11; 39; 44], but crucial components for vision, like the standard convolutional layer and the MLR classifier, are still missing. Among the hyperbolic layer definitions, fully hyperbolic neural networks have been built for various tasks in NLP and graph applications [42]. However, no fully
Figure 1: In contrast to hybrid HNNs that use a Euclidean CNN for feature extraction, our HCNN learns features in hyperbolic spaces in every layer, fully leveraging the benefits of hyperbolic geometry. This leads to better image representations and performance.
hyperbolic architecture has yet been utilized in computer vision. Our work provides formulations for missing components in the Lorentz model, allowing for the first fully hyperbolic vision CNNs.
**Normalization in HNNs.** There are few attempts at translating standard normalization layers to the hyperbolic setting. To the best of our knowledge, there is only a single viable normalization layer for HNNs, i.e., the general Riemannian batch normalization [33]. However, this method is not ideal due to the slow iterative computation of the Frechet mean and the arbitrary re-scaling operation that is not based on hyperbolic geometry. In this work, we propose an efficient batch normalization algorithm founded in the Lorentz model, which utilizes the Lorentzian centroid [27] and a mathematically motivated re-scaling operation.
**Numerical stability of HNNs.** The exponential growth of the Lorentz model's volume with respect to the radius can introduce numerical instability and rounding errors in floating-point arithmetic. This requires many works to rely on 64-bit precision at the cost of higher memory and runtime requirements. To mitigate this, researchers have introduced feature clipping and Euclidean reparameterizations [16; 35; 36]. We adopt these approaches in HCNN, allowing us to run under 32-bit floating point arithmetic, reducing the computational cost.
## 3 Background
This section summarizes the mathematical background of hyperbolic geometry [3; 45]. The \(n\)-dimensional hyperbolic space \(\mathbb{H}_{K}^{n}\) is a smooth Riemannian manifold \((\mathcal{M}^{n},\mathfrak{g}_{\boldsymbol{x}}^{K})\) with constant negative curvature \(K<0\), where \(\mathcal{M}^{n}\) and \(\mathfrak{g}_{\boldsymbol{x}}^{K}\) represent the manifold and the Riemannian metric, respectively. There are multiple isometrically equivalent models of hyperbolic geometry. We employ the Lorentz model because of its numerical stability and its simple exponential/logarithmic maps and distance functions. Additionally, we use the Poincare ball for baseline implementations. Both hyperbolic models provide closed-form formulae for manifold operations, including distance measures, exponential/logarithmic maps, and parallel transportation. They are detailed in Appendix A.
Lorentz modelThe \(n\)-dimensional Lorentz model \(\mathbb{L}_{K}^{n}=(\mathcal{L}^{n},\mathfrak{g}_{\boldsymbol{x}}^{K})\) models hyperbolic geometry on the upper sheet of a two-sheeted hyperboloid \(\mathcal{L}^{n}\), with origin \(\overline{\boldsymbol{0}}=[\sqrt{-1/K},0,\cdots,0]^{T}\) and embedded in \((n+1)\)-dimensional Minkowski space (see Figure 2). Based on the Riemannian metric \(\mathfrak{g}_{\boldsymbol{x}}^{K}=\mathrm{diag}(-1,1,\ldots,1)\), the manifold is defined as
\[\mathcal{L}^{n}:=\{\boldsymbol{x}\in\mathbb{R}^{n+1}\mid\langle\boldsymbol{x}, \boldsymbol{x}\rangle_{\mathcal{L}}=\frac{1}{K},\;x_{t}>0\}, \tag{1}\]
with the Lorentzian inner product
\[\langle\boldsymbol{x},\boldsymbol{y}\rangle_{\mathcal{L}}:=-x_{t}y_{t}+ \boldsymbol{x}_{s}^{T}\boldsymbol{y}_{s}=\boldsymbol{x}^{T}\mathrm{diag}(-1, 1,\cdots,1)\boldsymbol{y}. \tag{2}\]
When describing points in the Lorentz model, we inherit the terminology of special relativity and call the first dimension the _time component_\(x_{t}\) and the remaining dimensions the _space component_\(\boldsymbol{x}_{s}\), such that \(\boldsymbol{x}\in\mathbb{L}_{K}^{n}=[x_{t},\boldsymbol{x}_{s}]^{T}\) and \(x_{t}=\sqrt{||\boldsymbol{x}_{s}||^{2}-1/K}\).
**Poincare ball.** The \(n\)-dimensional Poincare ball \(\mathbb{B}_{K}^{n}=(\mathcal{B}^{n},\mathfrak{g}_{\boldsymbol{x}}^{K})\) is defined by \(\mathcal{B}^{n}=\{\boldsymbol{x}\in\mathbb{R}^{n}\mid-K||\boldsymbol{x}||^{2}<1\}\) and the Riemannian metric \(\mathfrak{g}_{\boldsymbol{x}}^{K}=(\lambda_{\boldsymbol{x}}^{K})^{2}\mathbf{I}_{n}\), where \(\lambda_{\boldsymbol{x}}^{K}=2(1+K||\boldsymbol{x}||^{2})^{-1}\). It describes the hyperbolic space by an open ball of radius \(\sqrt{-1/K}\), see Figure 2.
## 4 Fully hyperbolic CNN (HCNN)
Our HCNN framework aims to give way to building vision models that can fully leverage the advantages of hyperbolic geometry by learning features in hyperbolic spaces. For this, we generalize Euclidean CNN components to the Lorentz model, yielding one-to-one replacements that can be integrated into existing architectures. In the following, we first define the cornerstone of HCNNs, i.e., the Lorentz convolutional layer, including its transposed variant. Then, we introduce the Lorentz batch normalization algorithm and the MLR classifier. Finally, we generalize the residual connection and non-linear activation.
Figure 2: Comparison of Lorentz and Poincaré model.
### Lorentz convolutional layer
**Hyperbolic feature maps.** The convolutional layer applies vector operations to an input feature map containing the activations of the previous layer. In Euclidean space, arbitrary numerical values can be combined to form a vector. Therefore, it is not required to strictly determine which feature map dimension holds feature vectors. However, in the Lorentz model, not all possible value combinations represent a point that can be processed with hyperbolic operations (\(\mathbb{L}_{K}^{n}\subset\mathbb{R}^{n+1}\)).
We propose using channel-last feature map representations throughout HCNNs and adding the Lorentz model's time component as an additional channel dimension. This defines a hyperbolic feature map as an ordered set of \(n\)-dimensional hyperbolic vectors, where every spatial position contains a vector that can be combined with its neighbors. Additionally, it offers a nice interpretation where an image is an ordered set of color vectors, each describing a pixel.
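As a sketch of this representation (an illustration, not the reference implementation), a channel-last Euclidean feature map can be lifted onto the hyperboloid by prepending the time component, and the Lorentzian inner product gives a quick on-manifold check:

```python
import torch

def add_time_component(x_space, K=-1.0):
    """Lift channel-last features onto the Lorentz model.

    x_space: (..., H, W, n) space components; returns (..., H, W, n+1)
    with time component x_t = sqrt(||x_s||^2 - 1/K) prepended, so every
    spatial position holds a valid point of L^n_K.
    """
    x_t = torch.sqrt(x_space.pow(2).sum(dim=-1, keepdim=True) - 1.0 / K)
    return torch.cat([x_t, x_space], dim=-1)

def lorentz_inner(x, y):
    """Lorentzian inner product; <x, x>_L should equal 1/K on-manifold."""
    return -x[..., :1] * y[..., :1] + (x[..., 1:] * y[..., 1:]).sum(-1, keepdim=True)
```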
**Formalization of the convolutional layer.** We define the convolutional layer as an affine transformation between a linearized kernel and a concatenation of the values in its receptive field, following Shimizu et al. [49]. Then, we generalize this definition by replacing the Euclidean operators with their hyperbolic counterparts in the Lorentz model.
Given a hyperbolic input feature map \(\mathbf{x}=\{\mathbf{x}_{h,w}\in\mathbb{L}_{K}^{n}\}_{h,w=1}^{H,W}\) as an ordered set of \(n\)-dimensional hyperbolic feature vectors, each describing image pixels, the features within the receptive field of the kernel \(\mathbf{K}\in\mathbb{R}^{m\times n\times\hat{H}\times\hat{W}}\) are \(\{\mathbf{x}_{h^{\prime}+\delta\hat{h},w^{\prime}+\delta\hat{w}}\in\mathbb{L}_{K} ^{n}\}_{\hat{h},\hat{w}=1}^{\hat{H},\hat{W}}\), where \((h^{\prime},w^{\prime})\) denotes the starting position and \(\delta\) is the stride parameter. Now, we define the Lorentz convolutional layer as
\[\mathbf{y}_{h,w}=\text{LFC}(\text{HCat}(\{\mathbf{x}_{h^{\prime}+\delta\hat{h},w^{ \prime}+\delta\hat{w}}\in\mathbb{L}_{K}^{n}\}_{\hat{h},\hat{w}=1}^{\hat{H}, \hat{W}})), \tag{3}\]
where HCat denotes the concatenation of hyperbolic vectors, and LFC denotes a Lorentz fully-connected layer performing the affine transformation and parameterizing the kernel and bias, respectively. Additionally, we implement padding using origin vectors, the analog of zero vectors in hyperbolic space.
Finally, our formulation generalizes arbitrary dimensional convolutional layers with little modification to the 2-dimensional case presented here. However, it remains essential to predetermine the structure of hyperbolic feature maps, e.g., using our channel-last format.
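A minimal sketch of this recipe is given below. The LFC form (an affine map followed by recomputing the time component) and the simplified stand-in for HCat (concatenating only the space components, which loses no information here because \(x_{t}\) is determined by \(\mathbf{x}_{s}\)) are assumptions; the paper's exact layers may differ.

```python
import torch

def lorentz_fc(x, weight, K=-1.0):
    """Sketch of an LFC layer: affine map on the input, then recompute the
    time component so the output lies on the hyperboloid. The exact LFC of
    the paper may include a bias and normalization terms."""
    z = x @ weight.T                                   # (..., m) space part
    t = torch.sqrt(z.pow(2).sum(-1, keepdim=True) - 1.0 / K)
    return torch.cat([t, z], dim=-1)                   # (..., m + 1)

def lorentz_conv2d(x, weight, kernel=3, stride=1, K=-1.0):
    """x: (B, H, W, n+1) channel-last hyperbolic feature map;
    weight: (m, n * kernel * kernel)."""
    B, H, W, C = x.shape
    cols = torch.nn.functional.unfold(
        x.permute(0, 3, 1, 2), kernel, stride=stride)  # (B, C*k*k, L)
    L = cols.shape[-1]
    cols = cols.transpose(1, 2).reshape(B, L, C, kernel * kernel)
    space = cols[:, :, 1:, :].reshape(B, L, -1)        # drop time channels
    out = lorentz_fc(space, weight, K)                 # (B, L, m + 1)
    H_out = (H - kernel) // stride + 1
    return out.reshape(B, H_out, -1, out.shape[-1])    # (B, H', W', m+1)
```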
**Extension to the transposed setting.** The transposed convolutional layer is usually used in encoder-decoder architectures for up-sampling. Given a hyperbolic convolutional layer, it is straightforward to generalize to hyperbolic space. A convolutional layer carries out a transposed convolution when the correct local connectivity is established by inserting zeros at certain positions. Specifically, when stride \(s>1\), then \(s-1\) zero vectors are inserted between the features. In addition, the input feature map is always implicitly padded on each side. We refer to Dumoulin and Visin [9] for illustrations. Under this relationship, the Lorentz transposed convolutional layer is a Lorentz convolutional layer with changed connectivity through origin padding.
### Lorentz batch normalization
Given a batch \(\mathcal{B}\) of \(m\) features \(\mathbf{x}_{i}\), the traditional batch normalization algorithm [21] calculates the mean \(\mathbf{\mu}_{\mathcal{B}}=\frac{1}{m}\sum_{i=1}^{m}\mathbf{x}_{i}\) and variance \(\mathbf{\sigma}_{\mathcal{B}}^{2}=\frac{1}{m}\sum_{i=1}^{m}(\mathbf{x}_{i}-\mathbf{\mu}_{ \mathcal{B}})^{2}\) across the batch dimension. Then, the features are _re-scaled_ and _re-centered_ using a parameterized variance \(\mathbf{\gamma}\) and mean \(\mathbf{\beta}\) as follows
\[\text{BN}(\mathbf{x}_{i})=\mathbf{\gamma}\odot\frac{\mathbf{x}_{i}-\mathbf{\mu}_{\mathcal{B}}} {\sqrt{\mathbf{\sigma}_{\mathcal{B}}^{2}+\epsilon}}+\mathbf{\beta}. \tag{4}\]
At test time, running estimates approximate the batch statistics. They are calculated iteratively during training: \(\mathbf{\mu}_{t}=(1-\eta)\mathbf{\mu}_{t-1}+\eta\mathbf{\mu}_{\mathcal{B}}\) and \(\mathbf{\sigma}_{t}^{2}=(1-\eta)\mathbf{\sigma}_{t-1}^{2}+\eta\mathbf{\sigma}_{\mathcal{B}} ^{2}\), with \(\eta\) and \(t\) denoting momentum and the current iteration, respectively. We generalize batch normalization to the Lorentz model using the Lorentzian centroid and the parallel transport operation for re-centering, and the Frechet variance and straight geodesics at the origin's tangent space for re-scaling.
**Re-centering.** To re-center hyperbolic features, it is necessary to compute a notion of mean. Usually, the Frechet mean is used [33], which minimizes the expected squared distance between a set of points in a metric space [43]. Generally, the Frechet mean must be solved iteratively, massively slowing down training. To this end, we propose to use the centroid with respect to the squared Lorentzian distance, which can be calculated efficiently in closed form [27]. The weighted Lorentzian centroid, which solves \(\min_{\mathbf{\mu}\in\mathbb{L}_{K}^{n}}\sum_{i=1}^{m}\nu_{i}d_{\mathcal{L}}^{2}(\mathbf{x}_{i},\mathbf{\mu})\), with \(\mathbf{x}_{i}\in\mathbb{L}_{K}^{n}\) and \(\nu_{i}\geq 0,\sum_{i=1}^{m}\nu_{i}>0\), is given by
\[\mathbf{\mu}=\frac{\sum_{i=1}^{m}\nu_{i}\mathbf{x}_{i}}{\sqrt{-K}\,||\, \sum_{i=1}^{m}\nu_{i}\mathbf{x}_{i}||_{\mathcal{L}}}. \tag{5}\]
In batch normalization, the mean is not weighted, which gives \(\nu_{i}=\frac{1}{m}\). Now, we shift the features from the batch's mean \(\mathbf{\mu}_{\mathcal{B}}\) to the parameterized mean \(\mathbf{\beta}\) using the parallel transport operation \(\mathrm{PT}_{\mathbf{\mu}_{\mathcal{B}}\rightarrow\mathbf{\beta}}^{K}(\mathbf{x})\). Parallel transport does not change the variance, as it is defined to preserve the distance between all points. Finally, the running estimate is updated iteratively using the weighted centroid with \(\nu_{1}=(1-\eta)\) and \(\nu_{2}=\eta\).
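Eq. 5 translates directly into code; a minimal sketch (uniform weights by default) is:

```python
import torch

def lorentz_centroid(x, weights=None, K=-1.0):
    """Weighted Lorentzian centroid (Eq. 5) of points x with shape (m, n+1)."""
    if weights is None:
        weights = torch.full((x.shape[0],), 1.0 / x.shape[0])
    s = (weights[:, None] * x).sum(dim=0)              # sum_i nu_i x_i
    inner = -s[0] * s[0] + s[1:].pow(2).sum()          # <s, s>_L
    lorentz_norm = torch.sqrt(inner.abs().clamp_min(1e-8))
    return s / (((-K) ** 0.5) * lorentz_norm)
```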
**Re-scaling.** For re-scaling, we rely on the Frechet variance \(\sigma^{2}\in\mathbb{R}^{+}\), defined as the expected squared Lorentzian distance between a point \(\mathbf{x}_{i}\) and the mean \(\mathbf{\mu}\), and given by \(\sigma^{2}=\frac{1}{m}\sum_{i=1}^{m}d_{\mathcal{L}}^{2}(\mathbf{x}_{i},\mathbf{\mu})\)[24]. In order to re-scale the batch, features must be moved along the geodesics connecting them to their centroid, which is generally infeasible to compute. However, geodesics intersecting the origin are very simple, as they can be represented by straight lines in tangent space \(\mathcal{T}_{\overline{\mathbf{0}}}\mathbb{L}_{K}^{n}\). This is reflected by the equality between the distance of a point to the origin and the length of its corresponding tangent vector (\(d_{\mathcal{L}}(\mathbf{x},\overline{\mathbf{0}})=||\log^{K}_{\overline{\mathbf{0}}}(\mathbf{x})||\)). Using this property, we propose to re-scale features by first parallel transporting them towards the origin \(\mathrm{PT}_{\mathbf{\mu}_{\mathcal{B}}\rightarrow\overline{\mathbf{0}}}^{K}(\mathbf{x})\), making the origin the new centroid and straightening the relevant geodesics. Then, a simple multiplication re-scales the features in tangent space. Finally, parallel transporting to \(\mathbf{\beta}\in\mathbb{L}_{K}^{n}\) completes the algorithm and yields the normalized features. The final algorithm of Lorentz batch normalization can be formalized as

\[\text{LBN}(\mathbf{x})=\exp_{\mathbf{\beta}}^{K}\left(\mathrm{PT}_{\overline{\mathbf{0}}\rightarrow\mathbf{\beta}}^{K}\left(\gamma\cdot\frac{\mathrm{PT}_{\mathbf{\mu}_{\mathcal{B}}\rightarrow\overline{\mathbf{0}}}^{K}\left(\log^{K}_{\mathbf{\mu}_{\mathcal{B}}}(\mathbf{x})\right)}{\sqrt{\sigma^{2}_{\mathcal{B}}}+\epsilon}\right)\right). \tag{6}\]
In our definition of hyperbolic feature maps, the centroid must be computed across multiple dimensions. For example, given a feature map \(\{\mathbf{x}_{h,w}\in\mathbb{L}_{K}^{n}\}_{h,w=1}^{H,W}\) for all \(m\) instances, we first compute the centroid of all hyperbolic vectors per instance and then the centroid across the batch dimension, i.e., the centroid of instance centroids. Similarly, the Frechet variance is computed for separate instances and then averaged across the batch dimension as it is in \(\mathbb{R}^{+}\). Furthermore, the running estimate of the Frechet variance is computed using a standard Euclidean running average.
### Lorentz MLR classifier
In this section, we consider the problem of classifying instances that are represented in the Lorentz model. A standard method for multi-class classification is multinomial logistic regression (MLR). Inspired by the generalization of MLR to the Poincare ball [12; 49] based on the distance to margin hyperplanes, we derive a formulation in the Lorentz model.
**Hyperplane in the Lorentz model.** Analogous to Euclidean space, hyperbolic hyperplanes split the manifold into two half-spaces, which can then be used to separate instances into classes. The hyperplane in the Lorentz model is defined by a geodesic that results from the intersection of an \(n\)-dimensional hyperplane with the hyperboloid in the ambient space \(\mathbb{R}^{n+1}\)[6]. Specifically, for \(\mathbf{p}\in\mathbb{L}_{K}^{n}\) and \(\mathbf{w}\in\mathcal{T}_{\mathbf{p}}\mathbb{L}_{K}^{n}\), the hyperplane passing through \(\mathbf{p}\) and perpendicular to \(\mathbf{w}\) is given by
\[H_{\mathbf{w},\mathbf{p}}=\{\mathbf{x}\in\mathbb{L}_{K}^{n}\mid\langle\mathbf{w},\mathbf{x} \rangle_{\mathcal{L}}=0\}. \tag{7}\]
This formulation comes with the non-convex optimization condition \(\langle\mathbf{w},\mathbf{w}\rangle_{\mathcal{L}}>0\), which is undesirable in machine learning. To eliminate this condition, we use the Euclidean reparameterization
of Mishne et al. [36], which we extend to include the curvature parameter \(K\) in Appendix B.1. In short, \(\mathbf{w}\) is parameterized by a vector \(\mathbf{\overline{z}}\in\mathcal{T}_{\mathbf{\overline{0}}}\mathbb{L}_{K}^{n}=[0,a\mathbf{ z}/||\mathbf{z}||]\), where \(a\in\mathbb{R}\) and \(\mathbf{z}\in\mathbb{R}^{n}\). As \(\mathbf{w}\in\mathcal{T}_{\mathbf{p}}\mathbb{L}_{K}^{n}\), \(\mathbf{\overline{z}}\) is parallel transported to \(\mathbf{p}\), which gives
\[\mathbf{w}:=\mathrm{PT}_{\mathbf{\overline{0}}\rightarrow\mathbf{p}}^{K}(\mathbf{\overline{z} })=[\sinh(\sqrt{-K}a)||\mathbf{z}||,\cosh(\sqrt{-K}a)\mathbf{z}]. \tag{8}\]
Inserting Eq. 8 into Eq. 7, the formula of the Lorentz hyperplane becomes
\[\tilde{H}_{\mathbf{z},a}=\{\mathbf{x}\in\mathbb{L}_{K}^{n}\ |\ \cosh(\sqrt{-K}a) \langle\mathbf{z},\mathbf{x}_{s}\rangle-\sinh(\sqrt{-K}a)\ ||\mathbf{z}||\ x_{t}=0\}, \tag{9}\]
where \(a\) and \(\mathbf{z}\) represent the distance and orientation to the origin, respectively.
Finally, we need the distance to the hyperplane to quantify the model's confidence. It is formulated by the following theorem, proven in Appendix B.2.
**Theorem 1**: _Given \(a\in\mathbb{R}\) and \(\mathbf{z}\in\mathbb{R}^{n}\), the minimum hyperbolic distance from a point \(\mathbf{x}\in\mathbb{L}_{K}^{n}\) to the hyperplane \(\tilde{H}_{\mathbf{z},a}\) defined in Eq. 9 is given by_
\[d_{\mathcal{L}}(\mathbf{x},\tilde{H}_{\mathbf{z},a})=\frac{1}{\sqrt{-K}}\left|\sinh^{ -1}\left(\sqrt{-K}\frac{\cosh(\sqrt{-K}a)\langle\mathbf{z},\mathbf{x}_{s}\rangle- \sinh(\sqrt{-K}a)\ ||\mathbf{z}||\ x_{t}}{\sqrt{||\cosh(\sqrt{-K}a)\mathbf{z}||^{2}-(\sinh(\sqrt{-K}a) ||\mathbf{z}||)^{2}}}\right)\right|. \tag{10}\]
**MLR in the Lorentz model.** Lebanon and Lafferty [29] formulated the logits of the Euclidean MLR classifier using the distance from instances to hyperplanes describing the class regions. Specifically, given input \(\mathbf{x}\in\mathbb{R}^{n}\) and \(C\) classes, the output probability of class \(c\in\{1,...,C\}\) can be expressed as
\[p(y=c\ |\ \mathbf{x})\propto\exp(v_{\mathbf{w}_{c}}(\mathbf{x})),\ \ v_{\mathbf{w}_{c}}(\mathbf{x})= \text{sign}(\langle\mathbf{w}_{c},\mathbf{x}\rangle)||\mathbf{w}_{c}||d(\mathbf{x},H_{\mathbf{w}_{ c}}),\ \ \mathbf{w}_{c}\in\mathbb{R}^{n}, \tag{11}\]
where \(H_{\mathbf{w}_{c}}\) is the decision hyperplane of class \(c\).
We define the Lorentz MLR without loss of generality by inserting the Lorentzian counterparts into Eq. 11. This yields logits given by the following theorem, proven in Appendix B.3.
**Theorem 2**: _Given parameters \(a_{c}\in\mathbb{R}\) and \(\mathbf{z}_{c}\in\mathbb{R}^{n}\), the Lorentz MLR's output logit corresponding to class \(c\) and input \(\mathbf{x}\in\mathbb{L}_{K}^{n}\) is given by_
\[v_{\mathbf{z}_{c},a_{c}}(\mathbf{x})=\frac{1}{\sqrt{-K}}\ \mathrm{sign}(\alpha)\beta \left|\sinh^{-1}\left(\sqrt{-K}\frac{\alpha}{\beta}\right)\right|, \tag{12}\]
_with_
\[\alpha=\cosh(\sqrt{-K}a_{c})\langle\mathbf{z}_{c},\mathbf{x}_{s}\rangle-\sinh(\sqrt{-K}a_{c})\ ||\mathbf{z}_{c}||\ x_{t},\] \[\beta=\sqrt{||\cosh(\sqrt{-K}a_{c})\mathbf{z}_{c}||^{2}-(\sinh(\sqrt{-K}a_{c})||\mathbf{z}_{c}||)^{2}}.\]
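For concreteness, a vectorized sketch of these logits follows; it assumes batched hyperboloid points with the time component stored first, as in Eq. 8.

```python
import torch

def lorentz_mlr_logits(x, z, a, K=-1.0):
    # Sketch of Eq. 12 for inputs x: (B, n+1), class parameters z: (C, n), a: (C,).
    sqrt_mK = (-K) ** 0.5
    x_t, x_s = x[:, :1], x[:, 1:]                 # time / space components
    zx = x_s @ z.t()                              # (B, C): <z_c, x_s> per class
    z_norm = z.norm(dim=-1)                       # (C,)
    ch, sh = torch.cosh(sqrt_mK * a), torch.sinh(sqrt_mK * a)
    alpha = ch * zx - sh * z_norm * x_t           # (B, C)
    beta = torch.sqrt((ch * z_norm) ** 2 - (sh * z_norm) ** 2)  # equals ||z_c||
    return (1.0 / sqrt_mK) * torch.sign(alpha) * beta * torch.abs(
        torch.asinh(sqrt_mK * alpha / beta))
```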
### Lorentz residual connection and activation
**Residual connection** The residual connection is a crucial component when designing deep CNNs. As vector addition is ill-defined in the Lorentz model, we add the vectors' space components and concatenate a corresponding time component. This is possible as a point \(\mathbf{x}\in\mathbb{L}_{K}^{n}\) can be defined by an arbitrary space component \(\mathbf{x}_{s}\in\mathbb{R}^{n}\) and a time component \(x_{t}=\sqrt{||\mathbf{x}_{s}||^{2}-1/K}\). Our method is straightforward and provides the best empirical performance among the viable addition methods we implemented, i.e., tangent space addition [39], parallel transport addition [4], Möbius addition (after projecting to the Poincaré ball) [12], and fully-connected layer addition [5].
**Non-linear activation** Prior works use non-linear activation in tangent space [11], which weakens the model's stability due to frequent logarithmic and exponential maps. We propose to simplify the operation for the Lorentz model by applying the activation function to the space component and concatenating a time component. For example, the Lorentz ReLU activation is given by
\[\mathbf{y}=\left[\begin{array}{c}\sqrt{||\text{ReLU}(\mathbf{x}_{s})||^{2}-1/K}\\ \text{ReLU}(\mathbf{x}_{s})\end{array}\right]. \tag{13}\]
This operation can be interpreted similarly to Euclidean activations that break linearity with heuristic mathematical projections.
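A minimal PyTorch sketch of both operations (hypothetical helper names; time component stored first):

```python
import torch

def lorentz_time(x_s, K=-1.0):
    # Recompute the time component so [x_t, x_s] lies on the hyperboloid.
    return torch.sqrt(x_s.pow(2).sum(dim=-1, keepdim=True) - 1.0 / K)

def lorentz_relu(x, K=-1.0):
    # Eq. 13: apply ReLU to the space component, then concatenate a valid time component.
    x_s = torch.relu(x[..., 1:])
    return torch.cat([lorentz_time(x_s, K), x_s], dim=-1)

def lorentz_residual(x, y, K=-1.0):
    # Add the space components and concatenate the corresponding time component.
    x_s = x[..., 1:] + y[..., 1:]
    return torch.cat([lorentz_time(x_s, K), x_s], dim=-1)
```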
## 5 Experiments
We evaluate HCNN models on image classification and generation tasks and compare them against Euclidean and hybrid HNN counterparts. To ensure a fair comparison, in every task, we directly translate a Euclidean baseline to the hyperbolic setting by using hyperbolic modules as one-to-one replacements. All experiments are implemented in PyTorch [41], and we optimize hyperbolic models using adaptive Riemannian optimizers [2] provided by Geoopt [25], with floating-point precision set to 32 bits. We provide detailed experimental configurations in Appendix C and many ablation experiments in Appendix D. To facilitate reproducibility and further exploration, we make the code of our experiments publicly available at [https://github.com/kschwethelm/HyperbolicCV](https://github.com/kschwethelm/HyperbolicCV).
### Image classification
**Experimental setup** In this experiment, we evaluate the performance of HCNN on standard image classification tasks using ResNet-18 [18] and three benchmark datasets: CIFAR-10 [26], CIFAR-100 [26], and Tiny-ImageNet [28]. We compare against the Euclidean network by replacing all components in the ResNet architecture with our proposed Lorentzian modules. Establishing hyperbolic baselines is difficult, as there are currently no fully hyperbolic models for vision tasks, only hybrid models with task-specific output layers that largely do not apply to standard classification [10; 22; 31; 50]. Specifically, current vision HNNs use Euclidean CNNs for embedding images, project the final embeddings onto hyperbolic space, and only apply a hyperbolic output layer. This leaves us with a single viable hyperbolic baseline from the literature: following Atigh et al. [1] and Guo et al. [16], we implement the hybrid approach with the Poincaré MLR classifier [49]. In addition, we utilize our proposed Lorentz MLR to obtain a novel hybrid baseline that uses the Lorentz model instead of the Poincaré ball. Finally, we apply feature clipping to the embeddings of both hybrid models to prevent gradient issues arising from the hybrid architecture [16].
For all models, we adopt DeVries and Taylor's [8] training procedure and hyperparameters, which have been optimized for Euclidean CNNs and yield a strong Euclidean ResNet baseline. For example, this method increases CIFAR-100 performance from 74.84% for the original ResNet-110 [52] to the 77.72% we obtained for the much smaller ResNet-18. To make a rigorous one-to-one comparison, we do not optimize the training procedure and hyperparameters for the hyperbolic models but keep the settings of [8]. This suggests that HNN performance can still be improved.
**Main results** The results in Table 1 show that our fully hyperbolic ResNet achieves the highest accuracy on all datasets, outperforming both Euclidean and hybrid baselines. Additionally, our novel hybrid HNN based on the Lorentz model performs well, outperforming the Euclidean model on two datasets. In contrast, the hybrid Poincaré HNN is inferior to the Euclidean baseline, which is consistent with the results reported by Guo et al. [16]. This suggests that the Lorentz model is better suited for HNNs than the Poincaré ball. Overall, our results highlight the potential of our HCNN components and the Lorentz model in improving image classification tasks.
**Adversarial robustness** Prior works have indicated that HNNs can obtain better adversarial robustness than Euclidean networks. To study this, we employ the models trained on CIFAR-100 and attack them using FGSM [14] and PGD [34] with maximal perturbations of \(\epsilon=0.8/255,1.6/255,3.2/255\). The results in Table 2 show that our HCNN is considerably more robust than the other models, achieving up to 5% higher accuracy. In addition, contrary to Guo et al. [16], we observe that hybrid HNNs are more susceptible to adversarial attacks than Euclidean models.
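For reference, FGSM perturbs each input by a single signed-gradient step on the loss; a minimal sketch (assuming images in \([0,1]\); the exact attack configuration used here is given in Appendix C) is:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    # One signed-gradient ascent step on the cross-entropy loss (FGSM).
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```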
**Low embedding dimensionality** HNNs have been shown to be most effective in lower-dimensional spaces. To this end, we reduce the dimensionality of the final ResNet block and the embeddings and evaluate classification accuracy on CIFAR-100.
The results in Figure 3 verify the effectiveness of hyperbolic spaces with low dimensions, where all HNNs outperform the Euclidean models. However, our HCNN can leverage this advantage best, suggesting that HCNNs offer great opportunities for dimensionality reduction and designing smaller models with fewer parameters. This can be useful, e.g., in mobile deployment.
### Image generation
**Experimental setup** Variational autoencoders (VAEs) [23; 46] have been widely adopted in HNN research to model latent embeddings in hyperbolic spaces [20; 35; 38; 40]. However, these works rely on hybrid architectures and mainly focus on learning strong embeddings rather than targeting image generation. In this experiment, we extend the hyperbolic VAE to the fully hyperbolic setting using our proposed HCNN framework and, for the first time, evaluate its performance on image generation using the standard Frechet Inception Distance (FID) metric [19].
Building on the experimental setting of Ghosh et al. [13], we test vanilla VAEs and assess generative performance on CIFAR-10 [26], CIFAR-100 [26], and CelebA [32] datasets. We compare our HCNN-VAE against the Euclidean and two hybrid models. Following prior works, the hybrid models only include a latent hyperbolic distribution and no hyperbolic layers. Specifically, we employ the wrapped normal distributions in the Lorentz model [38] and the Poincare ball [35], respectively.
**Main results** The results in Table 3 show that our HCNN-VAE outperforms all baselines. Likewise, the hybrid models improve performance over the Euclidean model, indicating that learning the latent embeddings in hyperbolic spaces is beneficial. However, our HCNN is better at leveraging the advantages of hyperbolic geometry due to its fully hyperbolic architecture. These results suggest that our method is a promising approach for generation and for modeling latent structures in image data.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & CIFAR-10 & CIFAR-100 & Tiny-ImageNet \\ \hline Euclidean [18] & \(95.14_{\pm 0.12}\) & \(77.72_{\pm 0.15}\) & \(65.19_{\pm 0.12}\) \\ \hline Hybrid Poincaré [1; 16] & \(95.04_{\pm 0.13}\) & \(77.19_{\pm 0.50}\) & \(64.93_{\pm 0.38}\) \\ Hybrid Lorentz (Ours) & \(94.98_{\pm 0.12}\) & \(78.03_{\pm 0.21}\) & \(65.63_{\pm 0.10}\) \\ \hline HCNN Lorentz (Ours) & \(\mathbf{95.14_{\pm 0.08}}\) & \(\mathbf{78.07_{\pm 0.17}}\) & \(\mathbf{65.71_{\pm 0.13}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification accuracy (%) of ResNet-18 models. We estimate the mean and standard deviation from five runs. The best performance is highlighted in bold (higher is better).
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{FGSM} & \multicolumn{3}{c}{PGD} \\ \cline{2-7} Max. perturbation \(\epsilon\) & 0.8/255 & 1.6/255 & 3.2/255 & 0.8/255 & 1.6/255 & 3.2/255 \\ \hline Euclidean [18] & \(65.70_{\pm 0.28}\) & \(54.98_{\pm 0.39}\) & \(39.97_{\pm 0.43}\) & \(64.43_{\pm 0.29}\) & \(49.76_{\pm 0.42}\) & \(26.30_{\pm 0.40}\) \\ \hline Hybrid Poincaré [1; 16] & \(64.68_{\pm 0.40}\) & \(53.32_{\pm 0.60}\) & \(37.52_{\pm 0.50}\) & \(63.43_{\pm 0.44}\) & \(48.41_{\pm 0.60}\) & \(23.78_{\pm 0.75}\) \\ Hybrid Lorentz (Ours) & \(65.27_{\pm 0.52}\) & \(53.82_{\pm 0.49}\) & \(40.53_{\pm 0.31}\) & \(64.15_{\pm 0.53}\) & \(49.05_{\pm 0.68}\) & \(27.17_{\pm 0.40}\) \\ \hline HCNN Lorentz (Ours) & \(\mathbf{66.47_{\pm 0.27}}\) & \(\mathbf{57.14_{\pm 0.30}}\) & \(\mathbf{43.51_{\pm 0.35}}\) & \(\mathbf{65.04_{\pm 0.28}}\) & \(\mathbf{52.25_{\pm 0.34}}\) & \(\mathbf{31.77_{\pm 0.55}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification accuracy (%) after performing FGSM and PGD attacks on CIFAR-100. We estimate the mean and standard deviation from attacking five trained models. The best performance is highlighted in bold (higher is better).
Figure 3: CIFAR-100 accuracy obtained with lower dimensionalities in the final ResNet block. The standard embedding dimension is \(d_{E}=512\).
**Analysis of latent embeddings** The latent embedding space is a crucial component of VAEs, as it influences how the data's features are encoded and used for generating the output. We visually analyze the distribution of latent embeddings inferred by the VAEs. For this, the models are retrained on the MNIST [30] dataset with an embedding dimension \(d_{E}=2\). Then, the images of the training dataset are passed through the encoder and visualized as shown in Figure 4.
We observe the formation of differently shaped clusters that correlate with the ground-truth labels. While the embeddings of the Euclidean and hybrid models form many clusters that point towards the origin, the HCNN-VAE obtains curved clusters that keep a similar distance to the origin. The structures within the HCNN's latent space can be interpreted as hierarchies, where the distance to the origin represents the hierarchical level. As these structures cannot be found for the hybrid model, our results suggest that hybrid HNNs behave more like Euclidean models than hyperbolic models and cannot fully leverage the properties of hyperbolic space, resulting in inferior performance.
## 6 Conclusion
In this work, we proposed HCNN, a generalization of the convolutional neural network that learns latent feature representations in hyperbolic spaces. To this end, we formalized the necessary modules in the Lorentz model, deriving novel formulations of the convolutional layer, batch normalization, and multinomial logistic regression. We empirically demonstrated that ResNet and VAE models based on our HCNN framework achieve better performance on standard vision tasks than Euclidean and hybrid baselines, especially in adversarial and lower-dimensional settings. Additionally, we showed that using the Lorentz model in HNNs leads to better stability and performance than the Poincaré ball.
However, HCNNs are still in their early stages, and it remains to be seen to what extent they can replace Euclidean networks, as they introduce mathematical complexity and computational overhead. Moreover, our HCNN framework relies on generalizations of neural network layers that were designed for Euclidean geometry and might not fully capture the unique properties of hyperbolic geometry. Further research is needed to fully understand the properties of HCNNs and to address open questions such as optimization, scalability, and performance on other deep learning problems. We hope our work will inspire future research and development in this exciting and rapidly evolving field.
Figure 4: Embeddings of MNIST training data in 2D latent space of VAEs. Each embedding is represented by a point colored by the input’s ground truth label. The embeddings of Lorentz HNNs are projected onto the Poincaré ball for better visualization. Additionally, we give the generation FID.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} & \multicolumn{2}{c}{CelebA} \\ \cline{2-7} & Rec. FID & Gen. FID & Rec. FID & Gen. FID & Rec. FID & Gen. FID \\ \hline Euclidean & \(61.21_{\pm 0.72}\) & \(92.40_{\pm 0.80}\) & \(63.81_{\pm 0.47}\) & \(103.54_{\pm 0.84}\) & \(54.80_{\pm 0.29}\) & \(79.25_{\pm 0.89}\) \\ \hline Hybrid Poincaré [35] & \(59.85_{\pm 0.50}\) & \(90.13_{\pm 0.77}\) & \(62.64_{\pm 0.43}\) & \(\mathbf{98.19_{\pm 0.57}}\) & \(54.62_{\pm 0.61}\) & \(81.30_{\pm 0.56}\) \\ Hybrid Lorentz [38] & \(59.29_{\pm 0.47}\) & \(90.91_{\pm 0.84}\) & \(62.14_{\pm 0.35}\) & \(98.34_{\pm 0.62}\) & \(54.64_{\pm 0.34}\) & \(82.78_{\pm 0.93}\) \\ \hline HCNN Lorentz (Ours) & \(\mathbf{57.78_{\pm 0.56}}\) & \(\mathbf{89.20_{\pm 0.85}}\) & \(\mathbf{61.44_{\pm 0.64}}\) & \(100.27_{\pm 0.84}\) & \(\mathbf{54.17_{\pm 0.66}}\) & \(\mathbf{78.11_{\pm 0.95}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Reconstruction and generation FID of manifold VAEs. We estimate the mean and standard deviation from five runs. The best performance is highlighted in bold (lower is better).
## Acknowledgments and Disclosure of Funding
This work was performed on the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the Federal Ministry of Education and Research. Ahmad Bdeir was funded by the European Union's Horizon 2020 research and innovation programme under the SustInAfrica grant agreement No 861924. Kristian Schwethelm was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 225197905.
|
2304.05207 | CGXplain: Rule-Based Deep Neural Network Explanations Using Dual Linear
Programs | Rule-based surrogate models are an effective and interpretable way to
approximate a Deep Neural Network's (DNN) decision boundaries, allowing humans
to easily understand deep learning models. Current state-of-the-art
decompositional methods, which are those that consider the DNN's latent space
to extract more exact rule sets, manage to derive rule sets at high accuracy.
However, they a) do not guarantee that the surrogate model has learned from the
same variables as the DNN (alignment), b) only allow to optimise for a single
objective, such as accuracy, which can result in excessively large rule sets
(complexity), and c) use decision tree algorithms as intermediate models, which
can result in different explanations for the same DNN (stability). This paper
introduces the CGX (Column Generation eXplainer) to address these limitations -
a decompositional method using dual linear programming to extract rules from
the hidden representations of the DNN. This approach allows to optimise for any
number of objectives and empowers users to tweak the explanation model to their
needs. We evaluate our results on a wide variety of tasks and show that CGX
meets all three criteria, by having exact reproducibility of the explanation
model that guarantees stability and reduces the rule set size by >80%
(complexity) at equivalent or improved accuracy and fidelity across tasks
(alignment). | Konstantin Hemker, Zohreh Shams, Mateja Jamnik | 2023-04-11T13:16:26Z | http://arxiv.org/abs/2304.05207v1 | # CGXplain: Rule-Based Deep Neural Network Explanations Using Dual Linear Programs
###### Abstract
Rule-based surrogate models are an effective and interpretable way to approximate a Deep Neural Network's (DNN) decision boundaries, allowing humans to easily understand deep learning models. Current state-of-the-art decompositional methods, which are those that consider the DNN's latent space to extract more exact rule sets, manage to derive rule sets at high accuracy. However, they a) do not guarantee that the surrogate model has learned from the same variables as the DNN (alignment), b) only allow optimising for a single objective, such as accuracy, which can result in excessively large rule sets (complexity), and c) use decision tree algorithms as intermediate models, which can result in different explanations for the same DNN (stability). This paper introduces the **C**olumn **G**eneration e**X**plainer (CGX) to address these limitations - a decompositional method using dual linear programming to extract rules from the hidden representations of the DNN. This approach allows optimising for any number of objectives and empowers users to tweak the explanation model to their needs. We evaluate our results on a wide variety of tasks and show that CGX meets all three criteria, by having exact reproducibility of the explanation model that guarantees stability and reduces the rule set size by \(>\)80% (complexity) at equivalent or improved accuracy and fidelity across tasks (alignment).
## 1 Introduction
In spite of state-of-the-art performance, the opaqueness and lack of explainability of DNNs have impeded their wide adoption in safety-critical domains such as healthcare or clinical decision-making. A promising solution in eXplainable Artificial Intelligence (XAI) research is presented by global rule-based _surrogate models_, which approximate the decision boundaries of a DNN and represent these boundaries in simple IF-THEN-ELSE rules that make it intuitive for humans to interact with (Zilke et al., 2016; Shams et al., 2021). Surrogate models often use _decompositional_ approaches, which inspect the latent space of a DNN (e.g., its gradients) to improve performance, while _pedagogical_ approaches only utilise the inputs and outputs of the DNN.
In pursuit of the most accurate surrogate models, recent literature has primarily focussed on improving the fidelity between the DNN and the surrogate model, which refers to the accuracy of the surrogate model when predicting the DNN's outputs \(\hat{y}\) instead of the true labels \(y\). While state-of-the-art methods achieve high fidelity (Contreras et al., 2022; Espinosa et al., 2021), there are several qualitative problems with these explanations that hinder their usability in practice and have been mostly neglected in previous studies. First, if features are not fully independent, there is no guarantee that a surrogate model has learned from the same variables as the DNN, meaning that the surrogate model may provide misleading explanations that do not reflect the model's behaviour (**alignment**). Second, most rule extraction models optimise for the accuracy of the resulting rule set as a single objective, which can result in excessively large rule sets containing thousands of rules, making them impractical to use (**complexity**). Third, existing decompositional methods use tree induction to extract rules, which tends to be unstable and can result in different explanations for the same DNN, sometimes leading to more confusion than clarification (**stability**).
This paper introduces CGX (Figure 1) - a flexible rule-based decompositional method to explain DNNs at high alignment and stability, requiring only a fraction of the rules compared to current state-of-the-art methods. We combine and extend two recent innovations of decompositional explanations (i.e., using information from the hidden layers of the DNN) (Espinosa et al., 2021) and rule induction literature (i.e., generating boolean rule sets for classification) (Dash et al., 2018).
First, we suggest a paradigm shift for rule-based surrogate explanations that goes beyond optimising for accuracy as a single objective, allowing users to tailor the explanation to their needs. Concretely, we formulate the objective function of the intermediate model so that it penalises the predictive loss as well as the number of rules and terms as a joint objective. Additionally, CGX allows users to easily introduce further objectives. Second, we use a column generation approach as the intermediate model, which has proven to be more accurate and stable than tree induction and other rule mining methods. Third, our algorithm introduces _intermediate error prediction_, where the information of the DNN's hidden layers is used to predict the error of the pedagogical solution (Equation 1). Fourth, we reduce the noise created by adding all rules from the DNN's latent representation by a) conducting direct _layer-wise substitution_, which reduces the error propagation of the recursive substitution step used in prior methods, and b) dismissing rules that do not improve the performance of the explanation model. This also reduces the need to choose between decompositional and pedagogical methods, since CGX converges to the pedagogical solution in its worst case.
Contributions
* _Quality metrics_: We formalise three metrics (alignment, complexity, stability) that surrogate explanations need to achieve to be feasibly applied as an explanation model across datasets.
* _Alignment_: We improve alignment between the original and surrogate models, achieving 1-2% higher fidelity of the rule-based predictions and 10-20% higher Ranked Biased Overlap (RBO) of ranked feature importance representations.
* _Complexity_: We reduce the size of the rule sets used to explain the DNN, achieving rule sets with \(>\)80% less terms compared to state-of-the-art decompositional baselines.
* _Stability_: Our explanations are guaranteed to produce identical explanations for the same underlying model.
* _Decompositional value_: We demonstrate that decompositional methods are particularly useful for harder tasks, while pedagogical methods are sufficient for simple tasks.
## 2 Related work
**XAI & Rule-based explanations** XAI research has the objective of understanding _why_ a machine learning model makes a prediction, as well as _how_ the process behind the prediction works (Arrieta et al., 2020). This helps to increase trustworthiness (Floridi, 2019), identify causality (Murdoch et al., 2019), and establish confidence (Theodorou et al., 2017), fairness (Theodorou et al., 2017), and accessibility (Adadi & Berrada, 2018) in model predictions. _Global explainability methods_ attempt to learn a representation that applies to every sample in the data, instead of only individual samples or features (local), and then provide a set of generalisable principles, commonly referred to as a _surrogate model_ (Arrieta et al., 2020). Surrogate models can be either pedagogical or decompositional (Islam et al., 2021). **Pedagogical methods** train an explainable model on the predictions of the DNN \(\hat{y}\) instead of the true labels \(y\), still treating the DNN as a black box (Confalonieri et al., 2020; Saad & Wunsch II, 2007). Pedagogical methods have a faster runtime since they ignore the latent space of the DNN, but sacrifice predictive performance (Zilke et al., 2016). **Decompositional methods** inspect the model weights or gradients and can therefore learn a closer representation of _how_ the model makes a prediction, at the expense of runtime.
One promising category of global decompositional methods is rule extraction models such as DeepRED (Zilke et al., 2016), REM-D (Shams et al., 2021), ECLAIRE (Espinosa et al., 2021), and DeXIRE (Contreras et al., 2022). These methods learn a set of conjunctive (CNF) or disjunctive normal form (DNF) rules \(R_{x\to\hat{y}}\) that approximate the neural network's predictions \(\hat{y}\) (Zilke et al., 2016). Existing decompositional methods often use decision tree algorithms, such as C5.0 (Pandya & Pandya, 2015), for intermediate rule extraction. Thus, they learn rules that represent the relationship
between each hidden layer and the DNN predictions \(R_{h_{i}\mapsto\hat{y}}\), which are then recursively substituted to be rewritten in terms of the input features as \(R_{x\mapsto\hat{y}}\) (Shams et al., 2021). While existing surrogate methods achieve high fidelity, the resulting rule set \(R\) is often still too large (thousands of rules) to clarify the model's behaviour in practice. Recent research has attempted to reduce the complexity of rule-based surrogates by running different decision tree algorithms, pruning methods (Shams et al., 2021), or clause-wise substitution (Espinosa et al., 2021). However, existing rule-based surrogate algorithms are heavily dependent on tree-based models used for rule generation. Thus, the performance is significantly sacrificed if the tree depth is too heavily restricted, despite reducing the size of the rule set.
**Rule induction methods** Another approach to explainability is to use explainable-by-design models, one of which are rule-based representations. Many of these methods use rule mining which first produces a set of candidate terms and then implements a rule selection algorithm which selects or ranks the rules from that search space. The problem with this is that the search space is inherently restricted (Lakkaraju et al., 2016; Wang et al., 2017). Another class of methods, such as RIPPER (Cohen, 1995) construct their rule sets by greedily adding the conjunction that explains most of the remaining data. This approach comes with the problem that the rule sets are not guaranteed to be globally optimal and commonly result in large rule sets. Two popular state-of-the-art rule induction methods that aim to control rule set complexity are Bayesian Rule Sets (BRS) (Wang et al., 2017) and Boolean rules from Column Generation (CG). BRS use probabilistic models with prior parameters to construct small-size DNF rule sets. Column generation uses binarisation and large linear programming techniques to efficiently search over the exponential number of possible terms, where the rule set size can be restricted with a complexity constraint in the objective function. While all of the above rule induction methods could be used for the rule extraction, we chose CG due to its stability and flexible formulation of the objective function.
## 3 Methodology
### Quality Metrics
To improve on the shortcomings of existing decompositional methods, we first provide formal definitions to measure alignment, complexity, and stability. We assume an original model \(f(x)\) (DNN) with \(i\) hidden layers \(h_{i}\) and the rule-based surrogate model \(g(f(x))\) consisting of the rule set \(R_{x\mapsto\hat{g}}\) that was extracted using an intermediate model \(\psi(\cdot)\).
Figure 1: Overview of the decompositional CGX algorithm, showing the process to get from the DNN as starting point (1) to the explanation model (4) that approximates the DNN’s decision boundaries. We 1(a) extract the rule set \(R_{x\mapsto\hat{y}_{D}}\) by training an intermediate model on the DNN’s predictions, and 1(b) on the error of that initial rule set for each hidden layer. Our intermediate extraction through column generation (2) allows optimising for multiple objectives to extract short and concise rule sets. The substitution step (3) rewrites the intermediate rules \(I_{h_{j}\mapsto\hat{y}_{D}}\) in terms of the input variables \(I_{x\mapsto\hat{y}_{D}}\) and adds them to the surrogate model (4) if they increase its fidelity.
We define **complexity** as the size of the explanation rule set \(|R_{x\mapsto\hat{y}}|\), expressed as the sum of the number of terms of all rules in \(R\), i.e., we seek \(\min|R_{x\mapsto\hat{y}}|\).
We measure **alignment** between \(f_{x}\) and \(g_{x}\) in two different ways. First, we look at the _performance alignment_ as fidelity, which measures the predictive accuracy of the surrogate predictions \(\hat{y}_{g}\) against the original model predictions \(\hat{y}_{f}\) as \(\mu_{f,g}=\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}(\hat{y}_{f}^{(i)}=\hat{y}_{g}^{(i)})\). Second, we assess the _feature alignment_ of the resulting explanations. Feature importance is commonly used to understand which variables a model relies on when making predictions, represented as a ranked list. To validate that \(f_{x}\) and \(g_{x}\) are well-aligned, we want to ensure that both models rely on the same input features from \(X\) in their predictions. Assuming two ranked lists \(S\) and \(T\), we calculate the Ranked Biased Overlap \(\varphi_{ST}\) (Weber et al., 2010) as \(\varphi(S,T,p)=(1-p)\sum_{d=1}^{\infty}p^{d-1}A_{d}\), where \(A_{d}\) is the overlap ratio of the two lists at depth \(d\), and the geometric weights \(w_{d}=(1-p)p^{d-1}\) discount deeper ranks in the weighted sum over all evaluation depths.
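A minimal sketch of both metrics follows; truncating the RBO sum at a finite depth is an implementation assumption.

```python
import numpy as np

def fidelity(y_f, y_g):
    # mu_{f,g}: fraction of samples where the surrogate matches the DNN.
    return np.mean(np.asarray(y_f) == np.asarray(y_g))

def rbo(S, T, p=0.9, depth=None):
    # Rank-Biased Overlap between two ranked lists, truncated at a finite depth.
    depth = depth or max(len(S), len(T))
    score = 0.0
    for d in range(1, depth + 1):
        A_d = len(set(S[:d]) & set(T[:d])) / d   # agreement ratio at depth d
        score += (1 - p) * p ** (d - 1) * A_d
    return score
```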
Finally, we define **stability** as rule sets that are identical on repeated calls of the explanation methods with the same underlying model. We run the explanation model \(g_{x}\) on different seeds \(s=\{0,1,2...,j\}\), where we want to ensure that the rule sets are equivalent as \(R_{x\mapsto\hat{y}}(s_{1})=R_{x\mapsto\hat{y}}(s_{2})\).
### Column Generation as Intermediate Model
We hypothesise that the majority of the complexity, stability, and alignment issues stem from the choice of the intermediate model \(\psi(\cdot)\) in state-of-the-art decompositional methods. We use an adapted version of the column generation solver outlined in Dash et al. (2018). Instead of using \(\psi(\cdot)\) as a standalone model, we will show that the column generation solver is well-suited as an intermediate model in decompositional methods, in place of commonly used tree-based algorithms such as C4.5/C5.0 (Zilke et al., 2016; Shams et al., 2021). We start with the original restricted Master Linear Program from Dash et al. (2018), which formulates the Hamming loss, counting the number of terms that have to be removed to classify an incorrectly classified sample correctly. The Hamming loss is bound by an error and a complexity constraint. We update the negative reduced cost of the pricing subproblem from Dash et al. (2018) to include hyperparameters for the number of rules (\(\lambda_{0}\)) and the number of terms (\(\lambda_{1}\)), which are linked to the complexity constraint as a dual variable. This formulation also makes it simple to add further parameters to the complexity constraint and negative reduced cost (e.g., adding a constraint that penalises rules or terms for only one particular class).
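To illustrate how \(\lambda_{0}\) and \(\lambda_{1}\) enter the optimisation, the following is a deliberately simplified cvxpy sketch of an LP-relaxed restricted master problem with a per-rule cost of \(\lambda_{0}\) plus \(\lambda_{1}\) per term; the exact Hamming-loss formulation and the pricing subproblem follow Dash et al. (2018) and are omitted here.

```python
import cvxpy as cp
import numpy as np

def restricted_master_lp(A_pos, A_neg, n_terms, lam0, lam1, budget):
    # A_pos[i, k] = 1 if candidate rule k covers positive sample i,
    # A_neg[j, k] = 1 if rule k covers negative sample j (a false positive),
    # n_terms[k]  = number of terms in candidate rule k.
    n_rules = A_pos.shape[1]
    w = cp.Variable(n_rules, nonneg=True)           # fractional rule selection
    xi = cp.pos(1 - A_pos @ w)                      # loss on uncovered positives
    rule_cost = lam0 + lam1 * np.asarray(n_terms)   # per-rule complexity cost
    objective = cp.Minimize(cp.sum(xi) + cp.sum(A_neg @ w))
    constraints = [rule_cost @ w <= budget, w <= 1]
    cp.Problem(objective, constraints).solve()
    return w.value
```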
### CG Explainer
ECLAIRE outperforms other decompositional methods on fidelity, rule set size, and run time. Using column generation instead of tree induction as the intermediate model \(\psi(\cdot)\), we reformulate the ECLAIRE algorithm as shown in Algorithm 1 with the core objective of improving the three quality metrics we set out. We introduce two versions of the column generation explainer - a pedagogical (CGX-ped) and a decompositional implementation (CGX-dec).
**CGX-ped** extracts rules from the intermediate model to predict the DNN predictions \(\hat{y}_{D}\). This method ignores the latent space of the DNN, but can still outperform standalone column generation by guidance of the DNN's predictions:
\[\hat{y}_{ped}=R_{x\mapsto\hat{y}_{D}}(X)=\psi(X,\hat{y}_{D}) \tag{1}\]
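In code, the pedagogical variant reduces to fitting the rule learner on the DNN's predictions; the sketch below uses hypothetical `dnn.predict` and `rule_learner.fit` interfaces.

```python
def cgx_ped(X, dnn, rule_learner):
    # Pedagogical extraction (Equation 1): treat the DNN as a black box and
    # fit the intermediate rule learner on its predictions, not the true labels.
    y_dnn = dnn.predict(X).argmax(axis=-1)  # hard DNN labels (assumed interface)
    rule_learner.fit(X, y_dnn)              # yields the rule set R_{x -> y_D}
    return rule_learner
```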
**CGX-dec** (Algorithm 1) introduces three key innovations over other decompositional methods. First, we do not start with an empty rule set, but use the pedagogical solution (Equation 1) as a starting point (line 2). Second, building on the pedagogical rule set, the algorithm iterates through the hidden layers. To improve on the pedagogical solution at each layer, we run _intermediate error prediction_, applying the intermediate model \(\psi(\cdot)\) to each hidden layer to extract rules that predict the error \(\hat{e}\) of the pedagogical solution (line 5). That is, we specifically learn rules that discriminate between false and correct predictions of the current best rule set, therefore resulting in rules that would improve this solution. The final update is the substitution method - previous approaches recursively replace the rules (Shams et al., 2021) or terms (Espinosa et al., 2021) of the hidden layer \(h_{j+1}\) with the terms for each output class from the previous layer \(h_{j}\) until all hidden rules can be rewritten in terms of the input features \(X\). Since not every hidden layer can be perfectly represented in terms of the input, the substitution step always contains an error which propagates
down the layers as the same method is applied recursively. Instead, we use the direct rule substitution step outlined in Algorithm 2. Similar to the CG solver, we first binarise our input features as rule thresholds (line 1). After computing the conjunctions of the candidate rules, we calculate the error of each candidate against the hidden-layer predictions \(\hat{y}_{h_{ij}}\) and select the set of candidate terms with the lowest error (Algorithm 2, lines 3 & 4). Knowing that the substitution step still contains an error, some rules contribute more to the performance than others (rules with high errors are likely to decrease predictive performance). Therefore, the last update in Algorithm 1 is that the rules resulting from the substitution step are only added to the rule set if they improve the pedagogical solution (lines 9 & 10).
**Input**: rule \(r_{h_{ij}\mapsto\hat{y}}\)
**Input**: Training data \(X=\{x^{(1)},...,x^{(N)}\}\)
**Hyperparameter**: # of rule candidate combinations \(k\)
**Output**: substituted rule(s) \(r_{x\mapsto\hat{y}}\)
```
1:\(X_{bin}\leftarrow\texttt{BinarizeFeatures}(X,bins)\)
2:\(r_{cand}\leftarrow\texttt{ComputeConjunctions}(k,X_{bin})\)
3:\(Errors_{cand}\leftarrow 1-\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}(\hat{y}_{h_{ij}}^{(i)}=\hat{y}_{r_{cand}}^{(i)})\)
4:\(r_{x\mapsto\hat{y}}\leftarrow\arg\min(Errors_{r_{cand}})\)
5:**return** \(r_{x\mapsto\hat{y}}\)
```
**Algorithm 2** Direct rule substitution
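A small Python rendering of Algorithm 2 is given below; it enumerates size-\(k\) conjunctions exhaustively, which is an illustrative simplification of the candidate generation.

```python
import numpy as np
from itertools import combinations

def direct_substitution(X_bin, y_hidden, k):
    # Among all conjunctions of k binarized input features, pick the one
    # that best reproduces the hidden-layer rule's predictions (lines 1-4).
    best_err, best_terms = np.inf, None
    for terms in combinations(range(X_bin.shape[1]), k):
        y_cand = X_bin[:, list(terms)].all(axis=1)  # conjunction fires iff all terms hold
        err = np.mean(y_cand != y_hidden)
        if err < best_err:
            best_err, best_terms = err, terms
    return best_terms, best_err
```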
## 4 Experiments
Given the alignment, complexity, and stability shortcomings of existing methods, we design computational experiments to answer the following **research questions**:
* **Q1.1 Performance alignment**: Does the proven higher performance of column generation rule sets lead to higher fidelity with the DNN?
* **Q1.2 Feature alignment**: How well do aggregate measures such as feature importance from the rule set align with local explanation methods of the DNN?
* **Q2 Complexity**: Can we control the trade-off between explainability (i.e., low complexity) and accuracy by optimising for a joint objective?
* **Q3 Stability**: Do multiple runs of our method produce the same rule set for the same underlying model?
* **Q4 Decompositional value**: Is the performance gain of decompositional methods worth the higher time complexity compared to pedagogical methods?
### Baselines & Setup
We use both pedagogical and decompositional explanation baselines in our experiments. For pedagogical baselines, we re-purpose state-of-the-art rule induction and decision tree methods to be trained on the DNN predictions \(\hat{y}\) instead of the true labels \(y\). Concretely, we use the C5.0 decision tree algorithm (Pandya & Pandya, 2015), Bayesian Rule Sets (Wang et al., 2017), and RIPPER (Cohen, 1995). As our decompositional baseline, we use the ECLAIRE algorithm (Espinosa et al., 2021), which has been shown to outperform other decompositional methods in both speed and accuracy. Additionally, we benchmark against the standalone Column Generation method (Dash et al., 2018) trained on the true labels \(y\) to show the benefit of applying it as an intermediate model in both pedagogical and decompositional settings. We run all baselines and our models on five different real-world and synthetic classification datasets, showing the scalability and adaptability to different numbers of samples, features, and class imbalances (Appendix A.1).
We run all experiments on five different random folds to initialise the train-test splits of the data, the random initialisations of the DNN as well as random inputs of the baselines. All experiments were run on a 2020 MacBook Pro with a 2GHz Intel i5 processor and 16 GB of RAM. For running the baselines, we use open-source implementations published in conjunction with RIPPER, BRS, and ECLAIRE, running hyperparameter search for best results as set out in the respective papers. For comparability, we use the same DNN topology (number and depth of layers) as used in the experiments in (Espinosa et al., 2021). For hyperparameter optimisation of the DNN, we use the keras implementation of the Hyperband algorithm (Li et al., 2018) to search for the optimal learning rate, hidden and output layer activations, batch normalisation, dropout, and L2 regularisation. The CGX implementation uses the MOSEK solver (Andersen & Andersen, 2000) as its cvxpy backend. The code implementation of the CGX algorithm can be found at [https://github.com/konst-int-i/cgx](https://github.com/konst-int-i/cgx).
## 5 Results
**Performance alignment (Q1.1)** The primary objective of performance alignment is the fidelity between the predictions of the rule set \(\hat{y}_{R}\) compared to the model predictions \(\hat{y}_{DNN}\), since we want an explanation model that mimics the DNNs behaviour as closely as possible. The results in Table 1 show that CGX-ped has a higher fidelity compared to the baseline methods on most datasets by approximately 1-2% whilst having significantly fewer rules. While RIPPER has a slightly higher fidelity on the MAGIC dataset, both CGX-ped and CGX-dec achieve competitive performance whilst only requiring 5% of the rules. This table also shows that a high fidelity does not guarantee a high accuracy on the overall task, which is visible on the FICO dataset. While CGX achieves a very high fidelity in this task, the overall accuracy is relatively low. This is caused by the underlying DNN struggling to perform well in this task. Notably, the performance of CGX-dec and CGX-ped is equivalent on the XOR dataset, indicating that there were no rules to add from the intermediate layers. This is because the XOR dataset is a relatively simple synthetic dataset, where the pedagogical version already identifies nearly the exact thresholds that were used to generate the target (see Figure 2c).
**Feature alignment (Q1.2)** Going beyond fidelity, Figure 2a shows the mean RBO score \(\varphi\) between the feature importance derived from the CGX rule set and the aggregated importance of local methods (SHAP and LIME) applied to the original DNN. A higher score shows that the two ranked lists are more aligned and, as such, that the DNN and the rule-based surrogate model rely more closely on the same features for their explanations. Figure 2a compares the decompositional CGX-dec method to the best-performing decompositional baseline (ECLAIRE) and shows that CGX-dec achieves a higher feature alignment across all datasets.
**Complexity (Q2)** Table 1 shows that both the pedagogical and decompositional methods achieve highly competitive results with only a fraction of the rules required. Compared to pedagogical baselines, CGX-ped outperforms on the majority of the tasks. While the pedagogical BRS baseline
produces fewer rules for some datasets (FICO and MB-HIST), its total number of terms is more than double that of CGX across all datasets due to the longer chained rules this method produces. Additionally, the BRS fidelity is not competitive with CGX-ped or CGX-dec. Looking at ECLAIRE as our decompositional baseline, the results show that CGX-dec only requires a fraction of the terms. In the case of the MAGIC dataset, ECLAIRE required \(>\)100x more rules than our method, while for the other datasets the multiple ranges from 10-20x.
**Stability (Q3)** Figure 2c shows that CGX (both versions) results in identical explanations when running only the explainability method on a different random seed, keeping the data folds and random seed of the DNN identical. We observe that CGX produces the exact same rule set on repeated runs, while our decompositional baseline produces different explanations, which can be confusing to users. Note that this stability is different from the standard deviation shown in Table 1, where we would expect variation from different splits of the data and random initialisations of the DNN.
**Value of decomposition (Q4)** We acknowledge that the time complexity of decompositional methods scales linearly with the number of layers, which makes the pedagogical CGX-ped implementation an attractive alternative for very deep network topologies. To help decide whether to use pedagogical or decompositional methods, we looked at how much the information from the DNN's latent space
\begin{table}
\begin{tabular}{l l|c c c c}
**Dataset** & **Model** & **Rule Fid.** & **Rule Acc.** & **\# Rules** & **\# Terms** \\ \hline \multirow{7}{*}{XOR} & CG (standalone) & \(78.0\pm 16.8\) & \(81.1\pm 18.5\) & \(5.2\pm 1.9\) & \(21.6\pm 12.7\) \\ & Ripper (ped) & \(53.5\pm 3.9\) & \(53.8\pm 4.0\) & \(7.4\pm 3.6\) & \(14.4\pm 7.5\) \\ & BRS (ped) & \(91.3\pm 2.0\) & \(95.5\pm 1.3\) & \(9.0\pm 0.3\) & \(80.9\pm 3.0\) \\ & C5 (ped) & \(53.0\pm 0.2\) & \(52.6\pm 0.2\) & \(1\pm 0\) & \(1\pm 0\) \\ & ECLAIRE (dec) & \(91.4\pm 2.4\) & \(91.8\pm 2.6\) & \(87\pm 16.2\) & \(263\pm 49.1\) \\ & CGX-ped (ours) & \(92.4\pm 1.1\) & \(96.7\pm 1.7\) & \(3.6\pm 1.8\) & \(104\pm 7.2\) \\ & CGX-dec (ours) & \(92.4\pm 1.1\) & \(96.7\pm 1.7\) & \(3.6\pm 1.8\) & \(104\pm 7.2\) \\ \hline \multirow{7}{*}{MAGIC} & CG (standalone) & \(85.7\pm 2.5\) & \(82.7\pm 0.3\) & \(5.2\pm 0.8\) & \(13.0\pm 2.4\) \\ & Ripper (ped) & \(91.9\pm 0.9\) & \(81.7\pm 0.5\) & \(152.2\pm 14.6\) & \(462.8\pm 53.5\) \\ & BRS (ped) & \(84.6\pm 2.1\) & \(79.3\pm 1.3\) & \(5.8\pm 0.3\) & \(24.1\pm 4.8\) \\ & C5 (ped) & \(85.4\pm 2.5\) & \(82.8\pm 0.9\) & \(57.8\pm 4.5\) & \(208.7\pm 37.6\) \\ & ECLAIRE (dec) & \(87.4\pm 1.2\) & \(84.6\pm 0.5\) & \(392.2\pm 73.9\) & \(1513.4\pm 317.8\) \\ & CGX-ped (ours) & \(90.4\pm 1.7\) & \(80.6\pm 0.6\) & \(\mathbf{5.0\pm 0.7}\) & \(\mathbf{11.6\pm 1.9}\) \\ & CGX-dec (ours) & \(91.5\pm 1.3\) & \(84.4\pm 0.8\) & \(7.4\pm 0.8\) & \(11.6\pm 1.9\) \\ \hline \multirow{7}{*}{MB-ER} & CG (standalone) & \(92.1\pm 1.1\) & \(92.0\pm 1.1\) & \(5.0\pm 0.7\) & \(15.4\pm 2.2\) \\ & Ripper (ped) & \(86.5\pm 2.2\) & \(85.2\pm 3.0\) & \(22.0\pm 9.2\) & \(30.2\pm 21.6\) \\ & BRS (ped) & \(90.9\pm 1.2\) & \(88.4\pm 0.9\) & \(8.9\pm 1.1\) & \(57.6\pm 18.5\) \\ & C5 (ped) & \(89.3\pm 1\) & \(92.7\pm 0.9\) & \(21.8\pm 3\) & \(72.4\pm 14.5\) \\ & ECLAIRE (dec) & \(94.7\pm 0.2\) & \(94.1\pm 1.6\) & \(48.3\pm 15.3\) & \(137.6\pm 24.7\) \\ & CGX-ped (ours) & \(93.7\pm 1.1\) & \(92.0\pm 0.9\) & \(\mathbf{4.2\pm 0.4}\) & \(\mathbf{17.0\pm 1.9}\) \\ & CGX-dec (ours) & \(\mathbf{94.7\pm 0.9}\) & \(92.4\pm 0.7\) & \(5.9\pm 1.1\) & \(21.8\pm 3.4\) \\ \hline \multirow{7}{*}{MB-HIST} & CG (standalone) & \(88.5\pm 2.3\) & \(91.1\pm 1.4\) & \(4.0\pm 0.7\) & \(19.4\pm 2.4\) \\ & Ripper (ped) & \(86.7\pm 3.7\) & \(88.1\pm 3.3\) & \(13.8\pm 3.4\) & \(35.0\pm 11.6\) \\ & BRS (ped) & \(81.7\pm 2.1\) & \(79.9\pm 2.5\) & \(5.1\pm 0.2\) & \(40.3\pm 5.8\) \\ \cline{1-1} & C5 (ped) & \(89.3\pm 1\) & \(87.9\pm 0.9\) & \(12.8\pm 3.1\) & \(35.2\pm 11.3\) \\ \cline{1-1} & ECLAIRE (dec) & \(89.4\pm 1.8\) & \(88.9\pm 2.3\) & \(30\pm 12.4\) & \(74.7\pm 15.7\) \\ \cline{1-1} & CGX-ped (ours) & \(89.1\pm 3.6\) & \(89.4\pm 2.5\) & \(\mathbf{5.2\pm 1.9}\) & \(\mathbf{27.8\pm 7.6}\) \\ \cline{1-1} & CGX-dec (ours) & \(\mathbf{89.6\pm 3.6}\) & \(\mathbf{90.2\pm 2.5}\) & \(6.8\pm 2.0\) & \(32.2\pm 8.3\) \\ \hline \multirow{7}{*}{FICO} & CG (standalone) & \(86.4\pm 2.8\) & \(70.6\pm 0.4\) & \(3.3\pm 1.1\) & \(\mathbf{8.6\pm 3.6}\) \\ \cline{1-1} & Ripper (ped) & \(88.8\pm 2.8\) & \(70.2\pm 1.0\) & \(99.2\pm 14.5\) & \(307.4\pm 41.6\) \\ \cline{1-1} & BRS (ped) & \(84.8\pm 2.3\) & \(65.4\pm 2.1\) & \(3.1\pm 0.2\) & \(18\pm 3.2\) \\ \cline{1-1} & C5 (ped) & \(72.7\pm 2.1\) & \(81.8\pm 1.6\) & \(34.8\pm 4.1\) & \(125.6\pm 35.2\) \\ \cline{1-1} & ECLAIRE (dec) & \(66.5\pm 2.5\) & \(\mathbf{84.9\pm 1.7}\) & \(161.0\pm 12.3\) & \(298.0\pm 21.2\) \\ \cline{1-1} & CGX-ped (ours) & \(91.1\pm 0.1\) & \(70.5\pm 0.8\) & \(\mathbf{3.6\pm 1.1}\) & \(\mathbf{9.6\pm 3.6}\) \\ \cline{1-1} & CGX-dec (ours) & \(\mathbf{92.4\pm 0.2}\) & \(71.4\pm 1\) & \(5.1\pm 1.3\) & \(13.4\pm 2.1\) \\ \hline \end{tabular}
\end{table}
Table 1: Overview of CGX-ped and CGX-dec performance alignment (fidelity) and complexity (# terms) compared to the baselines across datasets. CGX-ped outperforms all baselines across the majority of tasks. While RIPPER has a slightly higher fidelity on the MAGIC dataset, CGX only requires \(\sim\)5% of the terms.
(lines 3-10 in Algorithm 1) improves the pedagogical solution (line 2 in Algorithm 1). Figure 2(b) shows that the added performance gained from the hidden layers' information is related to the difficulty of the task. For "easy" tasks (i.e., those where the DNN has a high accuracy/AUC, such as the XOR task), CGX-ped and CGX-dec converge to the same solution, since no rules from the hidden layers increase the fidelity. Figure 2(b) also shows that the performance difference increases with the difficulty of the task. For the FICO task, where the DNN accuracy is only just over 70%, the surrogate model gains the most information from the hidden layers.
## 6 Discussion
This paper introduces a global decompositional method that uses column generation as its intermediate model. We improve rule-based explanations by intermediate error predictions from the latent space of a DNN, coupled with layer-wise substitution to reduce error propagation. CGX enables research and industry to customise surrogate explanations for different end users by parameterising the accuracy-explainability trade-off. First, we introduced a quantitative measure to analyse the **feature alignment** between the surrogate model and local explanations of the DNN and show that our surrogate model explanations are more closely aligned to other local explanation methods of the original model. Second, the design of the objective functions allows assigning a higher cost to surrogate model complexity (i.e., the number of terms in the rule set) using an extra hyperparameter. We demonstrate that this achieves significantly **lower complexity** and enables users to control the accuracy-interpretability trade-off by setting higher or lower penalties on the number of rules. Third, the results show that CGX is independent of its initialisation (solution to the Master Linear Program), which leads to **improved stability** compared to methods using tree induction for rule extraction. Additionally, CGX requires **fewer hyperparameters** compared to tree-based algorithms such as C5, hence requiring less fine-tuning to achieve competitive results. While this introduces the lambda parameter to enable users to control the length of the resulting rule set, it is also possible to run the solver unconstrained. Beyond these benefits, having rule-based surrogate models offers end users _intervenability_, as they can amend the rule set to encode further domain knowledge.
The key limitation of CGX and decompositional methods more generally is that the runtime is highly dependent on the number of hidden DNN layers and the number of columns in \(X\). We attempt to mitigate this problem by showing that CGX-ped is a highly competitive alternative, especially for simple tasks. For more difficult tasks, however, the decompositional method still delivers better explanations with higher fidelity. The implementation will be open-sourced as a pip-installable Python package.
Figure 2: Overview of CGX performance with respect to alignment (a), task difficulty (b), and stability (c). Subfigure (a) shows the mean Ranked Biased Overlap of CGX compared to ECLAIRE, indicating that CGX’s rule sets have a higher _feature alignment_. Subfigure (b) compares task difficulty (DNN prediction error, x-axis) with the incremental fidelity improvement (y-axis) when using CGX-dec over CGX-ped. As tasks get more difficult, CGX-dec adds relatively more fidelity compared to CGX-ped. Subfigure (c) shows that CGX has exact reproducibility for the same underlying model.
###### Acknowledgements.
KH acknowledges support from the Gates Cambridge Trust via the Gates Cambridge Scholarship.
|
2303.01978 | Robust One-Class Classification with Signed Distance Function using
1-Lipschitz Neural Networks | We propose a new method, dubbed One Class Signed Distance Function (OCSDF),
to perform One Class Classification (OCC) by provably learning the Signed
Distance Function (SDF) to the boundary of the support of any distribution. The
distance to the support can be interpreted as a normality score, and its
approximation using 1-Lipschitz neural networks provides robustness bounds
against $l2$ adversarial attacks, an under-explored weakness of deep
learning-based OCC algorithms. As a result, OCSDF comes with a new metric,
certified AUROC, that can be computed at the same cost as any classical AUROC.
We show that OCSDF is competitive against concurrent methods on tabular and
image data while being way more robust to adversarial attacks, illustrating its
theoretical properties. Finally, as exploratory research perspectives, we
theoretically and empirically show how OCSDF connects OCC with image generation
and implicit neural surface parametrization. Our code is available at
https://github.com/Algue-Rythme/OneClassMetricLearning | Louis Bethune, Paul Novello, Thibaut Boissin, Guillaume Coiffier, Mathieu Serrurier, Quentin Vincenot, Andres Troya-Galvis | 2023-01-26T15:40:10Z | http://arxiv.org/abs/2303.01978v2 | # Robust One-Class Classification with Signed Distance Function
###### Abstract
We propose a new method, dubbed One Class Signed Distance Function (OCSDF), to perform One Class Classification (OCC) by provably learning the Signed Distance Function (SDF) to the boundary of the support of any distribution. The distance to the support can be interpreted as a normality score, and its approximation using 1-Lipschitz neural networks provides robustness bounds against \(l2\) adversarial attacks, an under-explored weakness of deep learning-based OCC algorithms. As a result, OCSDF comes with a new metric, certified AUROC, that can be computed at the same cost as any classical AUROC. We show that OCSDF is competitive against concurrent methods on tabular and image data while being way more robust to adversarial attacks, illustrating its theoretical properties. Finally, as exploratory research perspectives, we theoretically and empirically show how OCSDF connects OCC with image generation and implicit neural surface parametrization. Our code is available at [https://anonymous.4open.science/r/CSDFL](https://anonymous.4open.science/r/CSDFL)
Machine Learning, ICML
## 1 Introduction
One class classification (OCC) is an instance of binary classification where all the points of the dataset at hand belong to the same (positive) class. The challenge of this task is to construct a decision boundary without using points from the other (negative) class. It has various safety-critical applications in anomaly detection, for instance to detect banking fraud, cyber-intrusion or industrial defect, in out-of-distribution detection, to prevent wrong decisions of Machine Learning models, or in Open-Set-Recognition. However, OCC algorithms suffer from limitations such as the **lack of negative data**, and **robustness issues**(Azizmalayeri et al., 2022), the latter being an under-explored topic in the OCC spectrum. Even though some algorithms do not use negative examples, many work cope with the lack of negative data with Negative Sampling, either artificially (Sipple, 2020) or using outlier exposure (Hendrycks and Dietterich, 2019; Fort et al., 2021). However, such samplings are often biased or heuristic. As for robustness, although some works design robust algorithms (Goyal et al., 2020; Lo et al., 2022), it is always only empirically demonstrated (Hendrycks and Dietterich, 2019).
In this paper, we introduce a new framework to perform OCC based on the Signed Distance Function (SDF), a function traditionally used in computer graphics. Assume the positive samples are independently and identically obtained from a distribution \(\mathbb{P}_{X}\) with compact support \(\mathcal{X}\subset\mathbb{R}^{d}\). Let \(\partial\mathcal{X}=\overline{\mathcal{X}}\setminus\mathring{\mathcal{X}}\) be the boundary of the distribution. The Signed Distance Function is the function \(\mathcal{S}:\mathbb{R}^{d}\rightarrow\mathbb{R}\):
\[\mathcal{S}(x)=\begin{cases}d(x,\partial\mathcal{X})&\text{if }x\in \mathcal{X},\\ -d(x,\partial\mathcal{X})&\text{otherwise},\end{cases} \tag{1}\]
where \(d(x,\partial\mathcal{X})=\inf_{z\in\partial\mathcal{X}}\|x-z\|_{2}\). The idea of our algorithm, which we call One Class Signed Distance Function (OCSDF), is to learn the SDF to the boundary of the positive data distribution and use it as a normality score. We show that the Hinge Kantorovich-Rubinstein (HKR) loss introduced by Serrurier et al. (2021) allows provably learning the SDF with a 1-Lipschitz network.
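To build intuition for Eq. 1, the following toy numpy example evaluates the SDF of the unit disk, whose boundary is approximated by a point cloud on the unit circle; this is an illustrative construction, not the learning procedure proposed here.

```python
import numpy as np

def sdf(x, boundary_points, inside_fn):
    # Empirical signed distance (Eq. 1): distance to the boundary point cloud,
    # signed positively inside the support and negatively outside.
    d = np.linalg.norm(boundary_points - x, axis=-1).min()
    return d if inside_fn(x) else -d

theta = np.linspace(0, 2 * np.pi, 1000)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
inside = lambda x: np.linalg.norm(x) <= 1
print(sdf(np.array([0.5, 0.0]), circle, inside))  # ~ 0.5 (inside the disk)
print(sdf(np.array([2.0, 0.0]), circle, inside))  # ~ -1.0 (outside the disk)
```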
SDF exhibits desirable properties. First, by implicitly parametrizing the domain \(\mathcal{X}\), it allows efficiently sampling points outside of \(\mathcal{X}\) and performing principled Negative Sampling. Second, the SDF fulfils the Eikonal equation: \(\|\nabla_{x}\mathcal{S}(x)\|=1\). In particular, \(\mathcal{S}\) is 1-Lipschitz with respect to the \(l2\)-norm: \(\forall x,z\in\mathbb{R}^{d},\ |\mathcal{S}(x)-\mathcal{S}(z)|\leq\|x-z\|_{2}\). This property provides exact robustness certificates for OCSDF in the form of a certified AUROC that can be computed at the same cost as the classical AUROC. This regularity translates into solid empirical robustness compared to other OCC baselines. In other words, OCSDF alleviates both the **lack of negative data** and the **robustness issue**.
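Because \(\mathcal{S}\) is 1-Lipschitz, an attacker constrained to \(\|\delta\|_{2}\leq\epsilon\) can change any score by at most \(\epsilon\), so shifting scores pessimistically before the standard computation yields a certified AUROC. The sketch below assumes this shift-based convention.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def certified_auroc(scores_in, scores_out, eps):
    # Worst case under an eps-bounded l2 attack: in-distribution scores drop
    # by eps, out-of-distribution scores rise by eps.
    y = np.concatenate([np.ones_like(scores_in), np.zeros_like(scores_out)])
    s = np.concatenate([scores_in - eps, scores_out + eps])
    return roc_auc_score(y, s)
```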
We go further and highlight interesting research perspectives regarding OCSDF. Indeed, we show that learning the SDF with a 1-Lipschitz network enables a generative procedure for visualizing points at the boundary of \(\mathcal{X}\). Moreover, it implicitly parametrizes the shape of \(\mathcal{X}\), which connects One-Class Classification with implicit surface parametrization, intensively used in computer graphics for shape reconstruction.
Our contributions are as follows. **(1)** We introduce a new OCC framework based on the Signed Distance Function to the boundary of the data distribution. We theoretically demonstrate that the SDF can be learned with a 1-Lipschitz neural net using the Hinge Kantorovich-Rubinstein (HKR) loss and Negative Sampling; **(2)** We evaluate the performance of OCSDF on several benchmarks and show its benefits for theoretical and empirical robustness; and **(3)** We demonstrate how OCSDF extends the applications of One Class Classification from traditional OOD detection to generative visualization and implicit surface parametrization for shape reconstruction from point clouds.
## 2 Related Work
One Class Classification (OCC). OCC amounts to finding a domain containing the support of the data distribution, using only samples from that (positive) class. That is why OCC is mainly used in Out-Of-Distribution (OOD), anomaly or novelty detection, with positive samples considered In-Distribution (ID) and negative ones as OOD, anomalies or novelties. This task dates back to (Sager, 1979; Hartigan, 1987) and was popularized for anomaly detection with One-Class Support Vector Machines (OC-SVM) (Scholkopf et al., 1999). Since then, the field of OCC has flourished with many well-established algorithms such as Local Outlier Factors (Breunig et al., 2000), Isolation Forests (Liu et al., 2008) and their variants (see (Han et al., 2022) for a thorough benchmark). More recently, since Deep-SVDD (Ruff et al., 2018) - followed by several works such as (Bergman and Hoshen, 2019; Golan and El-Yaniv, 2018; Goyal et al., 2020; Zenati et al., 2018; Sabokrou et al., 2018) - Deep Learning has emerged as a relevant alternative for OCC thanks to its capacity to handle high-dimensional data. However, methods in this field suffer from their lack of **robustness and certifications**, which makes them vulnerable to adversarial attacks. In addition, they still struggle to cope with the **lack of OOD data**. In this paper, we tackle these problems with an OCC algorithm based on approximating the SDF using **1-Lipschitz neural nets**. Moreover, the SDF being intensively used in Computer Graphics, our algorithm establishes a new link between OCC and **implicit surface** parametrization.
SDF for neural implicit surfaces. Historically, signed distance functions have been used in computer graphics to parametrize a surface as the level set of some function (Novello et al., 2022). Given an incomplete or unstructured representation of a geometrical object (like a 3D point cloud or a triangle soup), recent methods aim at representing a smooth shape either as vectors in the latent space of a generative model (Achlioptas et al., 2018; Ben-Hamu et al., 2018; Groueix et al., 2018; Chou et al., 2022) or directly as parameters of a neural net (Park et al., 2019; Atzmon and Lipman, 2020). The first method allows for easy shape interpolation, while the latter proved to be a more robust approach (Davies et al., 2021). Those neural implicit surfaces alleviate both the memory requirements of voxel-based representations and the combinatorial nature of meshes, making them ideally suited for rendering using ray marching (Hart, 1995) and constructive solid geometry. In those contexts, the constraint \(\|\nabla_{x}f(x)\|\leq 1\) is necessary to guarantee the validity of the geometrical query, while having \(\|\nabla_{x}f(x)\|\) as close as possible to 1 allows for greedier queries and faster computation times.
Figure 1: Summary of One Class Signed Distance Function (OCSDF). We start with a uniform negative sampling, then fit a 1-Lipschitz classifier \(f_{\theta}\) using the Hinge Kantorovich-Rubinstein loss. We apply the Adapted Newton-Raphson Algorithm 1 to attract the points towards the boundary of the domain \(\partial\mathcal{X}\), thanks to the smoothness of \(f_{\theta}\), which in addition allows providing robustness certificates.
In practice, training an SDF requires a dataset \((p,d)\) of points \(p\in\mathbb{R}^{3}\) with their corresponding signed distance \(d\) to the desired surface. Computing those distances requires the existence and availability of a ground truth, which is not always the case. Moreover, training tends to be unstable in general, and special care is needed for most computer graphics applications (Sharp and Jacobson, 2022). Our method can instead be trained to approximate a surface without prior knowledge of the distances and is provably robust.
1-Lipschitz neural nets. As noticed in (Bethune et al., 2022; Brau et al., 2023), 1-Lipschitz neural nets (Stasiak and Yatsymirskyy, 2006; Li et al., 2019; Su et al., 2022) are naturally linked to the signed distance function. By construction, they fulfil \(\|\nabla_{x}f(x)\|\leq 1\) on the whole input space. They boast a rich literature, especially for convolutional neural nets (Gayer and Sheshkus, 2020; Wang et al., 2020; Liu et al., 2021; Achour et al., 2021; Li et al., 2019; Trockman and Kolter, 2021; Singla and Feizi, 2021). These networks benefit from several appealing properties: they are not subject to exploding or vanishing gradients (Li et al., 2019), they generalize well (Bartlett et al., 2019; Bethune et al., 2022), and they are elegantly connected to optimal transport theory (Arjovsky et al., 2017; Serrurier et al., 2021). 1-Lipschitz neural nets also benefit from certificates against \(l_2\)-attacks (Li et al., 2019; Tsuzuku et al., 2018); hence the approximation of \(\mathcal{S}\) is robust against \(l_2\)-adversarial attacks _by design_.
Robustness and certification. While robustness comes with many aspects, this work focuses mainly on adversarial attacks (Szegedy et al., 2014). An extensive literature explores the construction of efficient attacks (Goodfellow et al., 2014; Brendel et al., 2018; Carlini and Wagner, 2017). As nearly any deep learning architecture is vulnerable, defences have also been developed, notably adversarial training (Madry et al., 2018; Zhang et al., 2019; Shafahi et al., 2019) and randomized smoothing (Cohen et al., 2019; Carlini et al., 2022). Since early works pointed out the link between the Lipschitz constant of a network and its robustness, Lipschitz-constrained networks have also been studied (Anil et al., 2019; Serrurier et al., 2021). Similarly to classifiers, OCC algorithms based on deep neural nets suffer from a natural weakness to adversarial attacks (Azizmalayeri et al., 2022). Although some works design robust algorithms (Goyal et al., 2020; Lo et al., 2022), the robustness achieved is only demonstrated empirically (Hendrycks and Dietterich, 2019). Few works provide theoretical certifications (we only found (Bitterwolf et al., 2020), based on interval bound propagation). In this work, we leverage the properties of 1-Lipschitz networks to provide certifications.
Tackling the lack of OOD data. The previously mentioned OCC and OOD algorithms, as well as many others (Hendrycks and Gimpel, 2018; Hsu et al., 2020), are designed to avoid the need for OOD data. However, some works aim at falling back to classical binary classification by artificially generating negative samples. The idea of Negative Sampling is not recent: it appeared in (Forrest et al., 1994) for detecting computer viruses and for emulating the distinction made by antibodies between pathogens and body cells (Gonzalez et al., 2002). It was introduced in anomaly detection by (Ayara et al., 2002) and studied by several works summarized in (Jinyin and Dongyong, 2011), but lost popularity due to its practical inefficiency (e.g. compared to One-Class Support Vector Machines (OC-SVM) (Stibor et al., 2005)). Recently, some works revived the idea of using OOD data, either by artificial negative sampling (Lee et al., 2018; Sipple, 2020; Goyal et al., 2020; Pourreza et al., 2021), or by using OOD data from other sources, a procedure called outlier exposure (Fort et al., 2021; Hendrycks et al., 2019). However, outlier exposure suffers from bias since the OOD data does not come from the same data space. Therefore, we follow the first idea and sample negative data points close to the domain \(\mathcal{X}\), thanks to the orthogonal-neural-net-based estimation of the SDF.
## 3 Method
The method aims to learn the Signed Distance Function (SDF) by reformulating the one-class classification of \(\mathbb{P}_{X}\) as a binary classification of \(\mathbb{P}_{X}\) against a carefully chosen distribution \(Q(\mathbb{P}_{X})\). We show that this formulation yields desirable properties, especially when the chosen classifier is a 1-Lipschitz neural net trained with the Hinge Kantorovich-Rubinstein (HKR) loss.
### SDF learning formulated as binary classification
We formulate SDF learning as a binary classification that consists of classifying samples from \(\mathbb{P}_{X}\) against samples from a complementary distribution, as defined below.
**Definition 1** (\(\overset{B,\epsilon}{\sim}\) Complementary Distribution (informal)): _Let \(Q\) be a distribution of compact support included in \(B\), with disjoint support from that of \(\mathbb{P}_{X}\) that "fills" the remaining space, with \(2\epsilon\) gap between \(\mathcal{X}\) and \(\text{supp}\ Q\). Then we write \(Q\overset{B,\epsilon}{\sim}\mathbb{P}_{X}\)._
A formal definition is given in Appendix A. Binary classification between \(\mathbb{P}_{X}\) and any \(Q\overset{B,\epsilon}{\sim}\mathbb{P}_{X}\) allows the construction of the optimal signed distance function, using the Hinge Kantorovich-Rubinstein (HKR) loss (Serrurier et al., 2021), thanks to the following theorem.
**Theorem 1**: **SDF Learning with HKR loss.** _Let \(\mathcal{L}_{m,\lambda}^{\text{hkr}}(yf(x))=\lambda\max\left(0,m-yf(x)\right)-yf(x)\) be the Hinge Kantorovich-Rubinstein loss, with margin \(m=\epsilon\), regularization \(\lambda>0\), prediction \(f(x)\) and label \(y\in\{-1,1\}\). Let \(Q\) be a probability distribution on \(B\). Let \(\mathcal{E}^{\text{hkr}}(f)\) be the population risk:_
\[\begin{split}\mathcal{E}^{\text{hkr}}(f,\mathbb{P}_{X},Q):=&\mathbb{E}_{x\sim\mathbb{P}_{X}}[\mathcal{L}^{\text{hkr}}_{m,\lambda}(f(x))]\\ &+\mathbb{E}_{z\sim Q}[\mathcal{L}^{\text{hkr}}_{m,\lambda}(-f(z))].\end{split} \tag{2}\]
_Let \(f^{*}\) be a minimizer of the population risk, whose existence is guaranteed by the Arzela-Ascoli theorem (Bethune et al., 2022):_
\[f^{*}\in\operatorname*{arg\,inf}_{f\in\text{Lip}_{1}(\mathbb{R}^{d},\mathbb{R})}\mathcal{E}^{\text{hkr}}(f,\mathbb{P}_{X},Q), \tag{3}\]
_where \(\text{Lip}_{1}(\mathbb{R}^{d},\mathbb{R})\) is the set of Lipschitz functions \(\mathbb{R}^{d}\to\mathbb{R}\) of constant \(1\). Assume that \(Q\overset{B,\epsilon}{\sim}\mathbb{P}_{X}\). **Then**, \(f^{*}\) approximates the signed distance function over \(B\):_
\[\begin{split}\forall x\in\mathcal{X},&\mathcal{S}(x )=f^{*}(x)-m,\\ \forall z\in\text{supp }Q,&\mathcal{S}(z)=f^{*}(z)-m.\end{split} \tag{4}\]
_Moreover, for all \(x\in\text{supp }Q\cup\mathcal{X}\):_
\[\text{sign}(f^{*}(x))=\text{sign}(\mathcal{S}(x)).\]
Note that if \(m=\epsilon\ll 1\), then we have \(f^{*}(x)\approx\mathcal{S}(x)\). In this work, we parametrize \(f\) as a 1-Lipschitz neural network, as defined below, because they fulfil \(f\in\text{Lip}_{1}(\mathbb{R}^{d},\mathbb{R})\) by construction.
**Definition 2** (1-Lipschitz neural network (informal)): _A neural network with GroupSort activation functions and orthogonal transformations in its affine layers, parameterized as in (Anil et al., 2019)._
Details about the implementation can be found in Appendix D. Theorem 1 tells us that if we characterize the complementary distribution \(Q\), we can approximate the SDF with a 1-Lipschitz neural classifier trained with HKR loss. We now need to find the complementary distribution \(Q\).
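As a concrete reference, the sketch below evaluates an empirical version of the population risk of Equation 2 with numpy; the margin and regularization values are placeholders for illustration, not the paper's settings.

```python
import numpy as np

def hkr_loss(y, f_x, m=0.05, lam=100.0):
    """Hinge Kantorovich-Rubinstein loss of (Serrurier et al., 2021):
    L(y f(x)) = lam * max(0, m - y f(x)) - y f(x), labels y in {-1, +1}."""
    return lam * np.maximum(0.0, m - y * f_x) - y * f_x

# Empirical counterpart of the population risk in Eq. 2:
f_pos = np.array([0.10, 0.02])    # scores f(x), x ~ P_X (label +1)
f_neg = np.array([-0.20, 0.01])   # scores f(z), z ~ Q    (label -1)
risk = hkr_loss(+1.0, f_pos).mean() + hkr_loss(-1.0, f_neg).mean()
print(risk)
```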
### Finding the complementary distribution by targeting the boundary
We propose to seek \(Q\) through an alternating optimization process: at every iteration \(t\), a proposal distribution \(Q_{t}\) is used to train a 1-Lipschitz neural net classifier \(f_{t}\) against \(\mathbb{P}_{X}\) by minimizing the empirical HKR loss. Then, the proposal distribution is updated into \(Q_{t+1}\) based on the loss induced by \(f_{t}\), and the procedure is repeated.
We suggest starting from the uniform distribution: \(Q_{0}=\mathcal{U}(B)\). Observe that in high dimension, due to the _curse of dimensionality_, a sample \(z\sim Q_{0}\) is unlikely to satisfy \(z\in\mathcal{X}\). Indeed, the data lies on a low-dimensional manifold \(\mathcal{X}\) whose Lebesgue measure is negligible compared to that of \(B\). Hence, in the limit of small sample size \(n\ll\infty\), a sample \(Z_{n}\sim Q_{0}^{\otimes n}\) fulfills \(Z_{n}\overset{B,\epsilon}{\sim}\mathbb{P}_{X}\). This phenomenon is called the Concentration Phenomenon and has already been leveraged in anomaly detection in (Sipple, 2020). However, the _curse_ works both ways and yields a high variance in the samples \(Z_{n}\). Consequently, the associated minimizers \(f_{0}\) of Equation 3 will also exhibit a high variance, which may impede generalization and convergence speed. Instead, the distribution \(Q_{t}\) must be chosen to produce a higher density in the neighborhood of the boundary \(\partial\mathcal{X}\). The true boundary is unknown, but the level set \(\mathbb{L}_{t}=f_{t}^{-1}(\{-\epsilon\})\) of the classifier can be used as a proxy to improve the initial proposal \(Q_{0}\). We start from \(z_{0}\sim Q_{0}\), and then look for a displacement \(\delta\in\mathbb{R}^{d}\) such that \(z_{0}+\delta\in\mathbb{L}_{t}\). To this end, we take inspiration from the multidimensional Newton-Raphson method and consider a linearization of \(f_{t}\):
\[f_{t}(z_{0}+\delta)\approx f_{t}(z_{0})+\langle\nabla_{x}f_{t}(z_{0}),\delta\rangle. \tag{5}\]
Since 1-Lipschitz neural nets with GroupSort activation function are piecewise _affine_ (Tanielian and Biau, 2021), the linearization is locally exact, hence the following property.
**Property 1**.: _Let \(f_{t}\) be a 1-Lipschitz neural net with GroupSort activation function. For almost every \(z_{0}\in\mathbb{R}^{d}\), there exists \(\delta_{0}>0\) such that for every \(\|\delta\|\leq\delta_{0}\), we have:_
\[f_{t}(z_{0}+\delta)=f_{t}(z_{0})+\langle\nabla_{x}f_{t}(z_{0}),\delta\rangle. \tag{6}\]
Since \(f_{t}(z_{0}+\delta)\in\mathbb{L}_{t}\) translates into \(f_{t}(z_{0}+\delta)=-\epsilon\),
\[\delta=-\frac{f_{t}(z_{0})+\epsilon}{\|\nabla_{x}f_{t}(z_{0})\|^{2}}\nabla_{x}f _{t}(z_{0}). \tag{7}\]
Properties of \(\mathcal{L}^{\text{hkr}}_{m,\lambda}\) ensure that the optimal displacement follows the direction of the gradient \(\nabla_{x}f_{t}(z_{0})\), which coincides with the direction of an optimal transportation plan (Serrurier et al., 2021). The term \(\|\nabla_{x}f_{t}(z_{0})\|\) can be interpreted as a Local Lipschitz Constant (see (Jordan and Dimakis, 2020)) of \(f_{t}\) around \(z_{0}\), which fulfills \(\|\nabla_{x}f_{t}(z_{0})\|\leq 1\) when \(f_{t}\) is parametrized with a 1-Lipschitz neural net. When \(f_{t}\) is trained to perfection, the expression for \(\delta\) simplifies to \(\delta=-f_{t}(z_{0})\nabla_{x}f_{t}(z_{0})\) thanks to Property 2.
**Property 2** (Minimizers of \(\mathcal{L}^{\text{hkr}}_{m,\lambda}\) are Gradient Norm Preserving, from (Serrurier et al., 2021)).: _Let \(f_{t}^{*}\) be the solution of Equation 3. Then, for almost every \(z\in B\), we have \(\|\nabla_{x}f_{t}^{*}(z)\|=1\)._
In practice, the exact minimizer \(f_{t}^{*}\) is not always retrieved, but Equation 7 still applies to imperfectly fitted classifiers. The final sample \(z^{\prime}\sim Q_{t}\) is obtained by generating a sequence of \(T\) small steps to smooth the generation. The procedure is summarized in Algorithm 1. In practice, \(T\) can be chosen very low (below \(16\)) without significantly hurting the quality of generated samples. Finally, we pick a random "learning rate" \(\eta\sim\mathcal{U}([0,1])\) for each negative example in the batch to ensure they distribute evenly on the path toward the boundary. The procedure also benefits from Property 3, which ensures that the distribution \(Q_{t+1}\) obtained from \(Q_{t}\) across several iterative applications of Algorithm 1 still fulfils \(Q^{t+1}\stackrel{{ B,\epsilon}}{{\sim}}\mathbb{P}_{X}\). The proof is given in Appendix B.
**Property 3** (Complementary distributions are fixed points).: _Let \(Q^{t}\) be such that \(Q^{t}\stackrel{{ B,\epsilon}}{{\sim}}\mathbb{P}_{X}\). Assume that \(Q^{t+1}\) is obtained with Algorithm 2. Then we have \(Q^{t+1}\stackrel{{ B,\epsilon}}{{\sim}}\mathbb{P}_{X}\)._
```
Require: 1-Lipschitz neural net \(f_{t}\); number of steps \(T\)
Ensure: a sample \(z^{\prime}\sim Q_{t}(f)\)
1: sample a learning rate \(\eta\sim\mathcal{U}([0,1])\)
2: \(z_{0}\sim\mathcal{U}(B)\)   {Initial approximation.}
3: for step \(i=1\) to \(T\) do
4:   \(z_{i}\leftarrow z_{i-1}-\eta\,\frac{f_{t}(z_{i-1})+\epsilon}{\|\nabla_{x}f_{t}(z_{i-1})\|_{2}^{2}}\,\nabla_{x}f_{t}(z_{i-1})\)   {Refining step of Eq. 7.}
5:   \(z_{i}\leftarrow\Pi_{B}(z_{i})\)   {Stay in the feasible set.}
6: end for
7: return \(z_{T}\)
```
**Algorithm 1** Adapted Newton-Raphson for Complementary Distribution Generation
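To make the procedure concrete, the following numpy sketch runs Algorithm 1 on a toy analytic score, the SDF of the unit ball, whose gradient is known in closed form (so no automatic differentiation is needed). The box-shaped feasible set \(B=[-2,2]^{d}\), the function names, and the constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(z):                      # toy 1-Lipschitz score: SDF of the unit ball
    return 1.0 - np.linalg.norm(z)

def grad_f(z):                 # analytic gradient, of unit norm (Property 2)
    return -z / (np.linalg.norm(z) + 1e-12)

def sample_negative(T=16, eps=0.05, box=2.0, d=2):
    """One draw from Q_t via the adapted Newton-Raphson refinement."""
    eta = rng.uniform(0.0, 1.0)            # random "learning rate"
    z = rng.uniform(-box, box, size=d)     # z_0 ~ U(B)
    for _ in range(T):
        g = grad_f(z)
        z = z - eta * (f(z) + eps) * g / (np.dot(g, g) + 1e-12)  # Eq. 7
        z = np.clip(z, -box, box)          # projection Pi_B onto the box
    return z

print(sample_negative())   # lands near the level set f = -eps when eta ~ 1
```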
**Remark**.: _In high dimension \(d\gg 1\), when \(\|\nabla_{x}f_{t}(z)\|=1\) and \(\text{Vol}(B)\gg\text{Vol}(\mathcal{X})\), the samples obtained with Algorithm 1 are approximately uniformly distributed within the level sets of \(f_{t}\). This implies that the density of \(Q\) increases exponentially fast (with factor \(d\)) with respect to the value of \(-|f_{t}(\cdot)|\), which mitigates the adverse effects of the curse of dimensionality._
This scheme of "generating samples by following gradient in input space" reminds diffusion models (Ho et al., 2020), feature visualization tools (Olah et al., 2017), or recent advances in VAE (Kuzina et al., 2022). However, no elaborated scheme is required for the training of \(f_{t}\): 1-Lipschitz networks exhibit smooth and interpretable gradients (Serrurier et al., 2022) which allows sampling from \(\mathcal{X}\) "for free" as illustrated in figure 4.
**Remark**.: _A more precise characterization of the \(Q_{t}\) built with Algorithm 1 can be sketched as follows. Our approach bears some similarities with the Metropolis-adjusted Langevin algorithm (Grenander and Miller, 1994). In this method, samples of \(p(x)\) are generated by taking the stationary distribution \(x_{t\to\infty}\) of a continuous Markov chain obtained from the stochastic gradient step iterates_
\[x_{t+1}\gets x_{t}+\zeta\nabla_{x}\log p(x)+\sqrt{2\zeta}Z \tag{8}\]
_for some distribution \(p(x)\) and \(Z\sim\mathcal{N}(\mathbf{0},\mathbf{1})\). By choosing the level set \(\epsilon=0\) and \(p(x)\propto\mathds{1}\{f(x)\leq 0\}\exp{(-\eta f^{2}(x))}\), the score function \(\zeta\nabla_{x}\log p(x)\) is transformed into \(\nabla_{x}f(x)|f(x)|\) with \(\zeta=\frac{\eta}{T}\). Therefore, we see that the density decreases exponentially fast with the squared distance to the boundary \(\partial\mathcal{X}\) when there are enough steps \(T\gg 1\). In particular, when \(\mathcal{X}=\{0\}\) we recover \(p(x)\) as the pdf of a standard Gaussian \(\mathcal{N}(\mathbf{0},\mathbf{1})\). Although the similarity is not exact (e.g., the diffusion term \(\sqrt{2\zeta}Z\) is lacking, \(T\) is low, and \(\eta\sim\mathcal{U}([0,1])\) is a random variable), it provides interesting complementary insights on the algorithm._
### Alternating minimization for SDF learning
Each classifier \(f_{t}\) does not need to be trained from scratch. Instead, the same architecture is kept throughout training, and the algorithm produces a sequence of parameters \(\theta_{t}\) such that \(f_{t}=f_{\theta_{t}}\). Each set of parameters \(\theta_{t}\) is used as initialization for the next one, \(\theta_{t+1}\). Moreover, we only perform a low, fixed number of parameter updates for each \(t\), in a GAN fashion. The final procedure of OCSDF is shown in Figure 1 and detailed in Algorithm 3 of Appendix B.
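Schematically, the full procedure alternates negative resampling and a few HKR updates. The sketch below is a hypothetical skeleton: the callables stand in for the actual training components, and the real procedure is Algorithm 3 in Appendix B.

```python
def ocsdf_training_loop(f_theta, train_step, sample_negative, positives,
                        n_rounds=100, n_updates=8):
    """Alternating minimization: refresh the complementary samples with the
    current classifier, then take a few HKR gradient steps, GAN-style."""
    for _ in range(n_rounds):
        negatives = [sample_negative(f_theta) for _ in range(len(positives))]
        for _ in range(n_updates):
            train_step(f_theta, positives, negatives)   # minimize Eq. 2
    return f_theta
```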
## 4 Properties
### Certificates against adversarial attacks
The most prominent advantage of 1-Lipschitz neural nets is their ability to produce certificates against adversarial attacks (Szegedy et al., 2014). Indeed, by definition we have \(f(x+\delta)\in[f(x)-\|\delta\|,f(x)+\|\delta\|]\) for every example \(x\in\mathcal{X}\) and every adversarial attack \(\delta\in\mathbb{R}^{d}\). This allows bounding the changes in AUROC score of the classifier for every possible radius \(\epsilon>0\) of adversarial attacks.
**Proposition 1** (certifiable AUROC).: _Let \(F_{0}\) be the cumulative distribution function associated with the negative classifier's prediction (when \(f(x)\leq 0\)), and \(p_{1}\) the probability density function of the positive classifier's prediction (when \(f(x)>0\)). Then, for any attack of radius \(\epsilon>0\), the AUROC of the attacked classifier \(f_{\epsilon}\) can be bounded by_
\[\text{AUROC}_{\epsilon}(f)=\int_{-\infty}^{\infty}F_{0}(t)p_{1}(t-2\epsilon)dt. \tag{9}\]
The proof is left in Appendix C. The certified AUROC score can be computed analytically, without performing the attacks empirically, solely from the score predictions shifted by \(2\epsilon\). More importantly, the certificates hold against _any_ adversarial attack whose \(l_2\)-norm is bounded by \(\epsilon\), regardless of the algorithm used to perform such attacks. We emphasize that producing certificates is more challenging than traditional defence mechanisms (e.g., adversarial training, see (Bai et al., 2021) and references therein), since certificates do not target defence against a specific attack method.
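Operationally, the certificate can be evaluated by shifting the positive (normal) scores down by \(2\epsilon\) before computing the AUROC, which realizes the worst case where every normal score drops by \(\epsilon\) and every anomalous score rises by \(\epsilon\). The sketch below, with synthetic scores, is an illustration in that spirit, not the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def certified_auroc(scores_pos, scores_neg, eps):
    """Worst-case AUROC under any l2 attack of radius eps on a 1-Lipschitz
    score: each score moves by at most eps, so shifting the positive scores
    down by 2*eps realizes the worst case."""
    y = np.concatenate([np.ones_like(scores_pos), np.zeros_like(scores_neg)])
    s = np.concatenate([scores_pos - 2 * eps, scores_neg])
    return roc_auc_score(y, s)

rng = np.random.default_rng(0)
pos, neg = rng.normal(1.0, 0.3, 1000), rng.normal(-1.0, 0.3, 1000)
print(certified_auroc(pos, neg, eps=0.0), certified_auroc(pos, neg, eps=36/255))
```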
### Ranking normal and anomalous examples
Beyond the raw certifiable AUROC score, \(l_2\)-based Lipschitz robustness enjoys another desirable property: the farther from the boundary the adversary is, the larger the attack budget required to change the score. In practice, it means that an attacker can only dissimulate anomalies that are already close to being "normal". Aberrant and hugely anomalous examples will require a larger (possibly impracticable) attack budget. This ensures that the _normality score_ is actually meaningful: it is high when the point is central in its local cloud, and low when it is far away.
## 5 Experiments
In this section, we evaluate the performance and properties of OCSDF on tabular and image data. All the experiments are conducted with TensorFlow, and the 1-Lipschitz neural nets are implemented using the Deel-Lip library1.
Footnote 1: [https://github.com/deel-ai/deel-lip](https://github.com/deel-ai/deel-lip)
### Toy examples from Scikit-Learn
We use two-dimensional toy examples from the Scikit-Learn library (Pedregosa et al., 2011). Results are shown in figure 2. The contours of the decision function are plotted at a resolution of \(300\times 300\) pixels. The level sets of the classifier are compared against those of One-Class SVM (Scholkopf et al., 2001) and Isolation Forest (Liu et al., 2008). We also train a conventional network with Binary Cross Entropy against the complementary distribution \(Q_{t}\), and we show that it struggles to learn a meaningful decision boundary. Moreover, its Local Lipschitz Constant (Jordan and Dimakis, 2020) increases uncontrollably, as shown in Table 1, which makes it prone to adversarial attacks. Finally, there is no natural interpretation of the prediction of the conventional network in terms of distance: the magnitude \(|f(\cdot)|\) of its predictions quickly grows above \(1e3\), whereas for 1-Lipschitz neural nets it is approximately equal to the signed distance function \(\mathcal{S}\). We refer to Appendix E for visualizations.
### Anomaly Detection on Tabular datasets
We tested our algorithm on some of the most prominent anomaly detection benchmarks of the ODDS library (Rayana, 2016). In this unsupervised setting (as in ADBench (Han et al., 2022)), all the examples (normal examples and anomalies) are seen during training, but their true label is unknown. To apply our method, the only hyperparameter needed is the margin \(m\), which we select in the range \([0.01,0.05,0.2,1.]\). For each value, the results are averaged over \(20\) independent train/test splits. Following ADBench guidelines and best practices from the AD community, we only compute the AUROC, since this metric is symmetric under label flip. We report the best average in Table 2 along with baselines from ADBench (Han et al., 2022). As observed by (Han et al., 2022), none of the algorithms clearly dominates the others, because what is considered an anomaly (or not) depends on the context. Among 14 other methods tested, our algorithm ranks \(7.1\pm 3.6/15\), while the best (Isolation Forests) ranks \(4.5\pm 3.2/15\). The experiment shows that our algorithm is competitive with respect to other broadly used baselines. Nonetheless, it brings several additional advantages. First, our algorithm can be seen as a parametric version of kNN for the Euclidean distance, which leverages deep learning to avoid the costly construction of structures like a KD-tree (Maneewongvatana and Mount, 1999) and the quadratic cost of nearest neighbor search, thereby enabling its application in high dimensions. Second, it provides robustness certificates. We illustrate those two advantages more thoroughly in the next section.
### One Class learning on images
We evaluate the performance of OCSDF for OCC, where only samples of the normal class are supposed to be available. To emulate this setting, we train a classifier on each of the classes of MNIST and Cifar10 and evaluate it on an independent test set in a _one-versus-all_ fashion. We compare our method against DeepSVDD (Ruff et al., 2018), OC-SVM (Scholkopf et al., 1999), and Isolation Forests (Liu et al., 2008). Details on the implementation of the baselines can be found in Appendix G. The mean AUROC score, averaged over \(20\) runs, is reported in Table 3. It is computed between the \(1,000\) test examples of the target class and the remaining \(9,000\) examples from other classes of the train set (both unseen during training). The histograms of normality scores are given in Appendix G. OCSDF is competitive with respect to the other baselines. In addition, it comes with several advantages, described in the following.
#### 5.3.1 Certifiable and empirical robustness
For each class, we compute the certified AUROC score of each digit for \(l_2\) attacks of radii \(\epsilon\in\{8/255,16/255,36/255\}\). None of the concurrent methods can provide certificates against \(l_2\) attacks: in the work of (Goyal et al., 2020) the attacks are performed empirically (no certificates) with \(l_\infty\) radii. In Table 3, we report our certifiable AUROC for various radii \(\epsilon\in\{0,8/255,16/255,36/255,72/255\}\).
| | One Cloud | Two Clouds | Two Blobs | Blob Cloud | Two Moons |
| --- | --- | --- | --- | --- | --- |
| Local Lipschitz Constant | 26.66 | 122.84 | 1421.41 | 53.90 | 258.73 |

Table 1: Lower bound on the Local Lipschitz Constant (LLC) of the conventional network after \(10,000\) training steps for each toy example. It is the maximum of \(\|\nabla_{x_{i}}f(x_{i})\|\) over the train set.
Figure 2: Contour plots of our method with 1-Lipschitz (LIP) network and \(\mathcal{L}_{m,\lambda}^{\text{hkr}}\) (HKR) loss on toy examples of Scikit-learn.
| Dataset | \(d\) | #no.+an. | perc. | OCSDF | DeepSVDD | OC-SVM | IF | PCA | kNN | SOTA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Average Rank | | | | \(7.1\pm 3.6\) | 11.2 | 7.5 | 4.5 | 5.7 | 7.8 | \(4.5\pm 3.2\) (IF) |
| breastw | 9 | 444+239 | \(35\%\) | (#10) \(82.6\pm 5.9\) | 65.7 | 80.3 | 98.3 | 95.1 | 97.0 | 99.7 (COPOD) |
| cardio | 21 | 1,655+176 | \(9.6\%\) | (#2) \(95.0\pm 0.1\) | 59.0 | 93.9 | 93.2 | 95.5 | 76.6 | 95.5 (PCA) |
| glass | 9 | 205+9 | \(4.2\%\) | (#7) \(73.9\pm 4.1\) | 47.5 | 35.4 | 77.1 | 66.3 | 82.3 | 82.9 (CBLOF) |
| http (KDDCup99) | 3 | 565,287+2,211 | \(0.4\%\) | (#11) \(67.5\pm 37\) | 69.0 | 99.6 | 99.6 | 99.7 | 03.4 | 99.96 (IF) |
| Ionosphere | 33 | 225+126 | \(36\%\) | (#7) \(80.2\pm 0.1\) | 50.9 | 75.9 | 84.5 | 79.2 | 88.3 | 90.7 (CBLOF) |
| Lymphography | 18 | 142+6 | \(4.1\%\) | (#8) \(96.1\pm 4.9\) | 32.3 | 99.5 | 99.8 | 99.8 | 55.9 | 99.8 (CBLOF) |
| mammography | 6 | 10,923+260 | \(2.32\%\) | (#6) \(86.0\pm 2.5\) | 57.0 | 84.9 | 86.4 | 88.7 | 84.5 | 90.7 (ECOD) |
| musk | 166 | 2,965+97 | \(3.2\%\) | (#8) \(92.6\pm 20\) | 43.4 | 80.6 | 99.99 | 100.0 | 69.9 | 100.0 (PCA) |
| Optdigits | 64 | 5,066+150 | \(3\%\) | (#12) \(51.0\pm 0.9\) | 38.9 | 54.0 | 70.9 | 51.7 | 41.7 | 87.5 (CBLOF) |
| Pima | 8 | 500+268 | \(35\%\) | (#12) \(60.7\pm 1.0\) | 51.0 | 66.9 | 72.9 | 70.8 | 73.4 | 73.4 (kNN) |
| satimage-2 | 36 | 5,732+71 | \(1.2\%\) | (#3) \(97.9\pm 0.4\) | 53.1 | 97.3 | 99.2 | 97.6 | 92.6 | 99.8 (CBLOF) |
| Shuttle | 9 | 45,586+3,511 | \(7\%\) | (#4) \(99.1\pm 0.3\) | 52.1 | 97.4 | 99.6 | 98.6 | 69.6 | 99.6 (IF) |
| smtp (KDDCup99) | 3 | 95,126+30 | \(0.03\%\) | (#4) \(87.1\pm 3.5\) | 78.2 | 80.7 | 89.7 | 88.4 | 89.6 | 89.7 (IF) |
| speech | 400 | 3,625+61 | \(1.65\%\) | (#15) \(46.0\pm 0.2\) | 53.4 | 50.2 | 50.7 | 50.8 | 51.0 | 56.0 (COF) |
| thyroid | 6 | 3,679+93 | \(2.5\%\) | (#5) \(95.9\pm 0.0\) | 49.6 | 87.9 | 98.3 | 96.3 | 95.9 | 98.3 (IF) |
| vertebral | 6 | 210+30 | \(12.5\%\) | (#4) \(48.6\pm 2.6\) | 36.7 | 38.0 | 36.7 | 37.0 | 33.8 | 53.2 (DAGMM) |
| vowels | 12 | 1,406+50 | \(3.4\%\) | (#2) \(94.7\pm 0.7\) | 52.5 | 61.6 | 73.9 | 65.3 | 97.3 | 97.3 (kNN) |
| WBC | 30 | 357+21 | \(5.6\%\) | (#10) \(93.6\pm 0.1\) | 55.5 | 99.0 | 99.0 | 98.2 | 90.6 | 99.5 (CBLOF) |
| Wine | 13 | 119+10 | \(7.7\%\) | (#5) \(81.5\pm 0.9\) | 59.5 | 73.1 | 80.4 | 84.4 | 45.0 | 91.4 (HBOS) |

Table 2: AUROC score for tabular data, averaged over \(20\) runs. The dimension of the dataset is denoted by \(d\). In the **Anomaly Detection protocol (AD)** we use all the data (normal class and anomalies) for training, in an unsupervised fashion. The "#no.+an." column indicates the numbers of normal (no.) and anomalous (an.) samples used during training. SOTA denotes the best score ever reported on the dataset, obtained by crawling the relevant literature or the ADBench (Han et al., 2022) results (table D4, page 37). We report the rank as (#rank) among 14 other methods.
| MNIST Certificates | OCSDF \(\epsilon=0\) | OCSDF \(\epsilon=8/255\) | OCSDF \(\epsilon=16/255\) | OCSDF \(\epsilon=36/255\) | OCSDF \(\epsilon=72/255\) | OC-SVM \(\epsilon=0\) | DeepSVDD \(\epsilon=0\) | IF \(\epsilon=0\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mAUROC | \(\mathbf{95.5\pm 0.4}\) | \(93.2\pm 2.1\) | \(89.9\pm 3.5\) | \(78.4\pm 6.4\) | \(57.5\pm 7.5\) | \(91.3\pm 0.0\) | \(\mathbf{94.8\pm 0.9}\) | \(92.3\pm 0.5\) |
| digit 0 | \(\mathbf{99.7\pm 0.1}\) | \(99.6\pm 0.2\) | \(99.5\pm 0.2\) | \(99.0\pm 0.6\) | \(96.2\pm 3.0\) | \(98.6\pm 0.0\) | \(98.0\pm 0.7\) | \(98.0\pm 0.3\) |
| digit 1 | \(\mathbf{99.8\pm 0.0}\) | \(99.7\pm 0.0\) | \(99.6\pm 0.1\) | \(99.2\pm 0.3\) | \(96.2\pm\) | | | |
In figure 3, we report the empirical AUROC against \(l_2\)-PGD attacks with three random restarts, using stepsize \(\zeta=0.25\,\epsilon\) as in Foolbox (Rauber et al., 2020). These results illustrate our method's benefits: not only does it come with robustness certificates that are verified empirically, but its empirical robustness is also far better than that of DeepSVDD, especially on Cifar10.
#### 5.3.2 Visualization of the support
OCSDF can be seen as a parametric version of kNN, which enables this approach in high dimensions. As a result, the decision boundary learned by the classifier can be materialized by generating adversarial examples with Algorithm 1. The forward computation graph is a classifier based on optimal transport, and the backward computation graph is an image generator. Indeed, the back-propagation through a convolution is a transposed convolution, a popular layer in the generator of GANs. Overall, the algorithm behaves like a WGAN (Arjovsky et al., 2017) with a single network fulfilling both roles. This unexpected feature opens a path to the explainability of the One-Class classifier: the learned support can be visualized without complex feature visualization tools. In particular, it helps identify failure modes.
#### 5.3.3 Limitations
We also tested our algorithm on the Cats versus Dogs dataset by training on Cats as the one class and using Dogs as OOD examples. On this high-dimensional dataset, the AUROC barely exceeds \(55.0\%\). This suggests that the SDF is relevant for tabular and simple image datasets (e.g. MNIST) but fails to generalize in a meaningful way in higher dimensions. This is expected: the Euclidean norm is notoriously a poor measure of similarity in pixel space. A more thorough discussion of the limitations is left to Appendix H.
## 6 OCSDF for implicit shape parametrization
Our approach to learning the SDF contrasts with the computer graphics literature, where the SDF is used to obtain the distance of a point to a surface (here defined as \(\partial\mathcal{X}\)). Indeed, SDFs are usually learned in a supervised fashion, requiring ground-truth \(l_2\) distances. These are classically obtained with nearest-neighbor algorithms, which can be cumbersome, especially when the number of points is high. Efficient data structures (e.g., KD-trees (Maneewongvatana and Mount, 1999) or Octrees (Meagher, 1980)) can mitigate this effect but do not scale well to high dimensions. Instead, OCSDF learns the SDF solely from points contained in the support.
To illustrate this, we use models from Princeton's ModelNet10 dataset (Wu et al., 2015). We sample \(n=2048\) points within each mesh to obtain a 3D point cloud, and fit the SDF on the point cloud with the same hyperparameters as in the tabular experiment. We use the Lewiner marching cubes algorithm (Lewiner et al., 2003) from scikit-image (Van der Walt et al., 2014) on a \(200\times 200\times 200\) voxelization of the input, and plot the mesh reconstructed with Trimesh (Dawson-Haggerty et al., 2019). The results are highlighted in figure 5. We chose the first percentile of the scores \(f(x)\), \(x\sim\mathbb{P}_{X}\), as the level set of the iso-surface we plot. We compare our results visually against a baseline from (Boltcheva and Levy, 2017) implemented in Graphite (Levy, 2022) that rebuilds the mesh solely from the point cloud (without extrapolation). We highlight that \(n=2048\) is considered a low resolution; hence many details are expected to be lost. Nonetheless, our method recovers the global aspect of the shape more faithfully, smoothly, and consistently than the baseline.
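A sketch of this reconstruction pipeline, under the assumption that the learned score \(f\) can be evaluated on a voxel grid (the grid bounds, resolution, and names are illustrative; scikit-image and trimesh are assumed installed):

```python
import numpy as np
from skimage.measure import marching_cubes
import trimesh

def reconstruct_mesh(f, level=0.0, res=64, box=1.5):
    """Evaluate f on a res^3 voxel grid, extract the iso-surface
    {x : f(x) = level} with (Lewiner) marching cubes, wrap it in Trimesh."""
    axis = np.linspace(-box, box, res)
    X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
    grid = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    volume = f(grid).reshape(res, res, res)
    verts, faces, _, _ = marching_cubes(volume, level=level)
    return trimesh.Trimesh(vertices=verts, faces=faces)

# Sanity check on the toy ball SDF: reconstructs a (voxelized) unit sphere.
mesh = reconstruct_mesh(lambda p: 1.0 - np.linalg.norm(p, axis=-1))
```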
## 7 Conclusion
We proposed a new approach to One-Class Classification, OCSDF, based on the parametrization of the Signed Distance Function to the boundary of the known distribution using a 1-Lipschitz neural net. OCSDF comes with robustness guarantees and naturally tackles the lack of negative data inherent to OCC. Finally, this new method extends OCC beyond Out-of-Distribution detection and allows
Figure 4: Examples from Algorithm 1 with \(T=64\) and \(\eta=1\).
Figure 5: Visualization of the SDF (3rd row) from sparse point clouds of size \(2048\) (2nd row) sampled from ground truth meshes (1st row) with _Trimesh_ library, against a baseline from _Graphite_ tool that attempts to reproduce the meshes solely from a point cloud (4th row). The SDF exhibits better extrapolation properties and provides smooth surfaces.
Figure 3: Empirical Mean AUROC on all classes against adversarial attacks of various radii in One Class setting, using default parameters of FoolBox (Rauber et al., 2020).
applying it to surface parametrization from point clouds and generative visualization.
|
2303.10875 | Hardware-Aware Graph Neural Network Automated Design for Edge Computing
Platforms | Graph neural networks (GNNs) have emerged as a popular strategy for handling
non-Euclidean data due to their state-of-the-art performance. However, most of
the current GNN model designs mainly focus on task accuracy, lacking in
considering hardware resources limitation and real-time requirements of edge
application scenarios. Comprehensive profiling of typical GNN models indicates
that their execution characteristics are significantly affected across
different computing platforms, which demands hardware awareness for efficient
GNN designs. In this work, HGNAS is proposed as the first Hardware-aware Graph
Neural Architecture Search framework targeting resource constraint edge
devices. By decoupling the GNN paradigm, HGNAS constructs a fine-grained design
space and leverages an efficient multi-stage search strategy to explore optimal
architectures within a few GPU hours. Moreover, HGNAS achieves hardware
awareness during the GNN architecture design by leveraging a hardware
performance predictor, which could balance the GNN model accuracy and
efficiency corresponding to the characteristics of targeted devices.
Experimental results show that HGNAS can achieve about $10.6\times$ speedup and
$88.2\%$ peak memory reduction with a negligible accuracy loss compared to
DGCNN on various edge devices, including Nvidia RTX3080, Jetson TX2, Intel
i7-8700K and Raspberry Pi 3B+. | Ao Zhou, Jianlei Yang, Yingjie Qi, Yumeng Shi, Tong Qiao, Weisheng Zhao, Chunming Hu | 2023-03-20T05:18:31Z | http://arxiv.org/abs/2303.10875v2 | # Hardware-Aware Graph Neural Network Automated Design for Edge Computing Platforms
###### Abstract
Graph neural networks (GNNs) have emerged as a popular strategy for handling non-Euclidean data due to their state-of-the-art performance. However, most current GNN model designs mainly focus on task accuracy, while paying little attention to the hardware resource limitations and real-time requirements of edge application scenarios. Comprehensive profiling of typical GNN models indicates that their execution characteristics vary significantly across computing platforms, which demands hardware awareness for efficient GNN designs. In this work, HGNAS is proposed as the first Hardware-aware Graph Neural Architecture Search framework targeting resource-constrained edge devices. By decoupling the GNN paradigm, HGNAS constructs a fine-grained design space and leverages an efficient multi-stage search strategy to explore optimal architectures within a few GPU hours. Moreover, HGNAS achieves hardware awareness during the GNN architecture design by leveraging a hardware performance predictor, which balances GNN model accuracy and efficiency according to the characteristics of targeted devices. Experimental results show that HGNAS can achieve about \(10.6\times\) speedup and \(88.2\%\) peak memory reduction with a negligible accuracy loss compared to DGCNN on various edge devices, including the Nvidia RTX3080, Jetson TX2, Intel i7-8700K and Raspberry Pi 3B+.
Hardware-Aware, Graph Neural Network, Neural Architecture Search, Edge Devices†
Footnote †: This work is supported by National Natural Science Foundation of China (Grant No. 62072019). Corresponding authors are Jianlei Yang and Chunming Hu, Email: [email protected], [email protected].
## I Introduction
Graph neural networks (GNNs) have achieved state-of-the-art performance in various graph representation scenarios, including node classification [1], link prediction [2], recommendation [3], and 3D representation learning on point clouds [4]. Due to their powerful feature extraction capabilities on topological structures, GNNs such as DGCNN [5] have become a popular strategy for handling point cloud data. Moreover, with the rising popularity of 3D scanning sensors in edge devices such as mobile phones and unmanned aerial vehicles, it is natural to investigate how to design efficient GNN models for edge applications.
The most challenging issue of GNNs is their hunger for hardware resources. Especially on resource-constrained edge devices, GNNs usually exhibit low hardware efficiency and _Out-Of-Memory_ (OOM) problems, limiting their potential to handle real-world edge applications. For instance, given a frame of point cloud data, DGCNN performs KNN operations in each GNN layer for graph construction, resulting in poor hardware efficiency on edge platforms. As shown in Fig. 1, the examined GNN's inference latency and peak memory usage for point cloud classification increase rapidly as the number of processed points grows. Specifically, for the default setting with \(1024\) points, DGCNN [5] needs more than **4 seconds** to handle a single frame on a Raspberry Pi (i.e., about \(1/4\) fps, frames per second), which is intolerable for real-time applications. In addition, processing graphs with more than \(1536\) points causes OOM problems during DGCNN inference. Therefore, deploying GNNs on edge devices is extremely challenging under resource constraints.
Several handcrafted approaches have been proposed to tackle the prohibitive inference cost of GNNs in point cloud processing [6, 7, 8]. Even though they achieve notable speedups by simplifying the GNN computations, such manual optimization is difficult to adapt to different computing platforms, given the huge design space and varied hardware characteristics [7]. As a very promising alternative, hardware-aware neural architecture search (NAS) can explore optimal architectures automatically according to given objectives, independent of human experience and manual labor. Leveraging this approach, several works have made encouraging progress in developing efficient DNN models on edge devices [9, 10]. More specifically, we aim to exploit hardware-aware NAS for efficient GNNs in point cloud applications on edge platforms.
Although several NAS frameworks have been introduced for GNNs [11, 12], most of them rarely consider the hardware constraints and latency requirements of real-world edge applications. In this paper, HGNAS is proposed as an efficient hardware-aware graph neural architecture search framework for point cloud applications on edge computing platforms. Such hardware awareness is achieved by integrating hardware efficiency as a partial objective when performing the single-path one-shot evolutionary search. Given the targeted edge devices, HGNAS automatically explores optimized graph neural architectures by balancing accuracy and efficiency under hardware constraints (i.e., inference latency, model size, etc.). One straightforward approach to achieve hardware awareness is to deploy the generated architecture candidates on the targeted devices and feed back the measured data during exploration. However, the tremendous amount of real-time hardware measurement required leads to a very inefficient exploration process, since the cost of edge inference and communication is often unbearable. As such, a more elegant way to evaluate GNN hardware efficiency for different platforms is warranted. In addition, the redundant operations within the layer-wise GNN design space and lengthy search times also hinder
Fig. 1: Comparison between our approach and DGCNN. Inference latency and peak memory usage on Raspberry Pi are illustrated on the left. Inference speed and memory efficiency improvement across different edge devices are illustrated on the right.
the efficient GNN exploration process.
To address these challenges, our proposed HGNAS framework leverages novel hardware-aware techniques to procure both desirable GNN performance and search efficiency. We observe that GNN architectures are themselves graphs that can be well represented by GNNs. We put this concept to good use by drawing on the idea of _using GNNs to perceive GNNs_. Therefore, HGNAS can effectively perceive the latency of GNN candidates through a well-polished GNN-based predictor. Furthermore, we develop a fine-grained design space composed of basic operations to unleash the potential of GNN computations. HGNAS also adopts an efficient multi-stage hierarchical search strategy by dividing the GNN design space, reducing the search time to a few GPU hours. As shown in Fig. 1, HGNAS proves superior in both latency and peak memory usage across various edge devices. Specifically, HGNAS on the resource-constrained Jetson TX2 can achieve the same level of inference latency as DGCNN on a powerful RTX3080, providing a \(47\times\) (i.e., \(350\)W vs. \(7.5\)W) power efficiency improvement without accuracy loss. In summary, the contributions of this work are listed as follows:
* **Framework.** To the best of our knowledge, HGNAS is the first NAS framework to perform efficient graph neural architecture search for resource-constrained edge devices. HGNAS can automatically explore GNN models with multiple objectives (accuracy, latency, etc.) for targeted platforms.
* **Hardware awareness.** To the best of our knowledge, HGNAS is also the first work to achieve hardware performance awareness for GNNs across edge devices. The proposed GNN-based predictor can perceive the latency of a given GNN architecture on the targeted platform in milliseconds.
* **Evaluation.** We have conducted extensive experiments on point cloud classification tasks. By deploying HGNAS-designed models on various edge devices, we achieve about \(10.6\times\) speedup and \(88.2\%\) peak memory reduction with a negligible accuracy loss.
## II Related Works and Motivation
In combination with previous related works, we introduce several important observations that motivate the exploration of efficient GNNs for edge computing platforms.
**Observation 1**: Redundant operations bring significant overhead.
Generally, GNN computations follow the _message passing_ (MP) paradigm, which consists of _graph sampling_, _aggregation_, and _combination_, as shown in Fig. 2(a). By reusing operation results across GNN layers, we observe that redundancy may exist within the MP paradigm. As illustrated in Fig. 2(b), reusing the sampled results between GNN layers brings only negligible accuracy loss while providing a considerable boost to computational efficiency. This indicates that redundant operations usually bring significant overhead, which is one of the major obstacles in optimizing computational efficiency.
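As an illustration of the reuse idea, the sketch below computes the KNN graph once from the input coordinates and shares it across layers, instead of re-running sampling on intermediate features at every layer; `edge_conv` is a hypothetical layer, shown only in comments.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_graph(points, k=20):
    """Indices of the k nearest neighbors of each point (excluding itself)."""
    _, idx = cKDTree(points).query(points, k=k + 1)
    return idx[:, 1:]

points = np.random.rand(1024, 3)
idx = knn_graph(points)                     # sampled once from coordinates...
# feats_1 = edge_conv(feats_0, idx)         # ...and reused by every layer,
# feats_2 = edge_conv(feats_1, idx)         # skipping per-layer re-sampling
```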
**Motivation 1**: Decoupling the MP paradigm by leveraging a fine-grained GNN design space.
The above results demonstrate that building GNNs by stacking generic GNN layers will inevitably introduce redundant operations. The most straightforward approach to this problem is to optimize GNN models through large amounts of ablation studies and analysis. For example, some redundant sampling operations are eliminated from DGCNN to achieve better efficiency [6]. However, such hand-crafted approaches heavily rely on design experience and are not reproducible across different tasks. Inspired by the success of designing scalable GNN models by decoupling the GNN paradigm [14, 15], we decouple GNN layers into operations to build a fine-grained design space in an operation-wise manner. With the freedom offered by the fine-grained design space, various configurations (i.e., aggregation range, operation order, etc.) can be generated by learning rather than manual labor.
**Observation 2**: Exploring fine-grained design space is costly.
Compared with a layer-wise design space, the fine-grained design space takes operations as candidate options instead of layers, greatly expanding the scope of exploration. For example, the backbone of DGCNN consists of four GNN layers, and each layer includes three basic operations, i.e., _sample_, _aggregate_, and _combine_. Therefore, to cover most DGCNN architectures, the fine-grained design space must contain at least \(12\) positions, \(3\) candidate operations, and \(N\) functions for each operation (details in Sec. III-B). As a result, there are a staggering \((3N)^{12}\) options to explore in the design space, which significantly increases the exploration complexity.
**Motivation 2**: Speeding up exploration through search space simplification.
The exploration complexity demands a very efficient search strategy to navigate through the huge number of options. Intuitively, some studies [16] have proposed to shrink the design space for better search efficiency. Unfortunately, this boost in efficiency comes at the price of potentially discarding some valuable choices. In reality, the contributions of different layers towards the overall accuracy of a model may vary greatly [17]. The representational power of the front layers usually contributes more to the whole GNN [7]. Hence, the authors of [7] simplify the latter parts of DGCNN to obtain better model performance. All of this inspires us to improve exploration efficiency by dividing the search space and guiding the search tendencies at different positions.
**Observation 3**: The same GNN model may behave differently on various computing platforms.
Detailed execution time breakdowns of DGCNN across different platforms are illustrated in Fig. 3. These results are obtained with the PyTorch Profiler [18]. For the Nvidia RTX3080 and Jetson TX2, the _sample_ operation occupies the majority of the execution time. This is because GPUs are good at handling compute-intensive matrix operations, but not at memory-intensive graph sampling operations. For the Intel i7-8700K, _aggregate_ and _sample_ take up most of the execution time, which is caused by a massive number of irregular memory accesses. As the results indicate, the execution of DGCNN on the above platforms is I/O-bound. However, on the Raspberry Pi, due to the limited resources, all three phases occupy relatively large proportions of the execution time, which makes the execution process also compute-bound. These results demonstrate that the same GNN model may exhibit different execution characteristics across computing platforms.
Fig. 3: Execution time breakdown of DGCNN across different platforms.
Fig. 2: (a) Typical GNN Pipeline with MP paradigm. (b) Accuracy and latency comparison when performing sampled results reuse among different DGCNN layers on ModelNet40 [13] dataset and RTX3080 platform.
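A breakdown of this kind can be collected with the PyTorch Profiler; the helper below is a generic sketch (the model and input are placeholders), restricted to CPU activities so that it runs on any of the platforms above.

```python
import torch
from torch.profiler import profile, ProfilerActivity

def profile_inference(model: torch.nn.Module, points: torch.Tensor):
    """Per-operator time breakdown of one inference pass, as used to
    attribute time to the sample/aggregate/combine phases."""
    with profile(activities=[ProfilerActivity.CPU]) as prof:
        with torch.no_grad():
            model(points)
    print(prof.key_averages().table(sort_by="self_cpu_time_total",
                                    row_limit=10))
```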
**Motivation 3**: Perceiving the hardware sensitivities of GNNs across different computing platforms.
Due to differences in hardware architectures and available resources, GNNs that are computationally efficient on GPUs may not be efficient on other platforms. Moreover, the hybrid execution mode of GNNs, which consists of both memory-intensive and computation-intensive operations, poses great challenges for effectively perceiving GNN hardware sensitivities [19]. Some works attempt to improve hardware efficiency via analytical estimation for hardware/algorithm co-design [20]. However, these approaches introduce a noticeable accuracy drop and are difficult to extend to other platforms [21]. Although real-time measurement can truly capture hardware behaviors, its tremendous overhead is intolerable for efficient exploration. Therefore, an efficient and scalable hardware-aware approach is highly desirable for GNN exploration. Inspired by [9], a GNN-based end-to-end hardware performance predictor is integrated to efficiently and accurately perceive GNN hardware performance across various platforms.
## III HGNAS Framework
### _Problem Definition and HGNAS Overview_
In this paper, we aim to co-optimize the accuracy and hardware efficiency of GNNs on edge devices. Given a target edge device \(\mathcal{H}\) and hardware constraints \(\mathcal{C}\), the multi-objective optimization process of HGNAS can be formulated as:
\[\begin{split}&\arg\max_{\mathcal{A}}\ \left(\alpha\cdot acc_{val}\left(\mathcal{W}^{*},\mathcal{A}\right)-\beta\cdot lat\left(\mathcal{A},\mathcal{H}\right)\right),\\ s.t.\quad&\mathcal{W}^{*}=\arg\max_{\mathcal{W}}\,acc_{train}(\mathcal{W},\mathcal{A}),\\ &lat(\mathcal{A},\mathcal{H})<\mathcal{C}\end{split}\tag{1}\]
where \(\mathcal{W}\) denotes the model weights, \(acc_{train}\) the training accuracy, \(acc_{val}\) the validation accuracy, \(lat\) the inference latency on the targeted platform \(\mathcal{H}\), and \(\mathcal{A}\) the GNN architecture candidate; \(\alpha\) and \(\beta\) are scaling factors used to adjust the trade-off between accuracy and latency.
Fig. 4 shows an overview of our HGNAS framework. Given a specific task, a target device, hardware constraints, and optimization metrics, HGNAS first generates a _fine-grained operation-based design space_ consisting of a _Function Space_ and an _Operation Space_. HGNAS then constructs a supernet covering the GNN design space to adopt a one-shot exploration approach. Afterward, HGNAS explores the hierarchical design spaces based on the proposed _multi-stage hierarchical search strategy_. During exploration, the evaluation result of each candidate architecture is determined by both its accuracy on the validation dataset and its hardware performance on the target device. The hardware performance of GNNs is provided by the _GNN hardware performance predictor_ integrated into HGNAS. In the subsequent sections, we detail the three main components of HGNAS.
### _Fine-grained Operation-based Design Space_
**The GNN supernet.** To lift the restrictions of the traditional GNN design space, instead of presetting the number of GNN layers, HGNAS builds the design space upon positions for GNN operations to achieve more flexibility. Specifically, for each position in the supernet, there are four basic operations, including _connect_, _aggregate_, _combine_, and _sample_, each containing specific attributes. Aside from the operations derived from the MP paradigm, the _connect_ operation, covering direct connection and skip-connection, provides more freedom in GNN model construction. For point cloud processing applications, the message type attribute of the aggregate operation specifies the construction method of the messages to be aggregated. As shown in Fig. 4, the supernet is organized by stacking all the positions together. In practice, supernet training demands that the operations within each position produce hidden representations of the same dimension. HGNAS appends linear transformations to operations incapable of altering hidden dimensions, such as _sample_ and _aggregate_, to ensure dimension alignment among operations. These linear transformations are discarded in the finalized architecture to avoid introducing additional overhead.
**The hierarchical design space.** HGNAS detaches attributes from operations to construct an independent _Function Space_, further decoupling the fine-grained design space. The remainder of the design space, made up of operation types, is then used to assemble the _Operation Space_. These two sub-spaces can be explored separately by leveraging the proposed multi-stage hierarchical search strategy (see Sec. III-C) to reduce exploration complexity. All candidate operations and functions are listed in Tab. I.
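For illustration, a decoupled design space of this kind could be encoded as follows; the candidate lists are placeholders standing in for Tab. I, which is not reproduced here.

```python
import random

# Illustrative candidates only: each supernet position picks one operation
# type (Operation Space) and one attribute/function for it (Function Space).
OPERATIONS = ["connect", "sample", "aggregate", "combine"]
FUNCTIONS = {
    "connect":   ["direct", "skip"],
    "sample":    ["knn"],
    "aggregate": ["max", "sum", "mean"],
    "combine":   ["mlp"],
}

def random_architecture(n_positions=12):
    """Sample one sub-network from the operation-wise design space."""
    arch = []
    for _ in range(n_positions):
        op = random.choice(OPERATIONS)
        arch.append((op, random.choice(FUNCTIONS[op])))
    return arch

print(random_architecture())
```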
### _Multi-stage Hierarchical Search Strategy_
HGNAS divides the search process into two stages, corresponding to the _Function Space_ and the _Operation Space_, in order to reduce the exploration complexity of the fine-grained design space, as discussed in **Observation 2**. Inspired by [22], the search strategy in HGNAS is based on an evolutionary algorithm (EA), as illustrated in Alg. 1. In particular, HGNAS first searches for a set of functions for the GNN supernet in the function space. After the optimal function set is determined, HGNAS trains the GNN supernet and performs a multi-objective operation search over all positions. Finally, the algorithm outputs the top-performing model within the design space.
Fig. 4: Overview of the HGNAS framework. Oper. and Func. denote Operation and Function, respectively.
During supernet training, a random sub-network is generated by sampling an operation for each position in the GNN supernet, and its weights \(\mathcal{W}\) are updated via backpropagation. Details of the multi-stage search strategy are presented as follows.
**Stage 1: Function Search.** In this stage, HGNAS aims to find a function setting that maximizes the supernet accuracy. To further improve exploration efficiency, HGNAS divides the \(N\) positions in the GNN supernet into two halves, sharing one set of functions among the _Upper_ half (\(0,...,N/2\)) and another set among the _Lower_ half (\(N/2+1,...,N\)). For a supernet with \(12\) positions, sharing functions among positions in the decoupled design space reduces the number of exploration candidates from \(\mathbf{4.2\times 10^{12}}\) to \(\mathbf{1.7\times 10^{7}}\). Finally, an optimal function set \(\mathbb{F}\) is determined for initializing the supernet \(\mathcal{N}_{super}\). Note that fixing \(\mathbb{F}\) in this stage significantly reduces the complexity of the subsequent operation search. A minimal sketch of this sharing scheme follows.
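To make the sharing scheme concrete, the following minimal Python sketch samples one shared function per half of the supernet; the candidate function names are illustrative placeholders, not the actual entries of Tab. I.

```python
import random

CANDIDATE_FUNCS = ["relu", "tanh", "sigmoid"]  # illustrative subset, not Tab. I
N = 12                                         # positions in the GNN supernet

def sample_shared_functions():
    # One function is shared by the Upper half (positions 0..N/2-1) and one by
    # the Lower half (N/2..N-1), so a Stage-1 candidate is described by only
    # two choices instead of N independent ones.
    upper = random.choice(CANDIDATE_FUNCS)
    lower = random.choice(CANDIDATE_FUNCS)
    return [upper] * (N // 2) + [lower] * (N // 2)

print(sample_shared_functions())
```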
**Stage 2: Operation Search.** By pre-training the supernet and performing EA-based searches, HGNAS obtains the set of operations that maximizes the objective function value in the _Operation Space_. Benefiting from the one-shot search strategy, supernet training and operation search are decoupled to avoid the exorbitant cost of retraining sub-networks during the search. To improve search efficiency and meet the hardware constraints of the target device, HGNAS evaluates candidate architectures with the proposed hardware performance predictor (see Sec. III-D) during the search. Only architectures that meet the hardware constraints are further evaluated for accuracy. Specifically, the objective function during the operation search is formulated as:
\[\mathcal{F}_{obj}(\mathcal{C})=\left\{\begin{array}{ll}0,&if\quad lat\geq\mathcal{C}\\ \alpha*acc_{val}-\beta*lat,&if\quad lat<\mathcal{C}\end{array}\right. \tag{3}\]

where \(lat\) is the predicted latency of the candidate architecture, \(\mathcal{C}\) is the hardware (latency) constraint, and \(\alpha\) and \(\beta\) are scaling factors balancing accuracy and latency.
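The sketch below illustrates how this constrained objective gates candidate evaluation during the EA search; the predictor and accuracy stubs are hypothetical placeholders standing in for the components described elsewhere in this section.

```python
import random

def predicted_latency(candidate):
    # Hypothetical stub for the GNN-based hardware performance predictor (Sec. III-D).
    return 10.0 + 0.5 * sum(candidate)

def supernet_accuracy(candidate):
    # Hypothetical stub for validation accuracy under supernet weight sharing.
    return 0.9 - 0.001 * sum(candidate)

def objective(candidate, constraint_ms, alpha=1.0, beta=0.01):
    # Eq. (3): infeasible candidates score 0, so only architectures meeting the
    # latency budget C are evaluated for accuracy.
    lat = predicted_latency(candidate)
    if lat >= constraint_ms:
        return 0.0
    return alpha * supernet_accuracy(candidate) - beta * lat

candidate = [random.randrange(4) for _ in range(12)]  # one operation index per position
print(objective(candidate, constraint_ms=20.0))
```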
### _GNN Hardware Performance Predictor_
To meet the efficiency requirements on targeted devices, we build a GNN hardware performance predictor that can efficiently learn the relationship between GNN architectures and hardware efficiency. Specifically, its execution process consists of the following phases: _graph construction_, _node feature generation_, and _inference latency prediction_, as shown in Fig. 5.
**Graph construction.** During this phase, HGNAS abstracts GNN architectures into directed graphs as the input of the GNN predictor. The nodes in these architecture graphs represent inputs, outputs, and operations, while the edges represent the dataflow within the GNN architecture. In practice, accurate prediction of hardware efficiency requires both candidate model architecture and the graph property of the input dataset, which GNN execution highly depends on. However, the plain abstraction of the original GNN architectures is too sparse for the predictor to obtain enough structural features, while lacking the necessary information on input data. Hence, HGNAS introduces a global node connected with all nodes in the graph to improve the graph connectivity. The input data information is also encoded into the global node for better prediction accuracy.
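As a rough illustration of this abstraction, the sketch below builds the adjacency matrix of a toy architecture graph and wires in the global node; the node list and edge set are invented for illustration only.

```python
import numpy as np

nodes = ["input", "sample", "aggregate", "combine", "output"]  # toy architecture
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]                       # dataflow edges
g = len(nodes)                                                 # index of the global node
# Connect the global node with all other nodes to improve graph connectivity.
edges += [(g, i) for i in range(len(nodes))] + [(i, g) for i in range(len(nodes))]

adj = np.zeros((g + 1, g + 1), dtype=np.int8)
for src, dst in edges:
    adj[src, dst] = 1
print(adj)
```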
**Node feature generation.** For an operation node, the node feature consists of the operation type and its corresponding function. Specifically, HGNAS encodes these two components into a 7-dimensional and a \(9\)-dimensional one-hot vector respectively, and concatenates the results to represent the node feature. For input and output nodes, HGNAS assigns them with a zero vector. For the global node, HGNAS encodes the input graph data properties (number of nodes, density, etc.) into a \(16\)-dimensional vector as the global node feature.
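A minimal encoding sketch is shown below; the operation and function vocabularies are illustrative stand-ins for Tab. I, while the dimensions (7 + 9 = 16, matching the global node feature width) follow the text.

```python
import numpy as np

OPS = ["connect", "skip", "aggregate", "combine", "sample", "proj_in", "proj_out"]    # assumed
FUNCS = ["relu", "tanh", "sigmoid", "elu", "leaky_relu", "max", "mean", "sum", "id"]  # assumed

def one_hot(index, size):
    v = np.zeros(size, dtype=np.float32)
    v[index] = 1.0
    return v

def op_node_feature(op, func):
    # 7-dim operation one-hot concatenated with 9-dim function one-hot.
    return np.concatenate([one_hot(OPS.index(op), 7), one_hot(FUNCS.index(func), 9)])

def global_node_feature(num_nodes, density):
    # Input-graph properties packed into a 16-dim vector (exact property list assumed).
    v = np.zeros(16, dtype=np.float32)
    v[0], v[1] = np.log1p(num_nodes), density
    return v

print(op_node_feature("aggregate", "max").shape, global_node_feature(1024, 0.02).shape)
```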
**Latency prediction.** To avoid the over-smoothing problem often induced by deeper GNNs on small-scale graphs (i.e., the abstracted architecture graphs), the predictor consists of only three GCN layers [1] and a multi-layer perceptron (MLP). The inputs of the predictor are information on the target device, the adjacency matrix, and the node features. Specifically, the GCN layers utilize the sum aggregator with hidden dimensions of \(256\times 512\times 512\). The three MLP layers, with hidden dimensions of \(256\times 128\times 1\), are followed by a LeakyReLU function to generate a scalar latency prediction. As the architecture graphs normally contain no more than a few dozen nodes, the overhead of latency prediction is mostly negligible. For instance, the GNN-based predictor can predict the latency of a candidate architecture for a target edge device within milliseconds on an Nvidia RTX3080.
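A possible realization of this predictor in PyG is sketched below; the graph-level readout (mean pooling) and the handling of the device information are our assumptions, while the layer sizes follow the text.

```python
import torch
from torch import nn
from torch_geometric.nn import GCNConv, global_mean_pool

class LatencyPredictor(nn.Module):
    # Three GCN layers (256-512-512) followed by a three-layer MLP (256-128-1);
    # device information could be appended to the pooled embedding (assumption).
    def __init__(self, in_dim: int = 16):
        super().__init__()
        self.convs = nn.ModuleList([GCNConv(in_dim, 256), GCNConv(256, 512), GCNConv(512, 512)])
        self.mlp = nn.Sequential(nn.Linear(512, 256), nn.LeakyReLU(),
                                 nn.Linear(256, 128), nn.LeakyReLU(),
                                 nn.Linear(128, 1))

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = torch.relu(conv(x, edge_index))
        return self.mlp(global_mean_pool(x, batch)).squeeze(-1)  # scalar latency per graph

model = LatencyPredictor()
x = torch.randn(6, 16)                                  # 5 operation nodes + 1 global node
edge_index = torch.tensor([[0, 1, 2, 3, 5], [1, 2, 3, 4, 0]])
print(model(x, edge_index, torch.zeros(6, dtype=torch.long)))
```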
## IV Experiments
### _Experimental Setup_
**Baselines and datasets.** For evaluating HGNAS, we consider three baselines: the popular point cloud processing model DGCNN [5], and two manually optimized variants of the DGCNN architecture [6, 7]. Our experiments are conducted on the public benchmark ModelNet40 [13] for the classification task with \(1024\) points, following the default hyperparameter settings in [7]. All experimental and profiling results are obtained with the PyTorch Geometric (PyG) framework [23], averaged over \(10\) runs.
**HGNAS settings.** We assign \(12\) positions for the GNN supernet to cover DGCNN architectures. During the design space exploration, the
Fig. 5: Latency prediction of a candidate model for the target device.
maximum number of iterations is set to \(1000\), while the population size of the \(EA\) is set to \(20\). Searching and training of HGNAS are both conducted on an Nvidia V100 GPU. During the function search and operation search, the number of GNN supernet training epochs is set to \(50\) and \(500\), respectively.
**Predictor settings.** The predictor is trained for \(250\) epochs on 30K randomly sampled architectures (\(21\)K for training and \(9\)K for validation) from our fine-grained design space, with labels obtained from measurements on various edge devices. Mean absolute percentage error (MAPE) is used as the loss function during training.
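For reference, MAPE can be implemented in a few lines; the epsilon guard against zero targets is our addition.

```python
import torch

def mape_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Mean absolute percentage error over a batch of latency predictions.
    return ((pred - target).abs() / (target.abs() + eps)).mean()

print(mape_loss(torch.tensor([9.0, 11.0]), torch.tensor([10.0, 10.0])))  # tensor(0.1000)
```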
**Edge devices.** We employ four edge devices for comparing HGNAS and its competitors: (1) Nvidia RTX3080, (2) Intel i7-8700K, (3) Jetson TX2 with \(8\)GB memory, and (4) Raspberry Pi 3B+ with a Cortex-A53 processor and \(1\)GB memory. The latency and peak memory usage are obtained by deploying the GNN models on the above devices for inference using the PyG framework.
### _Exploration by HGNAS_
The models designed by HGNAS are comprehensively compared with baselines in terms of model size, accuracy, latency, and peak memory.
#### IV-B1 Accuracy vs. Latency
Fig. 6 reports the exploration results of HGNAS. The ideal solution is located in the top-left corner, indicated by a star. The green points named Device_Acc (e.g., RTX_Acc) are the optimal architectures designed for the target device by HGNAS with no loss of accuracy, while the red points named Device_Fast allow a \(1\%\) accuracy loss. The results show that HGNAS consistently maintains a better performance frontier (higher accuracy and lower latency) on various devices, which is guaranteed by the accurate hardware performance prediction of candidate architectures during the search. By setting the scaling factors, HGNAS can easily trade off hardware efficiency against task accuracy, as shown in Fig. 7. Specifically, when \(\alpha/\beta\) is smaller, the search results favor lower latency over higher accuracy; conversely, when \(\alpha/\beta\) is larger, the search results emphasize accuracy more.
#### IV-B2 HGNAS over Existing Graph Neural Architectures
As shown in Tab. II, compared with the baselines, GNN models designed by HGNAS achieve better hardware efficiency across all edge computing platforms with similar accuracy. These remarkable results are due to the hardware awareness incorporated during exploration. Compared to DGCNN, HGNAS achieves up to \(10.6\times\), \(10.2\times\), \(7.5\times\) and \(7.4\times\) speedup, while reducing peak memory usage by \(88.1\%\), \(31.6\%\), \(88.2\%\), and \(43.7\%\) across the four devices. Moreover, HGNAS on the resource-constrained Jetson TX2 attains the same hardware efficiency as DGCNN on the high-performance Nvidia RTX3080 platform, while reducing peak memory usage by \(86.8\%\). The above results clearly demonstrate that the flexibility offered by the fine-grained design space in HGNAS enables the pursuit of exceptional GNN computation efficiency on the edge. For a fairer comparison with the manual optimizations in [6, 7], we adopt their reported accuracy, inference speedup, and memory reduction as the baseline on the GPU platform. For other edge platforms, we reproduce these baselines based on PyG, due to the lack of pre-trained models and evaluation results. The results show that GNN models designed by HGNAS outperform the manual optimizations across all platforms, benefiting from the accurate prediction of hardware performance during model exploration.
### _Evaluation on GNN Predictor_
As shown in Fig. 8, our proposed hardware performance predictor achieves high prediction accuracy across various platforms. Specifically, the MAPE of the prediction results on RTX3080, Intel i7-8700K, and Jetson TX2 is about \(6\%\), while it is around \(19\%\) on Raspberry Pi due to fluctuations in the latency measurements. The accuracy of the GNN predictor exceeds 80% across devices within a 10% error bound. In practice, the GNN predictor performs better for models with faster inference speeds, which assists HGNAS in more efficient design exploration.
Fig. 6: Comparison between existing models and HGNAS across various devices.
Fig. 7: The trade-off between accuracy and speedup (compare to DGCNN) by scaling factor \(\alpha\) and \(\beta\).
### _Ablation Studies_
**Predictor vs. real-time measurement.** Fig. 9(a) shows the HGNAS search process leveraging the GNN predictor or real-time measurement on the Intel CPU and Nvidia GPU platforms. The results demonstrate that the GNN predictor effectively improves search efficiency, as the models found by both methods obtain similar performance. In particular, our predictor plays a crucial role when real-time measurement is impractical (e.g., on Jetson TX2 and Raspberry Pi).
**Multi-stage vs. one-stage search strategy.** As shown in Fig. 9(b), exploration with the traditional one-stage search strategy often gets trapped in the huge fine-grained design space. In contrast, the multi-stage hierarchical search strategy greatly accelerates the exploration process, and is capable of finding an optimal GNN architecture within a few GPU hours.
### _Insight from GNNs Designed by HGNAS_
Fig. 10 provides the visualization of GNNs designed by HGNAS. Note that the adjacent KNN operations will be merged during execution due to duplicate graph construction. The results clearly show that the hardware-efficient architectures designed by HGNAS are closely associated with the characteristics of the target device, which are consistent with the characterization of GNN models in **Observation 3**. For example, as KNN occupies the majority of execution time on RTX3080 and Jetson TX2, GNN models designed for these devices would comprise fewer valid KNN operations. Moreover, the optimal model for Intel CPU has fewer aggregate operations, and models designed for Raspberry Pi tend to simplify each operation.
## V Conclusions
In this paper, we propose HGNAS, the first hardware-aware framework to explore efficient graph neural architecture for edge devices. HGNAS can automatically search for optimal GNN architectures that maximize both task accuracy and computation efficiency. HGNAS leverages the multi-stage hierarchical search strategy and GNN hardware performance predictor to efficiently explore the fine-grained GNN design space. Extensive experiments show that GNN models generated by HGNAS consistently outperform SOTA GNNs, achieving about \(10.6\times\) speedup and \(88.2\%\) peak memory reduction across various edge platforms. We believe that HGNAS has made pivotal progress in bringing GNNs to real-life edge applications.
|
2308.14949 | Low-bit Quantization for Deep Graph Neural Networks with
Smoothness-aware Message Propagation | Graph Neural Network (GNN) training and inference involve significant
challenges of scalability with respect to both model sizes and number of
layers, resulting in degradation of efficiency and accuracy for large and deep
GNNs. We present an end-to-end solution that aims to address these challenges
for efficient GNNs in resource constrained environments while avoiding the
oversmoothing problem in deep GNNs. We introduce a quantization based approach
for all stages of GNNs, from message passing in training to node
classification, compressing the model and enabling efficient processing. The
proposed GNN quantizer learns quantization ranges and reduces the model size
with comparable accuracy even under low-bit quantization. To scale with the
number of layers, we devise a message propagation mechanism in training that
controls layer-wise changes of similarities between neighboring nodes. This
objective is incorporated into a Lagrangian function with constraints and a
differential multiplier method is utilized to iteratively find optimal
embeddings. This mitigates oversmoothing and suppresses the quantization error
to a bound. Significant improvements are demonstrated over state-of-the-art
quantization methods and deep GNN approaches in both full-precision and
quantized models. The proposed quantizer demonstrates superior performance in
INT2 configurations across all stages of GNN, achieving a notable level of
accuracy. In contrast, existing quantization approaches fail to generate
satisfactory accuracy levels. Finally, the inference with INT2 and INT4
representations exhibits a speedup of 5.11 $\times$ and 4.70 $\times$ compared
to full precision counterparts, respectively. | Shuang Wang, Bahaeddin Eravci, Rustam Guliyev, Hakan Ferhatosmanoglu | 2023-08-29T00:25:02Z | http://arxiv.org/abs/2308.14949v1 | # Low-bit Quantization for Deep Graph Neural Networks with Smoothness-aware Message Propagation
###### Abstract.
Graph Neural Network (GNN) training and inference involve significant challenges of scalability with respect to both model sizes and number of layers, resulting in degradation of efficiency and accuracy for large and deep GNNs. We present an end-to-end solution that aims to address these challenges for efficient GNNs in resource constrained environments while avoiding the oversmoothing problem in deep GNNs. We introduce a quantization based approach for all stages of GNNs, from message passing in training to node classification, compressing the model and enabling efficient processing. The proposed GNN quantizer learns quantization ranges and reduces the model size with comparable accuracy even under low-bit quantization. To scale with the number of layers, we devise a message propagation mechanism in training that controls layer-wise changes of similarities between neighboring nodes. This objective is incorporated into a Lagrangian function with constraints and a differential multiplier method is utilized to iteratively find optimal embeddings. This mitigates oversmoothing and suppresses the quantization error to a bound. Significant improvements are demonstrated over state-of-the-art quantization methods and deep GNN approaches in both full-precision and quantized models. The proposed quantizer demonstrates superior performance in INT2 configurations across all stages of GNN, achieving a notable level of accuracy. In contrast, existing quantization approaches fail to generate satisfactory accuracy levels. Finally, the inference with INT2 and INT4 representations exhibits a speedup of \(5.11\times\) and \(4.70\times\) compared to full precision counterparts, respectively.
graph neural networks; quantization; oversmoothing in GNNs; large-scale graph management; scalable machine learning

## 1. Introduction
recommender systems which commonly employ graph neural networks.
Unlike conventional applications of quantization, GNNs present unique challenges due to their intrinsic characteristics, which are not effectively addressed by current methods. (i) The process of neighborhood aggregation in GNNs can lead to significant variance in the embeddings of high in-degree nodes, thereby exacerbating the quantization error, especially in low-bit cases (Shi et al., 2017). (ii) As GNNs deepen, they tend to experience the "oversmoothing" issue, where each embedding loses its discriminative information due to repeated, unregulated message passing (Kirshick et al., 2017). It is important to understand whether this problem remains or is aggravated with the introduction of model quantization. Thus, while reducing GNN size and enabling compressed processing are pivotal for performance efficiency, addressing oversmoothing is crucial to ensure accuracy, especially in deeper models.
While recent studies (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) have delved into GNN quantization, the problem is far from being solved and there is no effective solution for low-bit quantization that scales for deeper GNNs. Our paper underscores this challenge, revealing that state-of-the-art GNN quantization methods undergo significant degradation at low bit counts (INT4 and INT2). This is more pronounced in deeper GNNs, due to accumulated layer-by-layer quantization errors. We aim to address these intricacies and develop an end-to-end solution.
Our solution involves a quantizer that learns the quantization ranges (QLR) along with a skewness-aware bitwise truncation (BT*) mechanism. Additionally, we introduce a smoothness-aware message propagation scheme (SMP) to counter the oversmoothing issue in quantized models. This quantizer determines an optimized, data-aware learnable range grounded in the input data distribution, thereby minimizing model redundancy. It is shown to retain its effectiveness with low-bit representations, which makes it apt for large deep GNNs. The skewness-aware truncation embedded within the quantizer improves the accuracy particularly in low-bit (INT2) scenarios. Our message propagation scheme aims to mitigate oversmoothing in deep GNNs by constraining the layer-wise shifts in similarities among neighboring nodes. Furthermore, we prove that by using SMP, the quantization error can be suppressed to a bound. Finally, we demonstrate the efficiency and accuracy of our solution through node classification accuracy on quantized GNN models.
Experimental results demonstrate improvements over the state-of-the-art approaches across various performance measures and workloads. Specifically, our quantizer (QLR) demonstrates remarkable advancements in low-bit quantization, outperforming existing quantization methods while resulting in reduced model sizes. For deeper GNNs, our SMP method delivers more accurate classification compared to other deep GNN approaches both in full-precision and quantized versions. The low-bit quantized SMP, using QLR, achieves greater improvement over alternative deep quantized GNN approaches with the help of the quantization error bound with SMP. BT* improves node classification accuracy on large datasets with INT2 representation, making it comparable to INT8 accuracy. We also show that the INT2 quantization model can yield an inference speedup of 5.11 \(\times\) compared to the full-precision model.
## 2. Related Work
Quantization has been commonly employed for neural network (NN) models (Wang et al., 2019). NN training is bottlenecked by high memory requirements to handle large data involving intermediate results and feature maps (Beng et al., 2019). NNs can be trained with low precision using dynamic fixed point arithmetic (Chen et al., 2019).
Quantization for neural networks can be performed during or after training. Post-training approaches quantize the weights or activations of neural networks on a pre-trained full-precision model (Chen et al., 2019; Wang et al., 2019; Wang et al., 2019), but their low-bit quantization incurs significant accuracy degradation. Quantization-aware training aims to avoid this performance degradation (Chen et al., 2019; Wang et al., 2019). A useful technique is to expose errors from the quantization operation to the forward pass during model training and use a straight-through estimator (STE) to compute the gradients (Chen et al., 2019). Banner et al. (Banner et al., 2019) provide a theoretical analysis showing considerable room for quantization under a Gaussian weight assumption, leading to 8-bit DNNs with comparable accuracy. The success of quantization has led to binary NNs (BNN), drastically reducing computation and memory requirements using hardware-supported bitwise operations with strong precision performance (Wang et al., 2019). The efficacy of high-order bit representations, involving bitwise truncation applied to 32-bit word embeddings, has been demonstrated in previous studies (Chen et al., 2019).
Pioneering work on learning node representations (Wang et al., 2019) has been followed by variants of architectures, utilizing convolutions (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) and autoencoder structures (Chen et al., 2019; Wang et al., 2019). GNN based representations have been used for various analytics tasks, including node similarity search (Chen et al., 2019), link prediction (Wang et al., 2019), and entity disambiguation (Wang et al., 2019). Solutions for scaling GNNs mostly focus on distributed processing (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). Scalability challenges on large graphs have also been studied in the context of memory optimizations (Wang et al., 2019) and scalable processing (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).
GNN quantization has started to receive attention in recent years. Tailor et al. (Tailor et al., 2019) propose quantization-aware training for GNNs, where high in-degree nodes are selected for full-precision training while all other nodes are converted to INT8/INT4. This can achieve reasonable accuracy, especially for INT8 models. Huang et al. (Huang et al., 2019) employ product quantization to compress input data but do not address the more challenging task of quantizing the parameters. A recent GNN quantization approach (Wang et al., 2019) addresses low-bit representation of the weights and input features by learning parameters whose sizes equal the weight dimension and the number of input nodes, respectively, while leaving the core message propagation unquantized. However, this approach necessitates learning parameters that scale proportionally with the number of input nodes, resulting in considerable storage and space overheads. Neural Architecture Search (NAS) has been used to span possible quantization levels, suggesting INT4 weights and INT8 activations as an effective strategy for GNNs (Wang et al., 2019). Recent studies adapt binary NN methods for GNNs (Chen et al., 2019; Wang et al., 2019), offering a trade-off between time/space efficiency and classification accuracy. These methods typically either need an additional teacher model for knowledge distillation or learn binary weights for each layer's input message, which requires higher storage and computational load than a typical quantization-based approach.
Towards addressing oversmoothing in deep GNNs, Liu et al. (Liu et al., 2018) propose the Elastic Graph Neural Network with long-range information propagation using \(\ell_{1}\)- and \(\ell_{2}\)-based graph smoothing. APPNP (Liu et al., 2018) addresses oversmoothing with a propagation scheme based on an approximation of personalized PageRank. Zhu et al. (Zhu et al., 2019) proposed low-pass and high-pass filtering kernels which empirically reduce the effect of oversmoothing. DropEdge (Zhu et al., 2019) addresses oversmoothing by dropping a number of edges, which can be interpreted both as a data augmentation method generating randomly deformed graphs and as a message-passing reducer that sparsifies edge connections. PairNorm (Zhu et al., 2019) quantifies oversmoothing and proposes a two-step center-and-scale normalization layer to prevent nodes from converging to similar representations. Compared to enforcing local smoothness, our method constrains the layer-wise message propagation to counteract oversmoothing, achieving performance improvements over the prior approaches, as also demonstrated in our experiments.
## 3. Preliminaries and Analysis
We first provide the technical background, covering quantization for GNNs, analysis of quantization errors, and the oversmoothing problem in GNNs.
### GNN Basics
A graph \(\mathcal{G}\) is represented as \(\mathcal{G}=(V,E,\mathbf{X})\), where \(V=\{v_{1},\cdots,v_{n}\}\) is the set of \(n\) nodes (\(|V|=n\)) and \(E\) is the set of all edges. \(\mathbf{H}^{l}=[\mathbf{h}^{l}_{1},\cdots,\mathbf{h}^{l}_{n}]^{\top}\) is the node feature (embedding) matrix at layer \(l\leq L\), where \(L\) is the number of GNN layers, and \(\mathbf{h}_{i}\in\mathbb{R}^{d_{l}}\) is the feature vector of node \(v_{i}\in V\), with \(\mathbf{H}^{0}=\mathbf{X}\) initially. The adjacency matrix of \(\mathcal{G}\) is a binary matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\), where \(\mathbf{A}(i,j)=1\) if the edge between nodes \(v_{i}\) and \(v_{j}\) exists (\(e_{ij}\in E\)), and 0 otherwise.
GNNs comprise a sequence of layers with three main functions for each layer: message, aggregate and update. This framework is generally called Message Passing NNs (MPNN) (Kip
value in Equation 6 is represented as \(\hat{U}\), the mean squared error (MSE) between \(U\) and \(\hat{U}\) is given by
(7) \[\begin{split} E[(U-\hat{U})^{2}]=\int_{-\infty}^{\alpha}\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
### Skewness-aware Bitwise Truncation
A basic approach for INT2 can be simply keeping the most (two) significant bits of the higher precision output (e.g., INT8, INT4) as
\[Q_{b_{2}\gets b_{1}}(U_{q},s_{0})=\lfloor\frac{U_{q}}{s_{0}}\rceil s_{0} \tag{12}\]
where \(b_{1}\) and \(b_{2}\) (\(b_{1}\!\geq\!b_{2}\)) are the number of bits used for quantization, \(b_{2}\!\leftarrow\!b_{1}\) means the \(b_{1}\)-bit quantized representation being truncated into \(b_{2}\) bits; \(U_{q}\) denotes the \(b_{1}\)-bit representation obtained with Equation 8, and \(s_{0}\) is the scale for truncating the low-significant bit representation depending on \(b_{1}\) and \(b_{2}\). \(s_{0}\) can be obtained as
\[s_{0}=\frac{\alpha_{q_{1}}-\beta_{q_{1}}}{\alpha_{q_{2}}-\beta_{q_{2}}} \tag{13}\]
where [\(\alpha_{q_{1}}\), \(\beta_{q_{1}}\)] and [\(\alpha_{q_{2}}\), \(\beta_{q_{2}}\)] are the quantization levels for \(b_{1}\)-bit and \(b_{2}\)-bit quantization, respectively.
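A minimal numerical sketch of this truncation is given below; it assumes signed symmetric quantization levels, which may differ from the paper's exact level convention.

```python
import numpy as np

def truncate_bits(u_q: np.ndarray, b1: int = 8, b2: int = 2) -> np.ndarray:
    # Eqs. (12)-(13): re-quantize b1-bit integer codes into b2-bit precision by
    # rounding with scale s0. Signed symmetric levels are assumed for illustration.
    hi1, lo1 = 2 ** (b1 - 1) - 1, -(2 ** (b1 - 1))   # b1-bit level range
    hi2, lo2 = 2 ** (b2 - 1) - 1, -(2 ** (b2 - 1))   # b2-bit level range
    s0 = (hi1 - lo1) / (hi2 - lo2)                   # Eq. (13)
    return np.round(u_q / s0) * s0                   # Eq. (12)

codes = np.array([-120, -30, 0, 45, 127])            # example INT8 codes
print(truncate_bits(codes))                          # codes snapped to 4 coarse levels
```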
While such a formulation implicitly assumes the uniformity of \(U\), its distribution for GNNs can vary significantly depending on the graph topology. Measures such as kurtosis (\(\kappa\)) and skewness (\(sk\)) (Kal
**Definition 5.1**: (Layer-wise Smoothness) Given a graph \(\mathcal{G}\)=\((V,E,\mathbf{X})\), the \(l\)-th layer-wise smoothness is the change of the representations of connected nodes \((v_{i},v_{j})\in E\), with degree normalization, from layer \(l\)-1 to layer \(l\).
The layer-wise smoothness can be formulated as
\[\mathbf{S}_{l}=\sum_{(v_{i},v_{j})\in E}\|(\frac{\mathbf{h}_{i}^{l}}{\sqrt{d_{i}}}-\frac{\mathbf{h}_{j}^{l}}{\sqrt{d_{j}}})-(\frac{\mathbf{h}_{i}^{l-1}}{\sqrt{d_{i}}}-\frac{\mathbf{h}_{j}^{l-1}}{\sqrt{d_{j}}})\|_{2}^{2} \tag{15}\]
Specifically, \(\mathbf{S}_{l}\) can also be represented as
\[\begin{split}\mathbf{S}_{l}&=\sum_{(v_{i},v_{j})\in E}\|\frac{(\mathbf{h}_{i}^{l}-\mathbf{h}_{i}^{l-1})}{\sqrt{d_{i}}}-\frac{(\mathbf{h}_{j}^{l}-\mathbf{h}_{j}^{l-1})}{\sqrt{d_{j}}}\|_{2}^{2}\\ &=\mathbf{tr}((\mathbf{H}^{l}-\mathbf{H}^{l-1})^{\top}\tilde{\mathbf{L}}(\mathbf{H}^{l}-\mathbf{H}^{l-1}))\end{split} \tag{16}\]

where \(\tilde{\mathbf{L}}\) represents the normalized Laplacian matrix, \(\tilde{\mathbf{L}}=\mathbf{I}-\tilde{\mathbf{A}}\), \(\tilde{\mathbf{A}}=\hat{\mathbf{D}}^{-\frac{1}{2}}\hat{\mathbf{A}}\hat{\mathbf{D}}^{-\frac{1}{2}}\), \(\hat{\mathbf{A}}=\mathbf{I}+\mathbf{A}\), and \(\hat{\mathbf{D}}_{ii}=d_{i}=\sum_{j}\hat{\mathbf{A}}_{ij}\). The term \(\mathbf{tr}(\mathbf{H}^{\top}\tilde{\mathbf{L}}\mathbf{H})\) is the Laplacian regularization that makes \(\mathbf{H}\) smooth over graph \(\mathcal{G}\); similarly, the \(l\)-th layer-wise smoothness \(\mathbf{tr}((\mathbf{H}^{l}-\mathbf{H}^{l-1})^{\top}\tilde{\mathbf{L}}(\mathbf{H}^{l}-\mathbf{H}^{l-1}))\) can be interpreted as smoothing the changes from layer \(l\)-1 to \(l\) over \(\mathcal{G}\).
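The quantity in Eq. (16) can be computed directly; the dense-matrix sketch below is for clarity only and assumes small graphs.

```python
import numpy as np

def layerwise_smoothness(H_l, H_prev, A):
    # S_l = tr((H^l - H^{l-1})^T L~ (H^l - H^{l-1})) with the self-loop-normalized Laplacian.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A_hat @ D_inv_sqrt
    diff = H_l - H_prev
    return np.trace(diff.T @ L @ diff)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy 3-node path graph
H0 = np.random.randn(3, 4)
H1 = H0 + 0.1 * np.random.randn(3, 4)
print(layerwise_smoothness(H1, H0, A))
```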
**SMP Contribution to Quantization.** The quantization error for the \(l\)-th layer representation can be expressed as \(f_{e}^{l}=\|\mathbf{H}^{l}-\mathbf{H}^{l,q}\|_{2}^{2}\), where \(\mathbf{H}^{l,q}\) is the quantized representation of \(\mathbf{H}^{l}\). Accordingly, for the quantized SMP, the smoothness constraint can be written as \(S_{l,q}=\mathbf{tr}((\mathbf{H}^{l,q}-\mathbf{H}^{l-1,q})^{\top}\tilde{\mathbf{L}}(\mathbf{H}^{l,q}-\mathbf{H}^{l-1,q}))\). We can prove that \(f_{e}^{l}\) is smaller than a bound, as provided in Lemma 1, which underlines the superiority of SMP in terms of quantization.

Lemma 1.: _For the \(l\)-th layer representation \(\mathbf{H}^{l}\), the quantization error satisfies \(f_{e}^{l}\leq l\delta\,\mathbf{tr}(\mathbf{\Lambda}^{-1})+\|\mathbf{H}^{l}-\mathbf{X}^{q}\|_{2}^{2}\)._
Proof.: The Laplacian matrix \(\tilde{\mathbf{L}}\) is eigendecomposable, i.e., \(\tilde{\mathbf{L}}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\top}\), where \(\mathbf{U}\) is an orthogonal matrix (\(\mathbf{U}\mathbf{U}^{\top}=\mathbf{I}\)). \(S_{l,q}\) can thus be represented as \(S_{l,q}=\|\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{U}^{\top}(\mathbf{H}^{l,q}-\mathbf{H}^{l-1,q})\|_{2}^{2}\leq\delta\). The derivation proceeds as follows:

(1) \(\|\mathbf{H}^{l,q}-\mathbf{H}^{l-1,q}\|_{2}^{2}\leq\|\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{U}^{\top}(\mathbf{H}^{l,q}-\mathbf{H}^{l-1,q})\|_{2}^{2}\,\|\mathbf{\Lambda}^{-\frac{1}{2}}\|_{2}^{2}\leq\delta\,\mathbf{tr}(\mathbf{\Lambda}^{-1})\);

(2) \(f_{e}^{l}=\|\mathbf{H}^{l,q}-\mathbf{H}^{l}\|_{2}^{2}\leq\|\mathbf{H}^{l,q}-\mathbf{H}^{l-1,q}\|_{2}^{2}+\|\mathbf{H}^{l-1,q}-\mathbf{H}^{l}\|_{2}^{2}\leq\delta\,\mathbf{tr}(\mathbf{\Lambda}^{-1})+\|\mathbf{H}^{l-1,q}-\mathbf{H}^{l}\|_{2}^{2}\);

(3) applying the same bound recursively, \(\|\mathbf{H}^{l-i,q}-\mathbf{H}^{l}\|_{2}^{2}\leq\delta\,\mathbf{tr}(\mathbf{\Lambda}^{-1})+\|\mathbf{H}^{l-i-1,q}-\mathbf{H}^{l}\|_{2}^{2}\) for \(1\leq i\leq l\), which unrolls to \(f_{e}^{l}\leq l\delta\,\mathbf{tr}(\mathbf{\Lambda}^{-1})+\|\mathbf{H}^{l}-\mathbf{X}^{q}\|_{2}^{2}\), since \(\mathbf{H}^{0,q}=\mathbf{X}^{q}\).
## 6. Experiments
This section presents our experiments on benchmark datasets that illustrate the effectiveness of QLR and QLR with BT (BT\({}^{*}\)) quantizers under low-bit settings. We also compare our SMP with comparable deep GNN baselines, highlighting the capability of SMP in addressing the oversmoothing issue.
### Experimental Setup
**Datasets and Baselines.** Our experiments are performed on five datasets, Cora, PubMed, CiteSeer (Zhu et al., 2017), CS (Zhu et al., 2017) and Reddit (Zhu et al., 2017) in a semi-supervised node classification setting. The statistics of the datasets are summarized in Table 1.
We start by comparing QLR against two state-of-the-art GNN quantizers, Degree-Quant (Zhu et al., 2017) and Aggregate-Quant (Zhu et al., 2017), on GCN (Zhu et al., 2017). These experiments are complemented with their respective model sizes. To showcase the effectiveness of SMP for quantization and oversmoothing we compare with 4 comparable deep GNN methods, which are, APPNP (Zhu et al., 2017), DropEdge (Zhu et al., 2017), PairNorm (Zhu et al., 2017) and EMP (Zhu et al., 2017). They are evaluated on 10-layer GNNs with 64 hidden units. Subsequently, we apply BT (BT\({}^{*}\)) within SMP and EMP pipeline to verify its effectiveness on extreme low-bit representations and its scalability with respect to the numbers of layers. Finally, we present the runtime efficiency by measuring the throughput under the representations of various quantization levels.
**Parameter settings.** For APPNP, DropEdge, PairNorm and EMP, we used the optimal parameters provided within their public repositories. For SMP, we set the parameters from the following search space: (1) learning rate (lr) \(\in\) {0.005, 0.008, 0.01, 0.015}; (2) weight decay (wd) \(\in\) {5e\({}^{-4}\), 1e\({}^{-4}\), 5e\({}^{-5}\)}; (3) drop rate \(\in\) {0.8}; (4) \(\mu\in\) {3, 6, 9}; (5) the initial value of \(\lambda\in\) {1.0}; (6) \(\eta_{\lambda}\in\) {1e\({}^{-5}\), 1e\({}^{-6}\)}; (7) \(\delta_{0}\in\) {0.01, 0.1, 0.5, 1, 2}. Due to a significant difference in magnitude between the gradients of the scaling parameters (\(\gamma\)) and the other GNN parameters, we have established a distinct search space for the former: learning rate (lr\({}_{\gamma}\in\) {0.001, 0.002}) and weight decay (wd\({}_{\gamma}\in\) {1e\({}^{-4}\), 5e\({}^{-5}\)}).
We present the average accuracy and standard deviation over 10 random data splits for Cora and CiteSeer, and 5 for PubMed, CS and Reddit. For the Reddit dataset, owing to its size, we employ mini-batch training with a batch size of 20,000. All of the experiments are based on PyTorch (Paszke et al., 2017) and PyTorch Geometric (Fey and Lenssen, 2019), and are run on Ubuntu 20.04 with 64GB RAM.
### Comparison with different quantizers
We compare QLR with the state-of-the-art GNN quantization solutions, Degree-Quant and Aggregate-Quant. Results are summarized in Table 2.
We notice that Aggregate-Quant by default maintains a fixed quantization level of INT4 for weights, while using fewer bits for input features. Moreover, it does not quantize the message-passing blocks of GCN, whereas QLR and Degree-Quant quantize all the elements equally. Hence, for fairness, we also add quantizers to its message-passing blocks and remove the INT4 constraint on its model weights. It is also important to note that Aggregate-Quant maintains a step size parameter for each node, which can be viewed as an extension of learned step size quantization (LSQ) (Kip et al., 2018); this makes it highly inflexible for inductive tasks. Because of this, it does not support mini-batch training, as the topology of the input graph changes with each mini-batch training iteration.
We observe that the performance of QLR significantly outperforms its competitors irrespective of the quantization level. The approach of optimizing the quantization range in the backward pass makes QLR more robust and effective, especially in low-bit cases. Aggregate-Quant demonstrates superior performance on CiteSeer when applied with INT8 quantization, which can be courtesy of its significantly larger parameter size. However, its accuracy in low-bit cases degrades significantly when the message-passing blocks are also quantized fairly. As for Degree-Quant, while it can achieve comparable performance on INT8, it cannot reach the expected performance with INT4 and INT2 quantization on Reddit. Due to its mask sampling strategy and low quantization level, one node
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & Cora & CiteSeer & PubMed & CS & Reddit \\ \hline Nodes & 2708 & 3327 & 19717 & 18333 & 232965 \\ Edges & 5278 & 4552 & 44324 & 81894 & 114848857 \\ Features & 1433 & 3703 & 500 & 6805 & 602 \\ Labels & 7 & 6 & 3 & 15 & 41 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of benchmark datasets
Figure 4. Avg layer-wise smoothness against different epochs for GNNs
sampled into different mini-batches will generate different representations across batch training iterations, which hinders accurate overall representations. However, QLR can directly optimize the learnable quantization range based on observations of subgraphs; hence, it can reduce the overall quantization error. These results further confirm that optimizing the quantization range in QLR enables better preservation of accuracy in low-bit representations.
It is noteworthy that QLR preserves its accuracy results even for INT2 quantization across all datasets, while the alternatives fail to get comparable accuracy. Additionally, QLR even outperforms the full precision (FP) model in INT8 quantization in many cases, showcasing its effectiveness as a noise filter for GNNs.
In Table 3, we report the model sizes of different quantization approaches with varying quantization levels and hidden units (\(d\)). Due to space limits, we only present the model sizes on the CS dataset. As there is native 8-bit support, under constant \(d\), the sizes of the INT8 models with QLR and Degree-Quant are consistently reduced to approximately one-fourth of their FP counterparts. For smaller bit widths, we pack INT2 and INT4 values similarly to the process described in (Wang et al., 2018). The size of QLR is slightly larger than that of Degree-Quant due to the storage of the \(s_{\gamma}\), \(z_{\gamma}\) and \(\gamma\) parameters. Given the superior accuracy of QLR in low-bit settings, this slight increase in model size is negligible in comparison. Overall, with QLR and Degree-Quant, the INT2 and INT4 model sizes are significantly smaller than their FP counterparts, with reductions of 16x and 8x, respectively. However, the size of Aggregate-Quant is 2\(-\)6 times that of its counterparts, largely due to the dimension and per-node nature of its parameters.
### Comparisons with existing deep GNNs
#### 6.3.1. Node classification with existing deep GNNs
We compare SMP with the existing deep GNNs in terms of both full-precision (FP) and quantized models using QLR. Table 4 presents the classification accuracy results using 10-layer GNN. Notably, SMP consistently outperforms the alternative methods on Cora, CiteSeer and CS with FP models, and slightly lower than that of APPNP on PubMed. SMP improves over EMP by enforcing smoothness at layer-wise message propagation during training and inference.
For quantized models, although INT8 achieves accuracy close to FP for all methods, the INT4 performance of DropEdge and PairNorm drops significantly, rendering them incomparable in some cases and causing OOM errors on larger datasets. This is primarily due to the complexity of the "backbone" models (Wang et al., 2018). For example, PairNorm, employing a GCN backbone, trains a weight matrix for each layer. Likewise, DropEdge utilizes more intricate backbones such as GCN, ResGCN (Wang et al., 2018) and IncepGCN (Wang et al., 2018), and introduces connection perturbations at every layer. Consequently, these factors contribute to higher computational and storage requirements, which further escalate with the depth of the model. On the contrary, EMP, APPNP, and SMP employ much simpler architectures that involve only two weight matrices prior to GNN propagation. As a result, these models have more moderate and scalable requirements, making them more amenable to deep GNN quantization.
the accuracy of INT2-8\({}^{*}\) is consistently around 5%-7% higher than that of INT2-8 on the CS dataset across varying numbers of layers. This shows the effectiveness of BT\({}^{*}\) in reaching relatively high accuracy with low-bit representations.
Figure 6 presents the full training process of SMP and EMP with variations of our quantization approach on the CS dataset. While SMP and EMP show unstable performance with the INT2 representation generated by the basic quantizer, applying skewness-aware BT (BT\({}^{*}\)) yields improvements that are close to those of INT8 quantization. Furthermore, SMP is more robust than EMP, owing to the quantization error bound of SMP.
### Inference Speedup
In Table 5, we report the inference times associated with varying quantization levels for the 2-layer GCN and SMP architectures. These model inferences are conducted on the Reddit dataset, and \(\uparrow\) signifies the inference speedup compared with the FP model. To realize quantized GNN speed improvements across different quantization levels, we leverage the recent Tensor Core-based approach, QGCT (Wang et al., 2017), applied to both GCN and SMP. We observe a notable speedup of 5.11 \(\times\) and 6.44 \(\times\) with SMP and GCN, respectively, for the low-bit representation (INT2), in comparison to the FP counterparts. The speedup for SMP is slightly lower than that of GCN, attributed to the additional computational overhead of SMP. Remarkably, with the same number of layers (\(L\)), SMP showcases superior accuracy relative to GCN, as exemplified in Table 2 and Figure 5 on the CS dataset with \(L\)=2. Specifically, the INT8 and INT4 accuracy of SMP outperforms GCN by approximately 2%, while SMP in INT2 mode demonstrates a performance advantage over GCN of up to 13.5%.
## 7. Conclusion
We have introduced an end-to-end solution towards scalable deep GNNs, comprising an efficient quantizer with learnable ranges and skewness-aware bitwise truncation, and a smoothness-aware message propagation (SMP) mechanism for efficiently training and managing large deep GNNs. The solution reduces the model size and maintains classification accuracy even with low-bit representations. The message passing block in training is enforced to have layer-wise smoothness, constraining the changes between neighboring nodes. We formulate this as an additional constraint on a graph denoising optimization function and solve it via a Lagrangian function with the iterative BDMM algorithm. It mitigates the oversmoothing problem in GNNs and avoids the performance degradation encountered in low-bit quantization-aware training. We also provide an upper bound on the error of the quantized SMP algorithm. Experiments show that the proposed solution achieves significant improvements over state-of-the-art approaches, providing model sizes an order of magnitude smaller than the full-precision (FP) model with comparable accuracy, and mitigating the oversmoothing problem on benchmark datasets.
###### Acknowledgements.
This work is supported in part by the UK Engineering and Physical Sciences Research Council under Grant No. EP/T51794X/1.
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c|}{SMP} & \multicolumn{3}{c|}{SMP-INT8} & \multicolumn{3}{c|}{SMP-INT8} & \multicolumn{3}{c|}{EMP-INT8} & \multicolumn{3}{c}{SMP-INT8} \\ \hline Dataset & FP & INTs & INT4 & INT2 & FP & INTs & INT4 & INT2 & FP & INTs & INT4 & INT2 & FP & INTs & INT4 & FP & INTs & INT4 \\ \hline Cora & 82.91 & 82.92 & 82.69 & 72.69 & 82.59 & 81.58 & 77.93 & 66.20 & 80.78 & 81.82 & 79.68 & 71.28 & 71.20 & 70.21 & – & 76.08 & 79.10 & 78.31 \\ \hline \(\pm\)0.64 & \(\pm\)0.51 & \(\pm\)0.68 & \(\pm\)2.48 & \(\pm\)0.67 & \(\pm\)0.79 & \(\pm\)2.62 & 66.75 & \(\pm\)1.42 & \(\pm\)1.44 & \(\pm\)1.81 & \(\pm\)2.34 & \(\pm\)2.14 & \(\pm\)1.50 & – & \(\pm\)2.86 & \(\pm\)1.00 & \(\pm\)1.01 \\ \hline Citeseer & 71.60 & 71.40 & 69.76 & 65.01 & 70.84 & 71.02 & 67.53 & 61.53 & 70.30 & 68.90 & 68.77 & 63.87 & 51.39 & – & – & 60.07 & 61.15 & 59.60 \\ & \(\pm\)1.26 & \(\pm\)1.58 & \(\pm\)1.63 & \(\pm\)1.26 & \(\pm\)1.20 & \(\pm\)1.49 & \(\pm\)2.68 & \(\pm\)3.65 & \(\pm\)1.53 & \(\pm\)2.19 & \(\pm\)2.01 & \(\pm\)2.24 & \(\pm\)4.16 & – & – & \(\pm\)5.76 & \(\pm\)2.88 & \(\pm\)3.73 \\ \hline PubMed & 79.36 & 79.90 & 87.44 & 75.97 & 78.64 & 78.68 & 76.18 & 74.22 & 79.59 & 79.31 & 78.49 & 73.56 & – & – & – & 75.69 & \(\pm\)2.88 & \(\pm\)3.73 \\ \hline \multirow{2}{*}{CS} & 79.49 & 79.24 & 79.24 & 83.74 & 92.17 & 91.16 & 79.18 & 82.02 & 91.58 & 92.37 & 91.94 & 81.25 & 75.12 & – & – & – & 81.81 & \multirow{2}{*}{OOM} & \multirow{2}{*}{OOM} & \multirow{2}{*}{OOM} & \multirow{2}{*}{OOM} \\ & \(\pm\)0.64 & \(\pm\)0.52 & \(\pm\)0.38 & \(\pm\)1.80 & \(\pm\)0.46 & \(\pm\)0.42 & \(\pm\)0.54 & \(\pm\)1.09 & \(\pm\)0.66 & \(\pm\)0.57 & \(\pm\)0.57 & \(\pm\)0.88 & \(\pm\)4.24 & OOM & OOM & \(\pm\)1.64 & OOM & OOM \\ \hline \multicolumn{10}{l}{- denotes accuracy \(\leq\) 40.00\%,} & \multicolumn{3}{c|}{OOM means ’out-of-memory’} \\ \end{tabular}
\end{table}
Table 4. Classification accuracy of Deep GNN methods (%) on benchmark datasets
Figure 5. Results of SMP and EMP with varying layers
Figure 6. Quantization performance of SMP and EMP on CS
\begin{table}
\begin{tabular}{c|c|c c|c c|c c} \hline
 & FP & INT8 & \(\uparrow\) & INT4 & \(\uparrow\) & INT2 & \(\uparrow\) \\ \hline
SMP & 178.3 & 46.46 & 3.84 \(\times\) & 37.96 & 4.70 \(\times\) & 34.89 & 5.11 \(\times\) \\ \hline
GCN & 156.97 & 34.96 & 4.49 \(\times\) & 27.29 & 5.75 \(\times\) & 24.37 & 6.44 \(\times\) \\ \hline
\end{tabular}
\end{table}
Table 5. Inference time (ms) of SMP and GCN at different quantization levels on the Reddit dataset; \(\uparrow\) denotes speedup over FP.
2302.00967 | Energy Efficiency of Training Neural Network Architectures: An Empirical
Study | The evaluation of Deep Learning models has traditionally focused on criteria
such as accuracy, F1 score, and related measures. The increasing availability
of high computational power environments allows the creation of deeper and more
complex models. However, the computations needed to train such models entail a
large carbon footprint. In this work, we study the relations between DL model
architectures and their environmental impact in terms of energy consumed and
CO$_2$ emissions produced during training by means of an empirical study using
Deep Convolutional Neural Networks. Concretely, we study: (i) the impact of the
architecture and the location where the computations are hosted on the energy
consumption and emissions produced; (ii) the trade-off between accuracy and
energy efficiency; and (iii) the difference on the method of measurement of the
energy consumed using software-based and hardware-based tools. | Yinlena Xu, Silverio Martínez-Fernández, Matias Martinez, Xavier Franch | 2023-02-02T09:20:54Z | http://arxiv.org/abs/2302.00967v1 | # Energy Efficiency of Training Neural Network Architectures: An Empirical Study
###### Abstract
The evaluation of Deep Learning (DL) models has traditionally focused on criteria such as accuracy, F1 score, and related measures. The increasing availability of high computational power environments allows the creation of deeper and more complex models. However, the computations needed to train such models entail a large carbon footprint. In this work, we study the relations between DL model architectures and their environmental impact in terms of energy consumed and CO\({}_{2}\) emissions produced during training by means of an empirical study using Deep Convolutional Neural Networks. Concretely, we study: (_i_) the impact of the architecture and the location where the computations are hosted on the energy consumption and emissions produced; (_ii_) the trade-off between accuracy and energy efficiency; and (_iii_) the difference on the method of measurement of the energy consumed using software-based and hardware-based tools.
**Keywords:**
Green AI, deep learning, neural networks, sustainable software engineering, energy metrics
## 1 Introduction
In recent years, Deep Learning (DL) models have shown great performance in many machine learning-based tasks. The DL-centric research paradigm and the ambition of creating the next state-of-the-art model lead to the exponential growth of model sizes and the use of larger datasets to train these models, therefore requiring intensive computation that entails a considerably large financial cost and carbon footprint [1]. If this trend continues, greater amounts of energy will be needed to build larger models to achieve ever-smaller improvements, making research progress directly depend on the uncontrolled exploitation of computing resources. In this context, energy consumption is becoming a necessary consideration when designing all types of software [2] and
specifically DL-based solutions. Fortunately, the awareness of aligning DL research with the emergent Green AI movement [3] is growing steadily.
In the DL realm, Convolutional Neural Networks (CNN) have become a well-known architectural approach widely used in areas such as image classification and natural language processing (NLP) [4][5]. Common CNN architectures are AlexNet [5], VGGNet [6], GoogleNet [7], and ResNet [8]. CNNs use linear algebra principles, specifically matrix multiplication, to identify patterns. Alternative activation functions, parameter optimization, and architectural innovations have been the basis of CNN advances. These networks are computationally demanding, requiring graphical processing units (GPUs) to train the models. The availability of large amounts of data and access to more powerful hardware have opened new possibilities for CNN research. Indeed, the evolution of these architectures has shown a trend towards increasingly complex models to solve increasingly complex tasks [9][7][10][11].
In this paper, we investigate the effects of different CNN architectures in the energy efficiency of the model training stage, and the possible relation of energy efficiency with the accuracy of the obtained model. To do so, we focus on one particular application domain, namely computer vision (CV), which has evolved significantly in the last years thanks to the widespread application of CNNs. CV applications are useful in many areas including medical imaging, agriculture monitoring, traffic control systems, sports tracking, and more. This includes a set of challenges such as image classification, object detection, image segmentation, image captioning among others. We center this work in the context of image classification as it is considered the basis for CV problems and CNNs have become the state-of-the-art technique. To perform this task, CNNs extract important features from the images at each convolution level and are completed with some fully connected output nodes for the classification.
This document is structured as follows. Section 2 gives the background and reports the related work on energy consumption of DL systems and Green AI. Section 3 defines the research goal and research questions of our study, as well as the experimental methodology. Section 4 presents the results and gives answers to our research questions. Section 5 reviews findings and discuss their implications and Section 6 summarizes the overall study and delineates future steps.
## 2 Background and Related Work
### Energy measurement
To measure the energy consumption of a computing device there are essentially two kinds of tools: hardware power monitors and energy profilers [12]. Hardware power monitors are directly connected to the power source of the component and can be used to measure the energy consumption of software. Despite being difficult to set up, power monitors are the most accurate strategy to measure energy, although they cannot discriminate what percentage of this consumption comes from a particular thread of execution. The other strategy is using energy profilers, software-based tools that capture energy data in conjunction with program execution. This allows energy profilers to compute the power consumed by the device, but these calculations rely on estimations. To what extent these estimations differ from the real consumption is worth investigating in order to support the internal validity of empirical studies on energy efficiency.
Recent work has analyzed the carbon footprint of training deep learning models and advocated for the evaluation of energy efficiency as an evaluation criterion for research [3]. The number of floating point operations (FLOPs) has been used in the past to quantify the energy footprint of a model [13][14][15], but it is not widely adopted in DL research, and little research has been done regarding the CO\({}_{2}\) emissions of highly expensive computation processes.
### Green AI
The present-day concern about the carbon footprint of increasingly large DL models has been growing. Schwartz et al. [3] advocate for redirecting DL research towards a more environmentally friendly direction known as Green AI. They estimated that the computational cost of AI research that aims to obtain state-of-the-art results has increased 300,000x from 2012 to 2018. This is due to the AI community's focus on metrics such as accuracy rather than energy efficiency. In their paper, they suggest reporting the number of FLOPs required to generate the results as a standard measure of efficiency.
Strubell et al. [1] estimated the carbon emission of training some of the recently successful neural network models for NLP, raising awareness and proposing actionable recommendations to reduce costs of NLP research. They conclude that these trends are not only found in the NLP community, but hold true across the AI community in general.
Recent work by Google and UC Berkeley [16] has estimated the carbon footprint and energy consumption of large neural network training. The paper proposes strategies to improve energy efficiency and reduce CO\({}_{2}\) emissions. They reported that by carefully choosing the processor, hardware and data centers, it is possible to reduce the carbon footprint of deep neural networks by up to 100-1000 times.
When it comes to DL frameworks, Georgiou et al. [17] reported clear differences in energy consumption and run-time performance between two of the most popular DL frameworks, PyTorch and TensorFlow. The study showed that DL frameworks exhibit significant model-sensitivity and that their current documentation has to be improved. Also, Creus et al. studied how to make greener DL-based mobile applications. Their studies showed that it is possible to build optimized DL-based applications by varying the number of parameters of CNNs [18, 19].
Regarding greener DL models in application domains, we can find advances for greener DL-based solutions as well. For instance, for weed detection, Ofori et al. combined the mobile-sized EfficientNet with transfer learning to achieve up to 95.44% classification accuracy on plant seedlings [20], and applied model compression, achieving a model 62.22% smaller in size than DenseNet (the smallest full-sized model) [21]. Moreover, for pig posture classification, Witte et al. reported the YOLOv5 model achieving an accuracy of 99.4% for pig detection, and EfficientNet achieving a precision of 93% for pig posture classification [22].
With respect to the aforementioned works, there is a clear need for further research to build greener DL-based solutions and models. Our work delves into CNN architectures and their energy efficiency.
## 3 Research methodology
### Goal, research questions and hypotheses
We formulate our research goal according to the Goal Question Metric (GQM) guidelines [23] as follows: Analyze _convolutional neural networks architectures_ with the purpose of _measuring their energy efficiency_ with respect to _the model training_ from the point of view of _the AI practitioner_ in the context of _creating an image classification model for computer vision_. This research goal is operationalised into three research questions (RQ):
**RQ1**: Does the CNN architecture have an impact on energy consumption? According to the background introduced in Section 2, we will answer this RQ using two different measures, which generate a null hypothesis each: H.1.1.0: There is no difference in energy consumed and emissions produced during training when varying the CNN model architecture. H.1.2.0: There is no difference in FLOPs required during training when varying the CNN model architecture.
**RQ2**: What is the relationship between CNN accuracy and the energy needed to train the model?
**RQ3**: What are the differences between software-based and hardware-based methods of measuring the energy efficiency of a model?
With RQ1 we aim to provide a comparative analysis of the measures specified for some of the best known CNN architectures for image classification, namely VGG16, VGG19 and ResNet50, to determine the correlation between architecture complexity and energy consumption. We carry out this analysis on two of the most popular image datasets for this task: MNIST and CIFAR-10.
With RQ2 we want to compare the trade-off between energy efficiency and the accuracy obtained from each model configuration. If little accuracy gains require much more computation, one can argue that this improvement is only needed when facing critical business cases (e.g., designing life-critical systems). To answer this RQ, we will be using a score ratio introduced by Alyamkin et al. [24], where they introduce this new metric to compare models: \(Score=Accuracy/Energy\). Energy in our case will refer to the energy consumption of the training in kWh.
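As a minimal sketch, this score can be computed directly from the two quantities; the values in the example below are illustrative and are not taken from our results:

```python
def score(accuracy: float, energy_kwh: float) -> float:
    """Efficiency score from Alyamkin et al. [24]: validation accuracy per kWh of training energy."""
    return accuracy / energy_kwh

print(score(0.90, 0.09))  # illustrative values -> 10.0
```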
With RQ3 we intend to study how to measure the energy efficiency of a model's training while exploring two different ways of measurement (see Section 2): the use of wattmeters (hardware-based measurement) and the use of profilers (software-based estimation). Understanding the internals of these measurement instruments will help researchers design robust study protocols.
### Study Design
We divide the study into a three-stage pipeline (see Fig. 1): (a) the _Data Management_ stage which includes the collection and preprocessing of the images, (b) the _Modeling and Development_ of the DL components, including the training of the DL model, and (c) the _Research Outputs_, which studies the outputs from the previous phase (e.g., power, energy consumed, accuracy from the models) to answer our RQs.
### Variables
In the following subsections we define the variables of our experimental design grouped into three categories.
#### 3.3.1 Independent variables.
In this study we define two independent variables: (_i_) the CNN architecture, and (_ii_) the measurement instrument.
As defined in Section 3.1, our objective is focused on the energy consumption of training a deep CNN model, and not on the model itself. Therefore we use transfer learning from the following CNN architectures: VGG16, VGG19, and ResNet50. We define the model architecture as a categorical variable that specifies which of the CNN is trained, and the model number of parameters is defined as a numerical variable indicating the complexity of the model.
VGG comes from the Visual Geometry Group from Oxford and it was used to win the ILSVR (ImageNet) competition in 2014 [6].
VGG16 is a 16-layer model, 13 of them convolutional and the other 3 fully connected. It has 138.4 million parameters. The same kernel size is used in every convolutional layer, namely a 3x3 kernel with stride = 1 and padding = 1. For max pooling, this changes to a 2x2 kernel with stride = 2.
VGG19 is a newer version, built upon the same concept as VGG16, but with 19 layers in total, of which 16 are convolutional, and 143.7 million parameters.
ResNet stands for Residual Network and was first introduced in 2015 [8]. The architecture of this NN relies on Residual Blocks, where a residual block is a combination of the original input and an output after convolution and activation function. There are different versions of ResNet having a different number of layers.
ResNet50 is the 50-layer model that has 48 convolution layers and 25.6 million parameters.
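The sketch below shows how such transfer-learning models could be instantiated with Keras; the frozen base, classification head, input shape (the 48x48x3 shape assumes MNIST is resized and replicated to three channels, as in Table 2) and optimizer settings are our assumptions, not necessarily the exact configuration used in the experiments.

```python
import tensorflow as tf

def build_model(arch: str, input_shape=(48, 48, 3), num_classes=10):
    # Hypothetical helper: instantiate one of the three architectures with
    # ImageNet weights and add a small trainable classification head.
    bases = {
        "VGG16": tf.keras.applications.VGG16,
        "VGG19": tf.keras.applications.VGG19,
        "ResNet50": tf.keras.applications.ResNet50,
    }
    base = bases[arch](weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the convolutional base; only the head is trained
    x = tf.keras.layers.Flatten()(base.output)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```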
The measurement instrument is a categorical variable that indicates which of the two options is used to obtain the measurements: an emissions profiler or a wattmeter (see Section 3.4 for more details).
Figure 1: Schema of the empirical study.
#### 3.3.2 Dependent variables.
To measure the environmental impact of a model's training process, we track the computer's power and energy consumption during the experiments. We use four numerical variables that measure _(i)_ the emissions in CO\({}_{2}\) equivalents (CO\({}_{2}\)-eq) in kg; _(ii)_ the energy consumed by the infrastructure in kWh; _(iii)_ the number of floating-point operations (FLOPs) needed to train the model; and _(iv)_ the validation accuracy of the model obtained.
#### 3.3.3 Other variables.
We use a categorical variable that indicates which dataset is utilized for the model training. The image classification input datasets used in this paper are the following:
The MNIST1 (Modified National Institute of Standards and Technology) handwritten digits [25]. It is a large dataset commonly used in ML for training systems. It consists of 70,000 28 x 28 black and white images with 10 classes: digits from 0 to 9. The images have been normalized and centered in a fixed size, and grayscale levels were introduced with anti-aliasing. There are 60,000 images for training and 10,000 for testing.
Footnote 1: [http://yann.lecun.com/exdb/mnist/](http://yann.lecun.com/exdb/mnist/)
The CIFAR-102 (Canadian Institute For Advanced Research) dataset. It is a classification dataset of small labeled images which consists of 60,000 32 x 32 colour images in 10 mutually exclusive classes, with 6,000 images per class. There are 50,000 training images and 10,000 test images, and the classes are: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. These images are challenging to classify due to the varying lighting conditions and angles.
Footnote 2: [https://www.cs.toronto.edu/~kriz/cifar.html](https://www.cs.toronto.edu/~kriz/cifar.html)
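Both datasets ship with Keras, so a minimal loading sketch only needs the built-in dataset API (the splits match the figures above):

```python
import tensorflow as tf

# MNIST: 60,000 training and 10,000 test images, 28x28 grayscale, 10 digit classes
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# CIFAR-10: 50,000 training and 10,000 test images, 32x32 colour, 10 object classes
(cx_train, cy_train), (cx_test, cy_test) = tf.keras.datasets.cifar10.load_data()
```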
Other categorical variables that should be considered independent, but over which we do not have full control due to the cloud provider plan, are:
The type of hardware. For RQ1 and RQ2 the computations were performed on 2 x Intel(r) Xeon(r) Processor 2.00 GHz CPUs and 1 x Tesla P100-PCIE-16GB GPU. For RQ3 the computations were performed on 80 x Intel(r) Xeon(r) E5-2698 v4 @ 2.20GHz CPUs and 8 x Tesla V100-SXM2-32GB GPUs.
The location where the cloud is hosted. Experiments were conducted using Kaggle3 kernels. For RQ1 and RQ2 there were three available locations: Taipei (Taiwan), Oregon (USA), and South Carolina (USA). For RQ3 the infrastructure was located in Ile-de-France (France).
Footnote 3: [https://www.kaggle.com/](https://www.kaggle.com/)
### Data collection
In this section, we respectively report the measures of our study (see Figure 1, (c)), and the instruments used to collect them.
#### 3.4.1 Measures.
To describe the amount of work that is required to train a model we compute the following measures:
**CO\({}_{2}\) emission** is the quantity that we want to minimize directly. These emissions can be calculated as the product between: (i) carbon intensity of the electricity consumed for computation, quantified as kg of CO\({}_{2}\) emitted per kWh of electricity, and (ii) the net power supply consumed by the computational infrastructure, quantified in kWh. Carbon intensity of electricity used is determined by a weighted average of emissions from various energy sources used to generate power, including fossil fuels and renewables. The combination of energy sources is based on the specific location where the computation is hosted.
**Energy consumed** is related to CO\({}_{2}\) emissions, while being independent of time and location. The power supply to the hardware is tracked at frequent time intervals, thus it is highly dependent on the type of hardware utilized.
We executed our experiment three times and we report the median value of energy consumed reported by both the wattmeter and the profiler.
**FLOPs** is the total number of floating-point operations required to execute a computational process. It estimates the amount of work needed for the process as a deterministic measure, computed by defining the cost of two base operations: addition and multiplication. FLOPs can be estimated given a model instance even before starting the training.
To compute the FLOPs required for the training of the model we use the keras-flops4 package for TensorFlow. All code has been developed in Python (version 3.7.12) and the Keras API of TensorFlow5 and all models were trained with a batch size of 32.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Class** & **Name** & **Description** & **Scale** & **Operationalization** \\ \hline Independent & Architecture Type & The deep CNN architecture & nominal & See section 3.3.1 \\ & Measuring instrument & Energy measuring method & nominal & See section 3.3.1 \\ \hline Dependent & Emissions & Carbon dioxide (CO\({}_{2}\)) emissions, expressed as kilograms of CO\({}_{2}\)-equivalents (CO\({}_{2}\)-eq) & numerical & Profiled \\ & Energy consumption & Net power supply consumed during the compute time, measured as kWh & numerical & See measuring method \\ & Floating-point operations & Number of floating point operations per second (FLOP) & numerical & Retrieved from modeling \\ & Accuracy & Validation accuracy obtained after training & numerical & Retrieved from modeling \\ \hline Others & Dataset & The input dataset used to train the models & nominal & See section 3.3.3 \\ & Hardware & GPU and CPU type & nominal & Profiled \\ & Location & Province/State/City where the compute infrastructure is hosted & nominal & Profiled \\ \hline \hline \end{tabular}
\end{table}
Table 1: Independent, dependent and other variables of the study.
#### 3.4.2 Instruments.
To conduct the collection of data and the aforementioned variables, we use two different instruments.
First, we used the CodeCarbon6 profiler: a Python package that enables us to track emissions in order to estimate the carbon footprint of an experiment. Internally, CodeCarbon uses RAPL for measuring the energy consumed by the CPU and RAM, and the NVIDIA Management Library (NVML) for the energy consumption of the GPU. CodeCarbon also reports the total energy consumed, which corresponds to the sum of the energy consumption from the CPU, GPU and RAM. The package logs the data of each experiment into an _emissions.csv_ file. The logged fields we are interested in are: duration of the compute (in seconds), emissions as CO\({}_{2}\)-equivalents (in kg), and energy consumed (in kWh).
Footnote 6: [https://github.com/mlco2/codecarbon](https://github.com/mlco2/codecarbon)
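A minimal usage sketch of the CodeCarbon tracker; the training call is a placeholder for the actual experiment:

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # writes duration, emissions and energy to emissions.csv
tracker.start()
model.fit(x_train, y_train, epochs=10, batch_size=32)  # placeholder training run
emissions_kg = tracker.stop()  # returns the estimated CO2-eq emissions in kg
```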
Second, to answer RQ3, we replicated the experiment on a machine connected to a wattmeter, therefore being able to compare the energy consumption obtained using both a wattmeter and a profiler. We used a wattmeter from the OmegaWatt vendor, which is able to collect up to 50 power measurements per second directly from the power supply units.
### Data analysis
In RQ1, we divided the analysis into two different parts, considering two variables: model architecture and input data. In each part we assessed the energy consumption on all the dependent variables (CO\({}_{2}\)-equivalent emissions, energy consumed, and FLOPs). Within each part, we followed an identical procedure: (1) use violin and box plots to illustrate the distributions for each response variable, comparing between datasets and CNN architectures; (2) assess the correlation coefficient between independent and dependent variables; (3) assess the statistical significance (i.e., \(p\)-value) of the findings.
We used a point-biserial correlation coefficient to assess the correlation between dependent variables and the input data. Point-biserial correlation is a correlation coefficient used when we have a dichotomous and a continuous variable. It ranges from \(-1\) to \(+1\), where \(-1\) indicates a perfect negative association, \(+1\) indicates a perfect positive association and 0 indicates no association.
To assess the dependent variables with respect to the type of architecture we used Kruskal-Wallis test. Kruskal-Wallis test by rank is a non-parametric alternative to one-way ANOVA test, which extends the two-samples Wilcoxon test in the situation where there are more than two groups. A significant Kruskal-Wallis test indicates that at least one sample stochastically dominates one other sample.
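Although our analysis was conducted in R, the same two tests are sketched below in Python with SciPy; the sample values are illustrative, not our measured data:

```python
from scipy import stats

# Point-biserial correlation: dichotomous dataset variable vs. a continuous response
dataset_is_mnist = [0, 0, 0, 1, 1, 1]                    # 0 = CIFAR10 (base), 1 = MNIST
energy_kwh = [0.058, 0.049, 0.106, 0.088, 0.093, 0.189]  # illustrative values
r, p = stats.pointbiserialr(dataset_is_mnist, energy_kwh)

# Kruskal-Wallis test: one sample of the response variable per architecture group
h, p_kw = stats.kruskal([0.058, 0.088], [0.049, 0.093], [0.106, 0.189])
```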
In RQ2, we assess the trade-off between accuracy and energy consumption with the \(Score=Accuracy/Energy\) (see Section 3.1 for more details). We can easily compare the scores between experiments by sorting them.
For RQ3, we compared the energy consumption as collected in two different ways: a wattmeter and CodeCarbon. The relationship between the two methods is assessed by computing the Spearman's rank correlation coefficient.
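For RQ3, the correlation between the two measurement methods can be computed directly from the per-configuration values later reported in Table 7, again sketched in Python:

```python
from scipy import stats

wattmeter_kwh = [2.25, 2.54, 3.03, 1.48, 1.73, 1.70]   # hardware-based readings (Table 7)
profiler_kwh = [1.21, 1.40, 1.72, 0.86, 0.98, 0.99]    # CodeCarbon totals (Table 7)
rho, p = stats.spearmanr(wattmeter_kwh, profiler_kwh)  # rho ~ 0.94
```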
## 4 Results
In this section we discuss the quantitative results in response to the RQs and hypotheses presented in Section 3.1. The entire analysis was conducted using the R language.
Table 2 contains the summary of the different experiment configurations and their characteristics.
### Does CNN architecture have an impact on energy consumption? (RQ1)
Fig. 2 shows the violin plots of the energy consumed (in kWh) and the emissions produced (in kg of CO\({}_{2}\)-eq) for the different experiments, grouped by input dataset (CIFAR10 and MNIST) and by CNN architecture (VGG16, VGG19 and ResNet50). The boxplots show that the median of both emissions and energy consumed using the CIFAR10 images is lower than using the MNIST images. Regarding the type of architecture, both VGG models report similar emissions and energy consumption, which are lower than those of ResNet50. Fig. 3 shows the violin plots for the number of FLOPs required to train the model. In this case we see that using the MNIST dataset and the VGG19 model requires more FLOPs. Furthermore, we followed the same procedure described in Section 3.5 for the location variable. Fig. 4 shows the violin and box plots grouped by location. Emissions from Taiwan (Taipei City) were higher than in the other two cities located in the United States (Oregon and South Carolina). Regarding the energy consumed, the three locations show no difference.
Table 3 shows the results of computing the point-biserial correlation coefficient between the input dataset and the response variables. In this case we take CIFAR10 as the base dataset, meaning that a negative value of the correlation coefficient indicates that the variables are inversely related. We see that both the energy consumed during training and the number of FLOPs are strongly correlated with the input data. Both coefficients are negative, meaning that training with the CIFAR10 dataset required less energy and fewer FLOPs than using the MNIST dataset.
Table 4 shows the statistical significance (Kruskal-Wallis test \(p\)-value) between the architecture of the model and the response variables. In summary, there is statistical evidence to accept that there is a correlation between the type of architecture and both the emissions produced in CO\({}_{2}\)-eq
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline
**Architecture** & **Data** & \begin{tabular}{l} **Image** \\ **Size** \\ \end{tabular} & \begin{tabular}{l} **Dataset** \\ **Size** \\ \end{tabular} & **Depth** & \begin{tabular}{l} **Total** \\ **Parameters** \\ \end{tabular} & \begin{tabular}{l} **Trainable** \\ **Parameters** \\ \end{tabular} & \begin{tabular}{l} **Total** \\ **FLOPs** \\ \end{tabular} &
\begin{tabular}{l} **Trainable Layer** \\ **FLOPs** \\ \end{tabular} \\ \hline VGG16 & CIFAR10 & 32x32 & 60k & 16 & 33,6M & 18,9M & 21.3 G & 1.21 G \\ & MNIST & 48x48 & 70k & 16 & 33,6M & 18,9M & 46.3 G & 1.21 G \\ \hline VGG19 & CIFAR10 & 32x32 & 60k & 19 & 38,9M & 18,9M & 26.7 G & 1.21 G \\ & MNIST & 48x48 & 70k & 19 & 38,9M & 18,9M & 58.6 G & 1.21 G \\ \hline ResNet50 & CIFAR10 & 32x32 & 60k & 50 & 48,8M & 25,2M & 6.68 G & 1.61 G \\ & MNIST & 48x48 & 70k & 50 & 73,9M & 50,3M & 16.3 G & 3.22 G \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experiment characteristics. Dataset size includes both train and test sets; the split proportion can be found in section 3.3.1. Depth refers to the topological depth of the network, which includes activation layers, batch normalization layers, etc.
and the energy consumed.
Finally, Table 5 shows the statistical significance (Kruskal-Wallis test \(p\)-value) between the location where the computation was hosted and the emissions and energy consumed. The \(p\)-values of the test statistic show that there is a relation between the location and the emissions produced to train the model, but not with the energy consumed.
### What is the relationship between model accuracy and the energy needed to train the model? (RQ2)
Table 6 presents the accuracy obtained from a model with a particular architecture trained on a dataset in a particular location, the energy consumed for training that model, and the Score, which corresponds to the ratio between the mentioned accuracy and energy.
We first analyze the score by location (as it allows us to compare the energy consumed by the same hardware on different models).
In Oregon and in South Carolina, we observe that a model with the VGG19 architecture trained on CIFAR10 produces the highest Score (12.64 in Oregon). VGG19 has a lower energy consumption at the expense of a lower accuracy compared to its smaller version, VGG16. The score metric allows us to quantify the trade-off between energy and accuracy.
In Taipei, the model with the VGG16 architecture has a higher accuracy than VGG19 with higher energy consumption (as happened in the previous two locations). The increase of energy consumption in Taipei compared to the other locations leads to a lower score
Figure 2: Violin-plots for the total emissions and energy consumed with the input dataset and CNN architecture.
for VGG19.
On the contrary, we observe a different trend when we analyze the models trained on MNIST at Oregon and South Carolina: VGG16 has the highest Score value and, at the same time, the highest accuracy and lowest energy.
The reason why the levels of energy consumption change for each location, given identical specifications and experiments, lies in how CodeCarbon estimates the net carbon intensity. For each location, the proportions of energy derived from fossil fuels and low-carbon sources are approximated using the international energy mixes derived from the United States' Energy Information Administration's Emissions & Generation Resource Integrated Database (eGRID). This approximation is done by examining the share of total primary energy produced and consumed for each country in the dataset and determining the proportion of energy derived from different types of energy sources (e.g., coal, petroleum, natural gas and renewables).
By choosing the architecture with the highest Score, we obtain either (i) an improvement in both accuracy and energy efficiency (e.g., models trained using the MNIST dataset), or (ii) an improvement in energy efficiency with a detriment to accuracy (small, such as in the case of CIFAR10 in South Carolina).
Figure 4: Violin-plots for the total emissions and energy consumed with the location where the compute infrastructure is located.
Figure 3: Violin-plots for the total FLOPs with the input dataset and CNN architecture.
### What are the differences between software-based and hardware-based methods of measuring the energy efficiency of a model? (RQ3)
Table 7 shows the median energy consumption obtained using a wattmeter and a profiler. All the values are expressed in kWh.
We observe that the energy consumption returned by the wattmeter is larger than the total energy consumed reported by the profiler in the same amount of time, by 42% to 46%. We provide two possible explanations for this difference. First of all, a profiler is not analogous to a power meter. The profiler we use, CodeCarbon, is based on RAPL, which uses a software power model that estimates energy usage by means of hardware performance counters and I/O models. On the contrary, the wattmeter does not estimate the consumption; it actually reports samples of power consumed by the devices connected to it, and from those samples we computed the energy consumed.
Secondly, the total energy computed from the wattmeter also includes the energy consumed by all components of the devices connected to the wattmeter (e.g., cache memories, hard disks). On the contrary, the profiler computes the total energy based on estimations from three components: CPU, GPU and RAM consumption. Nevertheless, we observe a correlation between the energy consumed reported by the wattmeter and by the profiler: even though the energy values are different, the correlation computed using Spearman gives a rho equal to 0.94, which indicates a strong correlation.
For example, the architecture ResNet50 on MNIST is, according to both the wattmeter
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Variable** & \(p\)**-value** & **Assessment** \\ \hline Emissions (CO\({}_{2}\)-eq) & \(<0.001\) & Significant \\ Energy Consumed (kWh) & \(<0.001\) & Significant \\ FLOPs & \(0.1561\) & Not significant \\ \hline \hline \end{tabular}
\end{table}
Table 4: Statistical significance assessment for Kruskal-Wallis test for correlation between the architectures and the emission rate, consumed energy and FLOPs. This tests responds to the hypotheses H.1.1.0 and H.1.2.0 from section 3.1.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Variable** & **Corr. Coef.** & **Assessment** \\ \hline Emissions (CO\({}_{2}\)-eq) & -0.0605 & Weak corr. \\ Energy Consumed (kWh) & -0.4135 & Strong corr. \\ FLOPs & -0.6233 & Strong corr. \\ \hline \hline \end{tabular}
\end{table}
Table 3: Assessment for point-biserial correlation coefficient between input datasets and the emission rate, consumed energy and FLOPs.
and the profiler, the configuration that consumes the most energy, while VGG16 on CIFAR10 is the configuration that consumes the least.
## 5 Discussion
Our results show that the selection of different CNN architectures for image classification, together with the dataset size, affects the energy consumption as well as the Score (accuracy/energy). This is mainly due to the duration of the training: the larger the number of parameters to train, the longer it will take and consequently the larger the energy consumption. Also, in terms of carbon footprint, the location of the computing infrastructure plays an important role because of the sources of power. These results indicate that these two factors are promising levers to achieve greener DL solutions. Specifically, our RQ2 shows the potential of optimized learning processes requiring less input (data-efficient DL) without degrading the quality of the output. With respect to related work, it becomes necessary to explore DL methods dealing with lower data volumes, such as transfer learning, and model compression for more computationally efficient models. Ofori et al. have several works showing that models with pre-trained weights outperform state-of-the-art CNNs [20, 21].
Furthermore, profilers are a good estimation of the real energy consumption. Energy profilers are based on software metrics and provide an estimation of the energy consumed during the training of a model.
Consequently, the energy values obtained with software-based tools are not as precise as those that can be computed using a hardware-based device such as a wattmeter. However, this study shows that there is a strong correlation between the energy reported by the wattmeter and the energy reported by a profiler. This means that profilers are a cheap (no additional hardware is required), easy (few lines of code are required) and reliable way to compare energy consumption, at the expense of some precision in the calculation.
In this study, we have compared the trade-off between the performance of a model in terms of accuracy and in terms of energy consumption. With this, we have seen that by choosing models that are more energy efficient we may be compromising the accuracy of the model, and that small accuracy gains require much more computation. Weighing one factor against the other when selecting models, one can argue that these improvements in the model's
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Variable** & \(p\)**-value** & **Assessment** \\ \hline Emissions (CO\({}_{2}\)-eq) & \(<0.001\) & Significant \\ Energy Consumed (kWh) & \(0.6509\) & Not significant \\ \hline \hline \end{tabular}
\end{table}
Table 5: Statistical significance assessment for Kruskal-Wallis test for correlation between the infrastructure locations and the emission rate and consumed energy.
performance are only needed when facing critical cases (e.g., medical imaging).
The outcomes of this study provide insight into the process of training an ML model as a 'one-time' operation. However, energy concerns grow when we start to include ML in development and operation chains, i.e., Machine Learning Operations (MLOps), where the training and deployment of ML models are automated procedures that are re-trained, updated and maintained in cycles.
### Limitations
We faced several threats to validity of our study, for which we took mitigation actions as described below.
**Number of executions**. A single execution in a given configuration may always suffer from some malfunctioning. Therefore, we executed our experiment three times for each configuration and took the mean and median.
**Location**. We could not select the locations beforehand; they were random as Kaggle selected internally the servers used. However, this randomness did not interfere with the execution of the study and the analysis of its results.
**Generalization**. Our results apply to the CV domain, and indeed to the two particular datasets used, and cannot be generalized beyond this point without further studies.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Location** & **Data** & **Architecture** & **Accuracy** & **Energy** & **Score** \\ \hline Oregon & CIFAR10 & VGG16 & 0.6189 & 0.0583 & 10.63 \\ & & VGG19 & 0.6018 & 0.0493 & **12.64** \\ & & ResNet50 & 0.3021 & 0.1057 & 4.11 \\ & MNIST & VGG16 & 0.9429 & 0.0879 & **11.02** \\ & & VGG19 & 0.9395 & 0.0932 & 10.44 \\ & & ResNet50 & 0.8858 & 0.1893 & 7.64 \\ \hline S.Carolina & CIFAR10 & VGG16 & 0.6167 & 0.0667 & 9.26 \\ & & VGG19 & 0.6157 & 0.0574 & 10.88 \\ & & ResNet50 & 0.1 & 0.1224 & 1.17 \\ & MNIST & VGG16 & 0.9459 & 0.0920 & 10.42 \\ & & VGG19 & 0.9384 & 0.1137 & 8.26 \\ & & ResNet50 & 0.8883 & 0.2171 & 6.36 \\ \hline Taipei & CIFAR10 & VGG16 & 0.6191 & 0.0567 & 10.99 \\ & & VGG19 & 0.6147 & 0.0637 & 9.80 \\ & & ResNet50 & 0.2169 & 0.1347 & 2.48 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Scores of the different experiment configurations. Accuracy: validation accuracy from last epoch of training. Energy: Kilowatt per hour. Score = Accuracy/Energy.
**Reliability**. We observed that CodeCarbon occasionally yielded negative values of energy consumption as output. We conjecture that this may be caused by a bug in the CodeCarbon tool. Nevertheless, to prevent such values from impacting our findings, we report the _median_ energy consumption (recall that we execute each experiment three times), which means that extreme values (such as the negative ones) are discarded. Moreover, we observe that the executions of the ResNet50 architecture trained with CIFAR10 at South Carolina returned an accuracy of 0.1, and that value did not change throughout the training process. That could be caused by a malfunction of the Keras platform at that time and location.
## 6 Conclusions and future work
In this paper, we have studied three different CNN architectures over two large image classification datasets in order to empirically evaluate the impact of the experimental design in the energy efficiency of the training process.
Each training session was evaluated with respect to three efficiency metrics: CO\({}_{2}\) emissions produced, total energy consumed and number of FLOPs needed. Overall, we gathered statistical evidence of relations between all of the aforementioned variables.
In detail, we have gained statistical evidence that the carbon emissions and the energy consumed by a computational process such as the training of a CNN are related to the experimental design regarding the neural network architecture. We have also seen that the impact of the computations can be affected by factors that can hardly be controlled by researchers when engaging in deep learning research, such as the location where the cloud is hosted.
It is important that the progress of DL research towards better performing models also consider reducing the computational cost of the training when designing the architecture.
Our future work spans several dimensions. We plan to replicate the study in other domains, e.g. natural language processing, where other DL architectures may
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{**Data**} & \multirow{2}{*}{**Archit.**} & \multirow{2}{*}{
\begin{tabular}{c} **Watt.** \\ **(kWH)** \\ \end{tabular} } & \multicolumn{4}{c}{**CodeCarbon (kWH)**} \\ \cline{3-6} & & & **CPU** & **GPU** & **RAM** & **TOTAL** \\ \hline MNIST & VGG16 & 2.25 & 0.04 & 0.77 & 0.41 & 1.21 \\ & VGG19 & 2.54 & 0.09 & 0.85 & 0.47 & 1.40 \\ & ResNet50 & 3.03 & 0.05 & 1.08 & 0.59 & 1.72 \\ CIFAR10 & VGG16 & 1.48 & 0.06 & 0.52 & 0.28 & 0.86 \\ & VGG19 & 1.73 & 0.05 & 0.61 & 0.32 & 0.98 \\ & ResNet50 & 1.70 & 0.01 & 0.64 & 0.35 & 0.99 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Energy consumption obtained using a wattmeter and a profiler, expressed in kWh. For the profiler, we present the energy consumption per component and the total reported by the profiler.
prevail. Also, we aim to further elaborate on the results of RQ2 by extending this notion of score for evaluating models, and to address in detail when to stop trading off the carbon footprint of the model for more accuracy; to this end, we want to continue studying the variables that are taken into account in the computation of the score. With this, we aim to provide guidance in the creation of future models to obtain better results with less energy consumption.
## Acknowledgments
This work has been supported by the Spanish project PID2020-117191RB-I00 funded by MCIN/AEI/10.13039/501100011033.
## Data availability
The replication package is available on: [https://zenodo.org/badge/latestdoi/503292169](https://zenodo.org/badge/latestdoi/503292169)
|
2310.12800 | Exploring Graph Neural Networks for Indian Legal Judgment Prediction | The burdensome impact of a skewed judges-to-cases ratio on the judicial
system manifests in an overwhelming backlog of pending cases alongside an
ongoing influx of new ones. To tackle this issue and expedite the judicial
process, the proposition of an automated system capable of suggesting case
outcomes based on factual evidence and precedent from past cases gains
significance. This research paper centres on developing a graph neural
network-based model to address the Legal Judgment Prediction (LJP) problem,
recognizing the intrinsic graph structure of judicial cases and making it a
binary node classification problem. We explored various embeddings as model
features, while nodes such as time nodes and judicial acts were added and
pruned to evaluate the model's performance. The study is done while considering
the ethical dimension of fairness in these predictions, considering gender and
name biases. A link prediction task is also conducted to assess the model's
proficiency in anticipating connections between two specified nodes. By
harnessing the capabilities of graph neural networks and incorporating fairness
analyses, this research aims to contribute insights towards streamlining the
adjudication process, enhancing judicial efficiency, and fostering a more
equitable legal landscape, ultimately alleviating the strain imposed by
mounting case backlogs. Our best-performing model with XLNet pre-trained
embeddings as its features gives the macro F1 score of 75% for the LJP task.
For link prediction, the same set of features is the best performing giving ROC
of more than 80% | Mann Khatri, Mirza Yusuf, Yaman Kumar, Rajiv Ratn Shah, Ponnurangam Kumaraguru | 2023-10-19T14:55:51Z | http://arxiv.org/abs/2310.12800v1 | # Exploring Graph Neural Networks for Indian Legal Judgment Prediction
###### Abstract
The burdensome impact of a skewed judges-to-cases ratio on the judicial system manifests in an overwhelming backlog of pending cases alongside an ongoing influx of new ones. To tackle this issue and expedite the judicial process, the proposition of an automated system capable of suggesting case outcomes based on factual evidence and precedent from past cases gains significance. This research paper centres on developing a graph neural network-based model to address the Legal Judgment Prediction (LJP) problem, recognizing the intrinsic graph structure of judicial cases and making it a binary node classification problem. We explored various embeddings as model features, while nodes such as time nodes and judicial acts were added and pruned to evaluate the model's performance. The study is done while considering the ethical dimension of fairness in these predictions, considering gender and name biases. A link prediction task is also conducted to assess the model's proficiency in anticipating connections between two specified nodes. By harnessing the capabilities of graph neural networks and incorporating fairness analyses, this research aims to contribute insights towards streamlining the adjudication process, enhancing judicial efficiency, and fostering a more equitable legal landscape, ultimately alleviating the strain imposed by mounting case backlogs. Our best-performing model with XLNet pretrained embeddings as its features gives the macro F1 score of 75% for the LJP task. For link prediction, the same set of features is the best performing giving ROC of more than 80%.
Keywords:Legal NLP Judgment Prediction Graph Neural Networks
## 1 Introduction
Many cases are pending in the Indian judiciary, and the courts face a meager judge-to-case ratio1, which requires a fair, reliable, and automated system to
predict the verdict of a case. One such task developed recently is Legal Judgement Prediction (LJP). It aims to predict and suggest the judgement decision of a court case based on its facts, drawing on how judgements have been passed in previous years. With the access to and development of large legal datasets, conducting studies on these tasks becomes imperative. In our work, we model the Indian judiciary using graphs due to its inherent inter-connected structure of cases and laws.
We also aim to analyse the effect of time and acts on the outcome of the cases. To ensure a verdict is fair, we check our model for bias to discover how fair the decisions are. Extending the task to link prediction, we check how well the model understands the relationship between two cases by predicting whether an edge exists between their corresponding graph nodes. In the paper, we answer the following four research questions:
**RQ1:** How does a real-world setting like graph neural networks perform on the task of LJP?
**RQ2:** How does the model behave when we prune or add time and act nodes?
**RQ3:** How does the model perform when trained temporally, i.e. trained till a particular period and how well can the model predict an edge between two given nodes?
**RQ4:** How fair are the decisions in the task of LJP?
Following is the summary of the research contributions done in this paper:
1. We employ a graph neural network (GNN) under different embedding settings by adding and removing two different node characteristics, i.e. time and acts.
2. A link prediction task to observe how well the model can predict an edge between two nodes representing a particular case citing another case.
3. A set of temporal experiments to see the effect of time on the training of the model.
4. For fairness, we check how biased the model is while making predictions.
## 2 Literature Review
There has been a great deal of research on the text in the legal domain and various tasks have been suggested, such as prior case retrieval [13], crime classification [23], and judgment prediction [28].
For the LJP challenge, various strategies and corpora have been introduced. In order to get around BERT's input token count restriction for the LJP problem, [2] presented a hierarchical variant of BERT [4]. Using datasets from the Chinese AI and Law Challenge (CAIL2018), [25] deployed a Multi-Perspective Bi-Feedback Network to forecast the corresponding legal accusations, offences, and periods of punishment. On three Chinese datasets (CJO, PKU, and CAIL), [27] used topological multi-task learning on a directed acyclic network to predict charges, including theft, traffic violation, and deliberate homicide.
To predict the charges on a dataset of the Criminal Law of the People's Republic of China, [18] suggested an attention-based model given the case's facts and the relevant articles. Similarly, in a few-shot configuration, [11] implemented an attribute-attentive model based on the case's facts. Using a legal reading comprehension technique on a Chinese dataset, [17] predicted the case's outcome. Given the facts and charges on a dataset created from documents of the Supreme People's Court of China, [3] used a deep gating network to predict prison terms. [1] employed a linear support vector machine (SVM) to predict violations based on the facts of cases from the European Court of Human Rights. [22] implemented SVM in the LJP task on cases from the French Supreme Court. [15] proposed a random forest model to forecast the judges' "Reverse", "Affirm", and "Other" judgments in the US Supreme Court.
In their research, [6] proposed a method for representing legal knowledge using logic rules in a co-attention network, which improves interpretability and logical reasoning. They demonstrate the effectiveness of their approach through comprehensive experiments conducted on a civil loan scenario. Similarly, [20] utilizes a real courtroom dataset to predict legal judgments. Using multi-task learning, they extensively analyze multi-role dialogues, including plaintiff's claims and court debate data, to understand facts and discriminate claims for final judgments. The works of [26] introduce NeurJudge, a framework for predicting legal judgments that consider crime circumstances. They leverage intermediate sub-task results to identify and utilize different circumstances for predicting other subtasks.
Another approach by [19] employs LSTM [9] to predict legal judgments by comprehensively understanding case inputs, court debates, and multi-role dialogues. They also utilize multi-task learning to discriminate claims and reach final judgments. [12] propose a unified text-to-text Transformer for LJP, where the auto-regressive decoder's dependencies among sub-tasks can be naturally established. They highlight the advantage of establishing dependencies among sub-tasks.
Furthermore, [24] uses a graph neural network to differentiate confusing charges. They leverage a novel attention mechanism to automatically learn subtle differences between law articles and extract effective discriminative features from fact descriptions. [5] employ a graph neural network (GNN) to address the LJP problem as a node classification task on a global consistency graph derived from the training set. They utilize a masked transformer network for case-aware node representations and leverage relational learning for local consistency through neighbours' label distribution. Variational expectation minimization optimizes both the node encoder and classifier.
[16] introduce MANN, a multichannel attentive neural network model for the integrated LJP task. MANN learns from previous judgment documents and utilizes attention-based neural networks to capture latent feature representations focused on case facts, defendant persona, and relevant law articles. A two-tier structure empowers attentive sequence encoders to hierarchically model semantic interactions at word and sentence levels in the case description.
In their work, [14] introduce the Hindi Legal Documents Corpus (HLDC) consisting of over 900K legal documents in Hindi. They also propose a Multi-Task Learning (MTL) based model incorporating summarization as an auxiliary task alongside the primary task of bail prediction. [21] present ILDC, a vast corpus containing 35k Indian Supreme Court cases annotated with original court decisions. They explore various baseline models for case predictions and propose a hierarchical occlusion-based model to enhance explainability. [7] suggest a moco-based supervised contrastive learning approach to acquire distinguishable representations and determine optimal positive example pairs for all three LJP subtasks. They also enhance fact description representation by incorporating pre-trained numeracy models to utilize crime amounts for predicting penalty terms.
## 3 Experiments
For our experiments, we use a graph neural network architecture (GraphSAGE [8]) for node classification, where nodes are cases, acts and time nodes. Text embeddings are used as node features, and time & act nodes are referred to as characteristic nodes, as they are hypothesised to have an impact on the graphical model's performance. Further, to analyse the impact of time on verdicts, we divide the dataset into train and test splits based on the year up to which we want to train the model. For our experiments, we use the ILDC dataset curated by [21], which provides preprocessed cases for the task with their corresponding binary labels. In each case brought before the Supreme Court of India (SCI), the judge or panel determines whether the assertions made by the appellant/petitioner against the respondent should be deemed as "accepted" or "rejected", and labels are assigned in the dataset accordingly. The dataset already splits examples into train, test and development sets (5082/1517/994). We use the train and development splits together as the train split and the test split as itself.
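A minimal sketch of the node classification model with PyTorch Geometric follows; the hidden size and dropout rate are assumptions, not our tuned hyperparameters:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv

class CaseGraphSAGE(torch.nn.Module):
    """Two-layer GraphSAGE for binary node (case) classification."""
    def __init__(self, in_dim: int, hidden_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hidden_dim)
        self.conv2 = SAGEConv(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, in_dim] text embeddings; edge_index: [2, num_edges] citation edges
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)
```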
As we employed a graph neural network for our experiments, we had to increase the size of the dataset because the dataset [21] does not provide the cases that are cited by its cases. Using the ikanoon API1, we extracted 24,907 additional cases and added them as nodes to the graph network to complete the citation network. This achieves a semi-supervised setting, enabling message passing between nodes and learning from other cases.
Footnote 1: [https://api.indiankanoon.org/](https://api.indiankanoon.org/)
### Different embeddings
* **Random** We initialize random embeddings as node features to train the model.
* **XLNet** We initialize XLNet embeddings as node features to train the model.
* **XLNet Pretrained** We take the previously trained XLNet model on the task of judgment prediction from [21] and extract embeddings for the nodes to train the model.
* **Hierarchical** For the train split in the dataset, we initialize embeddings from XLNet pre-trained on the judgment prediction, and for the test split, we initialize embeddings from XLNet (not pre-trained).
### Edge Types
Another set of experiments included the edge type between two nodes in the graph; a construction sketch follows the list below.
* **Directed:** Given a case, an edge is directed from it to all the cases it cites, enabling message-passing from that case node to the cited case node.
* **Rev-Directed:** Given a case, an edge is directed to it from all the cases it cites, enabling message-passing to the case node from the cited case node. It is the most practical use case as the message has to be passed between various nodes to make the network aware of the legal knowledge and decisions.
* **Undirected:** Given a case, an undirected edge is present between it and the cases it cites, enabling message-passing from that case node to the cited case node and vice versa.
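As a sketch, the three edge settings above could be constructed with PyTorch Geometric utilities; the toy edges are illustrative:

```python
import torch
from torch_geometric.utils import to_undirected

# Toy citation pairs: row 0 = citing case, row 1 = cited case
edge_index = torch.tensor([[0, 0, 1], [2, 3, 2]])

directed = edge_index                    # Directed: messages flow from citing to cited node
rev_directed = edge_index.flip(0)        # Rev-Directed: messages flow from cited to citing node
undirected = to_undirected(edge_index)   # Undirected: messages flow both ways
```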
### Time
The dataset contains cases from the year 1956 to the year 2021. To study the impact of time on these verdicts, we added new nodes, called time nodes, in the
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Year range** & \begin{tabular}{c} **Number of** \\ **Training** \\ **Examples** \\ \end{tabular} &
\begin{tabular}{c} **Cumulative** \\ **Frequency** \\ \end{tabular} \\ \hline
1956-1960 & 447 & 447 \\ \hline
1961-1965 & 955 & 1402 \\ \hline
1966-1970 & 714 & 2116 \\ \hline
1971-1975 & 728 & 2844 \\ \hline
1976-1980 & 622 & 3466 \\ \hline
1981-1985 & 515 & 3981 \\ \hline
1986-1990 & 547 & 4528 \\ \hline
1991-1995 & 366 & 4894 \\ \hline
1996-2000 & 24 & 4918 \\ \hline
2001-2005 & 10 & 4928 \\ \hline
2006-2010 & 5 & 4933 \\ \hline
2011-2015 & 1 & 4934 \\ \hline
2016-2020 & 1 & 4935 \\ \hline \end{tabular}
\end{table}
Table 1: Number of training examples distributed year-wise with their cumulative frequency.
graph with their connection to the corresponding case in that particular year. The node features of these time nodes are randomly initialised embeddings.
### Acts
Cases come with supporting arguments referencing acts from the Indian constitution to make an argument better. We experiment with the retention and removal of those act nodes in the graph model to observe how the model prediction changes based on underlying act nodes with their connection to the case.
### Simple Training
A graph network is trained on the dataset as specified by the train and test splits [21]. Experimentation involved training with and without characteristic nodes. We recorded the macro precision, recall and F1 scores.
### Temporal Training
To study the temporal aspect of how a verdict is made based on the cases and acts present in a particular split, and how the law has changed over time, the model is trained on cases for a particular number of years and tested on the rest.
#### 3.6.1 Training direction
* **Forward** We train the model up to a particular year and then predict future case verdicts. For example, we train the model till 2001 and then predict the verdicts from 2002 to 2021 to check how verdicts present till a particular year affect future cases. We have a range from 1956 to 2021. We predict cases in 5-year windows from the year of training onwards, i.e., if the model is trained till 1956, predictions are made on cases from 1957-1961, 1962-1966, 1967-1971 and so on till 2021.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Embeddings} & \multicolumn{3}{l|}{no change} & \multicolumn{3}{l|}{undirected} & \multicolumn{3}{l|}{rev edges} \\ \cline{2-13} & \multicolumn{1}{l|}{mn acts} & \multicolumn{1}{l|}{keep acts} & \multicolumn{1}{l|}{mn acts} & \multicolumn{1}{l|}{keep acts} & \multicolumn{1}{l|}{mn acts} & \multicolumn{1}{l|}{keep acts} & \multicolumn{1}{l|}{mn acts} & \multicolumn{1}{l|}{keep acts} \\ \cline{2-13} & yes time & no time & yes time & no time & yes time & no time & yes time & no time & yes time & no time & yes time & no time \\ \hline vanilla & 58.53 & 57.84 & 59.69 & 57.45 & 59.12 & 58.49 & 58.64 & 58.97 & 58.3 & 58.37 & 59.28 & 59.62 \\ \hline pretrained & **74.81** & **75.14** & **75.0** & **74.98** & **74.85** & **75.35** & **73.86** & **74.46** & **75.35** & **75.3** & **74.49** & **74.22** \\ \hline random & 53.80 & 52.75 & 54.63 & 53.21 & 54.58 & 52.83 & 55.72 & 53.52 & 52.10 & 53.24 & 53.88 & 53.05 \\ \hline hierar & 54.88 & 55.53 & 55.80 & 55.80 & 55.42 & 55.45 & 55.10 & 54.9 & 55.52 & 55.18 & 54.53 & 54.49 \\ \hline \end{tabular}
\end{table}
Table 2: Results from simple training. The table shows the different embeddings used with time and act settings. Pretrained embeddings were the best performing of all the embeddings used in the experiments. Acts and Time nodes were less significant in the model.
* **Reverse** We train the model from a particular year onwards and then predict past case verdicts. For example, we train the model from 2001 till 2021 and then predict the verdicts from 1956 to 2000 to check how new laws can comprehend past cases and make good predictions. We predict cases in 5-year windows backwards from the year of training, i.e., if the model is trained from 2001, predictions are made on the years 1996-2000, 1991-1995, 1986-1990 and so on till 1956.
Figure 1: Judgment Prediction Task: Results of temporal training on different embeddings. The thickest line represents an undirected edge, the less thick line represents the rev_edges setting, and the thinnest line represents the no_change setting. The broken line represents that training is done in a reversed manner.
### Link Prediction
Link prediction is the task of predicting whether an edge exists between two nodes. We experimented with the task of link prediction based on the temporal aspect of the cases. We used the complete dataset of 24,907 cases from the early 1800s to 2021, divided the nodes into train and test, and ran the above experiments to observe how well link prediction works in the given settings. We plot the ROC curves of the results in Figure 2.
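A hedged sketch of how a candidate link can be scored and evaluated: we assume `z` is the node-embedding matrix produced by a trained GNN encoder, with held-out positive edges and sampled negative edges; the random tensors below are stand-ins so the snippet runs on its own.

```python
import torch
from sklearn.metrics import roc_auc_score

def edge_scores(z: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
    # Score a candidate edge (u, v) by the dot product of the two node embeddings
    return (z[edge_index[0]] * z[edge_index[1]]).sum(dim=-1)

z = torch.randn(100, 64)                    # stand-in for the trained GNN node embeddings
pos_edges = torch.randint(0, 100, (2, 50))  # held-out citation edges (positives)
neg_edges = torch.randint(0, 100, (2, 50))  # sampled non-edges (negatives)

scores = torch.cat([edge_scores(z, pos_edges), edge_scores(z, neg_edges)])
labels = torch.cat([torch.ones(pos_edges.size(1)), torch.zeros(neg_edges.size(1))])
print(roc_auc_score(labels.numpy(), scores.detach().numpy()))
```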
### Redaction
We redacted gender-biased terms, in our case pronouns (he, him, his, her, she) and name tokens, with _[gender]_ and _[REDACTED]_ tokens respectively, and fine-tuned the
Figure 2: Link Prediction Task: Results of temporal training on different embeddings. The thickest line represents an undirected edge, the less thick line represents the rev_edges setting, and the thinnest line represents the no_change setting. The broken line represents that training is done in a reversed manner.
XLNet and graph models again on the above settings for link prediction. We used Indian legal NER5 to extract named entities.
Footnote 5: [https://huggingface.co/opennyaiorg/en_legal_ner_trf](https://huggingface.co/opennyaiorg/en_legal_ner_trf)
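A minimal sketch of the redaction step; the regular expression and helper below are illustrative, not our exact implementation, and the `names` list is assumed to come from the legal NER model above:

```python
import re

PRONOUN_PATTERN = re.compile(r"\b(he|him|his|she|her)\b", flags=re.IGNORECASE)

def redact(text: str, names: list) -> str:
    # Replace gendered pronouns with [gender] and NER-extracted names with [REDACTED]
    text = PRONOUN_PATTERN.sub("[gender]", text)
    for name in names:
        text = text.replace(name, "[REDACTED]")
    return text

print(redact("She argued that Ramesh was not liable.", ["Ramesh"]))
# -> "[gender] argued that [REDACTED] was not liable."
```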
## 4 Results and Discussion
### Simple training
In Table 2, we can observe that the differences in the results are mainly due to the different features, i.e., the text embeddings used. Pretrained embeddings were the best-performing features for the model, followed by vanilla XLNet embeddings,
Figure 3: ROC curves for link prediction task using pre-trained embeddings. Line thickness increases from the earliest to the latest year in the dataset.
hierarchical and random. The type of edges between nodes has minimal effect. Moreover, adding and removing the time nodes and acts did not significantly change the model's performance. We can only observe a minute change in the model's performance when we add time nodes.
### Temporal Training
The trend for all the graphs except for the model with pre-trained embeddings is the same. Figure 1 shows that in the forward training direction, as we keep training the graph on consecutive years, the F1 score of the model keeps increasing as the data increases, turning a negative slope into a non-negative one. Training in the reverse direction gives us a negative slope over the years, indicating that when only future data is used to train the model, it performs poorly on past verdicts. The model with pre-trained features gets roughly the same F1 score for both classes on the test data; see Figure 4. A comparison is also made based on the intersection of test samples in the simple and the created temporal datasets. However, when we only take the test samples from the temporal dataset, we can see that the model has an F1 score in the range of around 80%-85% across all years, with a drop after 2011. This observation also includes samples used to pretrain XLNet to get text embeddings. To confirm that the model is not learning irrelevant patterns, as some instances were used to train XLNet as mentioned, and that the F1 score is not driven by the pre-trained embeddings alone, we shuffled 50% of the labels [10] and trained the model. In every case, we got an F1 score lower than 50%, confirming that the close F1 scores of the model in the temporal and simple settings are not random, and the model is learning the patterns. For the vanilla embeddings, we can see in Figure 5 that they perform better than simple training in some scenarios, such as when time nodes are not added to the model. After 1981, and from 2001-2006 onwards, we can see the kink changing its direction in the graphs; this is observed in the models with all features except pre-trained embeddings.
### Link Prediction
For the link prediction task, in forward training, we can see in Figure 2 that when the edges are undirected, the model can predict them more effectively, followed by the rev_edges and no_change settings. The embeddings have very little effect in the forward training direction.
Figure 3 shows ROC curves broken down to years for each pre-trained embedding and each setting. More are present in the appendix. The area under the curve (AUC) for pre-trained is the most, followed by hierarchical, vanilla and random.
Adding time and act nodes decreases the AUC, which means the model is not very effective at predicting an edge between a case and time and/or act nodes.
Reverse training exhibited the same trends, except that the AUC was lower than in forward training.
Figure 4: F1 scores of the binary labels for the GNN model with pre-trained features, calculated on different numbers of samples distributed by year as in Table 1. Dotted lines represent label 0 and continuous lines represent label 1. Lines s_0 and s_1 show the performance of labels 0 and 1 under simple training, while lines t_0 and t_1 show their performance under temporal training. The X-axis shows the number of training samples according to Table 1 and the Y-axis shows the F1 score.
### Redaction
Training our model with redacted tokens gives almost the same output for all models as the unredacted dataset, hinting at little to no bias in the judgments.
## 5 Conclusion
From this study, we conclude that time nodes have a negative impact on model performance: as Figures 4, 5, 6 and 7 show, the difference between the F1 scores of the two classes is larger with time nodes. The opposite holds for acts; with act nodes, the difference between the F1 scores of the two classes shrinks, implying that the best setting keeps act nodes and drops time nodes. Secondly, embeddings have the least significant impact on link prediction, which mainly predicts the presence of an edge between two nodes. Predicting an edge's direction is challenging for the model, as undirected edges are predicted better. Lastly, we found no significant change in performance across models and classes with redacted tokens.
## 6 Limitations
Explainability of judgments was out of the scope of this paper, and we limit ourselves to existing graph models. We mainly focus on a deeper analysis of the LJP task using graph neural networks, remaining on par with the best-performing model proposed in [21].
## 7 Ethics
Our study aims to advance research and automation in the legal domain, focusing on the Indian legal system. We are committed to making the extended dataset and resources we use publicly available, ensuring accessibility for all. Given the substantial number of pending cases in lower courts, our efforts are directed towards enhancing the legal system, which stands to benefit millions of people. Our work aligns with previous efforts in legal NLP, such as creating legal corpora and predicting legal judgments.
Nevertheless, we acknowledge the potential risks associated with developing AI systems based on legal corpora, which could adversely affect individuals and society. To address this concern, we took proactive measures to identify and mitigate biases in the corpus. One of these measures includes anonymizing entities, such as names and gender, in the dataset. This is particularly important as previous research [21] and [14] have shown biases in legal datasets.
Additionally, we explored the task of Link Prediction in the Indian legal domain, which holds significance for developing recommendation systems in the
legal field. This area is relatively novel in NLP research and remains nascent in India. Consequently, further research and investigations are essential, especially concerning potential biases and societal impacts.
## 8 Acknowledgements
We would like to acknowledge iHub Anubhuti IIIT Delhi for funding our research which enabled us to get the required resources.
|
2303.14391 | Multi-pooling 3D Convolutional Neural Network for fMRI Classification of
Visual Brain States | Neural decoding of visual object classification via functional magnetic
resonance imaging (fMRI) data is challenging and is vital to understand
underlying brain mechanisms. This paper proposed a multi-pooling 3D
convolutional neural network (MP3DCNN) to improve fMRI classification accuracy.
MP3DCNN is mainly composed of a three-layer 3DCNN, where the first and second
layers of 3D convolutions each have a branch of pooling connection. The results
showed that this model can improve the classification accuracy for categorical
(face vs. object), face sub-categorical (male face vs. female face), and object
sub-categorical (natural object vs. artificial object) classifications from
1.684% to 14.918% over the previous study in decoding brain mechanisms. | Zhen Zhang, Masaki Takeda, Makoto Iwata | 2023-03-25T07:54:51Z | http://arxiv.org/abs/2303.14391v1 | # Multi-pooling 3D Convolutional Neural Network for fMRI Classification of Visual Brain States
###### Abstract
Neural decoding of visual object classification via functional magnetic resonance imaging (fMRI) data is challenging and is vital to understand underlying brain mechanisms. This paper proposed a multi-pooling 3D convolutional neural network (MP3DCNN) to improve fMRI classification accuracy. MP3DCNN is mainly composed of a three-layer 3DCNN, where the first and second layers of 3D convolutions each have a branch of pooling connection. The results showed that this model can improve the classification accuracy for categorical (face vs. object), face sub-categorical (male face vs. female face), and object sub-categorical (natural object vs. artificial object) classifications from 1.684% to 14.918% over the previous study [1] in decoding brain mechanisms.
fMRI classification, visual brain states, multi-pooling 3D convolutional neural network (MP3DCNN)
## I Introduction
fMRI is a non-invasive and reliable technique that measures the small changes in blood flow caused by brain activity. fMRI data have relatively high spatial resolution and availability, thus providing a way to reveal the neural activity of brain regions under visual stimuli. Recent years have witnessed the application of convolutional neural networks (CNNs) to brain decoding, such as [1][2][3][4].
The previous study [1] performed categorical (face vs. object), face sub-categorical (male face vs. female face), and object sub-categorical (natural object vs. artificial object) classifications via a classic three-layer 3DCNN, revealing that the human visual system recognizes objects following the principle of going from categories into sub-categories.
However, the classification model used in [1] does not achieve high accuracy even with 9-fold fMRI data averaging, especially for the sub-categorical classification tasks. Therefore, a novel multi-pooling 3D convolutional neural network (MP3DCNN) is proposed in this paper, which is expected to reach a higher accuracy than the previous model [1] and play a valuable role in decoding brain mechanisms.
## II Materials
### _fMRI data collection and preprocessing_
In our study, the fMRI dataset is provided by [1], where the subject clicked the button corresponding to a random visual stimulus (an image of a male face, female face, natural object, or artificial object) within 0.5s. During this period, a Siemens 3T MRI scanner recorded the subject's brain states as T1-weighted 3D fMRI volumes. Through SPM12 [5], the fMRI volumes were realigned, co-registered, normalized to the standard Montreal Neurological Institute (MNI) template, and resampled to 2-mm isotropic voxels.
### _Available fMRI data_
There are 17306 fMRI volumes of size (\(79\times 95\times 79\)) from 50 subjects available, including 4453, 4399, 4214, and 4240 volumes corresponding to the visual stimuli of male face, female face, natural object, and artificial object, respectively. To suppress the background noise and the irrelevant neural activities, the fMRI dataset of each subject was multi-fold averaged as an option to improve the data quality (for example 9-fold fMRI data averaging).
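A minimal sketch of the multi-fold averaging step is given below; the exact grouping of same-class volumes is an assumption, since the text does not specify it, and the function name is ours.

```python
import torch

def multi_fold_average(volumes, folds=9):
    """Average non-overlapping groups of `folds` same-class fMRI volumes.

    volumes: tensor of shape (num_volumes, 79, 95, 79) sharing one
    stimulus label; trailing volumes that do not fill a complete group
    are dropped under this assumed scheme.
    """
    num = volumes.shape[0] // folds * folds
    grouped = volumes[:num].reshape(-1, folds, *volumes.shape[1:])
    return grouped.mean(dim=1)  # one denoised volume per group
```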
## III Methods
The proposed MP3DCNN includes feature extraction, feature combination, and classifier, as shown in Fig.1.
### _Feature extraction_
The feature extraction has a mainchain and two branches. The mainchain is a three-layer 3D CNN, where each convolution is followed by batch normalization, an average 3D pooling layer, and a rectified linear unit (ReLU). The first and second 3D convolutions each have a branch connection, in which an average 3D pooling layer and a linear layer further generalize and connect the extracted features. The relevant hyperparameters are as follows:
**3D Convolution.** A 3D convolution is parameterized by (\(c_{in}\), \(c_{out}\), \(k_{conv}\)), where \(c_{in}\) is the number of input feature maps (1 for the input fMRI volume), \(c_{out}\) is the number of convolutional filters, and \(k_{conv}\) is the kernel size. Batch normalization is applied along the \(c\) dimension, where \(c\) equals the \(c_{out}\) of the preceding convolution. The hyperparameters of the three convolutions in the feature extraction are (1,4,7), (4,4,5), and (4,4,3), respectively.
Fig.1: Multi-pooling 3D convolutional neural network (MP3DCNN).
**Average 3D pooling.** Suppose the \(i\)-th 3D convolution outputs a tensor of size \(c_{i}\times h_{i}\times w_{i}\times d_{i}\), where \(c_{i}\) is the number of feature maps, and \(h_{i}\), \(w_{i}\), and \(d_{i}\) are the height, width, and depth of each feature map. An average 3D pooling layer with hyperparameter \(k_{pool}\) performs average pooling on each feature map with a kernel size of \(k_{pool}\), which is set to 2 in the mainchain. In branches 1 and 2, \(k_{pool}\) is set to 3 and 2, respectively.
**Linear layer.** A linear layer with hyperparameters (\(f_{in},f_{out}\)) maps a 1D tensor of size \(f_{in}\) to one of size \(f_{out}\). The output of the mainchain is flattened into a 1D tensor of size \(1\times 1764\). In branches 1 and 2, the hyperparameters of the linear layers are (\(8064,3528\)) and (\(2560,1764\)), which output double and the same number of features as the mainchain, respectively.
### _Feature combination and Classifier_
In the feature extraction unit, branch-1, branch-2, and the mainchain output the first-, second-, and third-level feature representations of the fMRI data, respectively. Through feature combination, the model can make a more informed decision using multi-level features for fMRI classification. Moreover, the branch connections add extra backpropagation paths for the first and second convolutions, aiming to improve the model's learning efficiency.
The classifier is composed of two linear layers with hyperparameters (5292,128) and (128,1). Dropout with probability 0.05 is applied to prevent overfitting during training, and a sigmoid activation is used for the binary classification tasks.
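A PyTorch sketch of the architecture described above follows. It assumes stride-1, unpadded convolutions and that each branch taps the mainchain after its pooling and ReLU stage; under these assumptions a \(79\times 95\times 79\) input reproduces the stated feature sizes (8064, 2560, and 1764), but the original implementation may differ in such details.

```python
import torch
import torch.nn as nn

class MP3DCNN(nn.Module):
    """Sketch of the multi-pooling 3D CNN described above."""

    def __init__(self):
        super().__init__()
        # Mainchain: three blocks of conv -> batch norm -> avg-pool(2) -> ReLU.
        self.block1 = self._block(1, 4, 7)
        self.block2 = self._block(4, 4, 5)
        self.block3 = self._block(4, 4, 3)
        # Branch pooling and linear layers (feature sizes from the text).
        self.branch1_pool = nn.AvgPool3d(3)
        self.branch1_fc = nn.Linear(8064, 3528)  # double the mainchain features
        self.branch2_pool = nn.AvgPool3d(2)
        self.branch2_fc = nn.Linear(2560, 1764)  # same as the mainchain features
        # Classifier on the combined features; the dropout placement is assumed.
        self.classifier = nn.Sequential(
            nn.Linear(5292, 128),
            nn.Dropout(p=0.05),
            nn.Linear(128, 1),
            nn.Sigmoid(),
        )

    @staticmethod
    def _block(c_in, c_out, k):
        return nn.Sequential(
            nn.Conv3d(c_in, c_out, kernel_size=k),
            nn.BatchNorm3d(c_out),
            nn.AvgPool3d(2),
            nn.ReLU(),
        )

    def forward(self, x):                    # x: (batch, 1, 79, 95, 79)
        h1 = self.block1(x)                  # (batch, 4, 36, 44, 36)
        h2 = self.block2(h1)                 # (batch, 4, 16, 20, 16)
        h3 = self.block3(h2).flatten(1)      # (batch, 1764)
        b1 = self.branch1_fc(self.branch1_pool(h1).flatten(1))  # (batch, 3528)
        b2 = self.branch2_fc(self.branch2_pool(h2).flatten(1))  # (batch, 1764)
        return self.classifier(torch.cat([b1, b2, h3], dim=1))  # (batch, 1)
```

Concatenating the 3528-, 1764-, and 1764-dimensional outputs yields the 5292 features consumed by the classifier.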
## IV Experiments And Results
The proposed model was implemented in PyTorch [6] on a Linux platform with an NVIDIA GeForce RTX 3060 GPU. We trained and tested the proposed and previous models using unaveraged and 9-fold averaged fMRI volumes. To evaluate model performance independently, the fMRI volumes of 5 of the 50 subjects were held out as the test dataset. Following [1], we balanced the test dataset, retaining 1468 (734 faces vs. 734 objects), 772 (386 male faces vs. 386 female faces), and 718 (359 natural objects vs. 359 artificial objects) fMRI volumes for the categorical, face sub-categorical, and object sub-categorical classifications, respectively. The fMRI volumes of the remaining 45 subjects were used as the training dataset.
In the model training process, we performed 9-fold cross-validation with 25 training iterations, where the batch size was 64, the learning rate was 0.00001, and the loss function was binary cross entropy (BCE) [7]. Within the 25 iterations, the model parameters with the highest accuracy score on each validation dataset were used for model testing. Finally, we used a majority voting scheme to ensemble the nine results in determining the classification accuracy.
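The majority-voting ensemble over the nine fold models can be sketched as follows; the interface (a list of per-fold sigmoid outputs) is an assumption.

```python
import torch

def majority_vote(fold_probs):
    """Ensemble binary predictions from the nine fold models.

    fold_probs: list of tensors of shape (num_test,), each holding one
    fold model's sigmoid outputs on the test set.
    """
    votes = torch.stack([(p > 0.5).long() for p in fold_probs])  # (folds, num_test)
    return (votes.sum(dim=0) * 2 > votes.shape[0]).long()        # 1 iff most folds vote 1
```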
Table I compares the voting accuracy of the proposed and previous models [1] on the categorical, face sub-categorical, and object sub-categorical classifications. The proposed model improved the voting accuracy by 14.918%, 3.368%, and 2.646% when using unaveraged fMRI volumes, and by 4.088%, 1.684%, and 13.649% when using 9-fold averaged fMRI volumes, over the original model on the three classifications.
## V Conclusion And Discussion
A novel fMRI classification model called MP3DCNN is proposed in this study. Its mainchain is a three-layer 3DCNN, where the first and second layers of convolution each have a branch connection. In the mainchain and branches, the extracted fMRI features are pooled multiple times. The results show that the proposed model reaches higher classification accuracies than the previous study [1] for the categorical, face sub-categorical, and object sub-categorical classifications.
The main reasons are speculated as follows: 1) the multiple 3D average pooling layers can counteract local feature redundancy in the feature extraction process and pass as much global information to the classifier as possible; 2) through the branch connections, the model's decision can depend on the merged features of the three 3D convolutions, thereby improving the model's robustness.
In future research, we look forward to using grid search [8] to optimize the model hyperparameters and to exploring visual explanations based on the obtained classification results.
## Acknowledgments
The authors would like to express their sincere gratitude for the invaluable help from Mr. Kosuke Miyoshi, and Dr. Shinichi Yoshida.
|
2301.10956 | Graph Neural Networks can Recover the Hidden Features Solely from the
Graph Structure | Graph Neural Networks (GNNs) are popular models for graph learning problems.
GNNs show strong empirical performance in many practical tasks. However, the
theoretical properties have not been completely elucidated. In this paper, we
investigate whether GNNs can exploit the graph structure from the perspective
of the expressive power of GNNs. In our analysis, we consider graph generation
processes that are controlled by hidden (or latent) node features, which
contain all information about the graph structure. A typical example of this
framework is kNN graphs constructed from the hidden features. In our main
results, we show that GNNs can recover the hidden node features from the input
graph alone, even when all node features, including the hidden features
themselves and any indirect hints, are unavailable. GNNs can further use the
recovered node features for downstream tasks. These results show that GNNs can
fully exploit the graph structure by themselves, and in effect, GNNs can use
both the hidden and explicit node features for downstream tasks. In the
experiments, we confirm the validity of our results by showing that GNNs can
accurately recover the hidden features using a GNN architecture built based on
our theoretical analysis. | Ryoma Sato | 2023-01-26T06:28:41Z | http://arxiv.org/abs/2301.10956v4 | # Graph Neural Networks can Recover the Hidden Features Solely from the Graph Structure
###### Abstract
Graph Neural Networks (GNNs) are popular models for graph learning problems. GNNs show strong empirical performance in many practical tasks. However, the theoretical properties have not been completely elucidated. In this paper, we investigate whether GNNs can exploit the graph structure from the perspective of the expressive power of GNNs. In our analysis, we consider graph generation processes that are controlled by hidden node features, which contain all information about the graph structure. A typical example of this framework is kNN graphs constructed from the hidden features. In our main results, we show that GNNs can recover the hidden node features from the input graph alone, even when all node features, including the hidden features themselves and any indirect hints, are unavailable. GNNs can further use the recovered node features for downstream tasks. These results show that GNNs can fully exploit the graph structure by themselves, and in effect, GNNs can use both the hidden and explicit node features for downstream tasks. In the experiments, we confirm the validity of our results by showing that GNNs can accurately recover the hidden features using a GNN architecture built based on our theoretical analysis.
Graph Neural Networks, Graph Structure
## 1 Introduction
Graph Neural Networks (GNNs) (Gori et al., 2005; Scarselli et al., 2009) are popular machine learning models for processing graph data. GNNs take a graph with node features as input and output embeddings of nodes. At each node, GNNs send the node features to neighboring nodes, aggregate the received features, and output the new node features (Gilmer et al., 2017). In this way, GNNs produce valuable node embeddings that take neighboring nodes into account. GNNs show strong empirical performance in machine learning and data mining tasks (Zhang and Chen, 2018; Fan et al., 2019; He et al., 2020; Han et al., 2022).
Roughly speaking, GNNs smooth out the node features on the input graph by recursively mixing the node features of neighboring nodes, and GNNs thereby transform noisy features into clean ones. This smoothing effect has been observed empirically (Chen et al., 2020; NT and Maehara, 2019) and shown theoretically (Li et al., 2018; Oono and Suzuki, 2020). There are several GNN architectures that are inspired by the smoothing process (Klicpera et al., 2019; NT and Maehara, 2019; Wu et al., 2019). It has also been pointed out that stacking too many layers harms the performance of GNNs due to the over-smoothing effect (Li et al., 2018; Chen et al., 2020), which is caused by too much mixing of the node features.
In this perspective, node features are the primary actors in GNNs, and graphs are secondary. If node features are uninformative at all, GNNs should fail to obtain meaningful node embeddings no matter how they mix node features. This is in contrast to the opposite scenario: Even if the graphs are uninformative at all, if the node features are informative for downstream tasks, GNNs can obtain meaningful node embeddings just by ignoring edges or not mixing node features at all. Therefore, node features are the first requirement of GNNs, and the graph only provides some boost to the quality of node features (NT and Maehara, 2019). It indicates that GNNs cannot utilize the graph information without the aid of good node features.
The central research question of this paper is as follows:
_Can GNNs utilize the graph information_
_without the aid of node features?_
We positively answer this question through our theoretical analysis. We show that GNNs can recover the hidden node features that control the generation of the graph structure even without the help of informative node features. The recovered features contain all the information of the graph structure. The recovered node features can be further used for downstream tasks. These results show that GNNs can essentially use both given node features and graph-based features extracted from the graph structure. Our theoretical results provide a different perspective from the existing
beliefs (Wang et al., 2020; NT & Maehara, 2019) based on empirical observations that GNNs only mix and smooth out node features. In the experiments, we show that existing GNN architectures do not necessarily extract the hidden node features well, and special architectures are required to learn the recovery in empirical situations.
The contributions of this paper are summarized as follows:
* We establish the theory of the feature recovery problem by GNNs for the first time. Our analysis provides a new perspective on the expressive power of GNNs.
* We prove that GNNs can recover the hidden features solely from the graph structure (Theorem 4.4). These results show that GNNs have an inherent ability to extract information from the input graph.
* We validate the theoretical results in the experiments by showing that GNNs can accurately recover the hidden features. We also show that existing GNN architectures are mediocre in this task. These results highlight the importance of inductive biases for GNNs.
## 2 Related Work
### Graph Neural Networks and Its Theory
Graph Neural Networks (GNNs) (Gori et al., 2005; Scarselli et al., 2009) are now de facto standard models for graph learning problems (Kipf & Welling, 2017; Velickovic et al., 2018; Zhang & Chen, 2018). There are many applications of GNNs, including bioinformatics (Li et al., 2021), physics (Cranmer et al., 2020; Pfaff et al., 2021), recommender systems (Fan et al., 2019; He et al., 2020), and transportation (Wang et al., 2020). There are several formulations of GNNs, including spectral (Defferrard et al., 2016), spatial (Gilmer et al., 2017), and equivariant (Maron et al., 2019) ones. We use the message-passing formulation introduced by Gilmer et al. (2017) in this paper.
The theory of GNNs has been studied extensively in the literature, including the generalization of GNNs (Scarselli et al., 2018; Garg et al., 2020; Xu et al., 2020) and computational complexity (Hamilton et al., 2017; Chen et al., 2018; Zou et al., 2019; Sato et al., 2022). The most relevant topic to this paper is the expressive power of GNNs, which we review in the following.
**Expressive Power** (or Representation Power) means what kind of functional classes GNNs can realize. Originally, Morris et al. (2019) and Xu et al. (2019) showed that message-passing GNNs are at most as powerful as the 1-WL test, and they proposed GNNs that are as powerful as the 1-WL and \(k\)-(set)WL tests. Sato et al. (2019, 2021) and Loukas (2020) also showed that message-passing GNNs are as powerful as a computational model of distributed local algorithms, and they proposed GNNs that are as powerful as port-numbering and randomized local algorithms. Loukas (2020) showed that GNNs are Turing-complete under certain conditions (i.e., with unique node ids and infinitely increasing depths). There are various efforts to improve the expressive power of GNNs by non-message-passing architectures (Maron et al., 2019; Ma et al., 2019; Murphy et al., 2019). We refer the readers to survey papers (Sato, 2020; Jegelka, 2022) for more details on the expressive power of GNNs.
The main difference between our analysis and existing ones is that the existing analyses focus on combinatorial characteristics of the expressive power of GNNs, e.g., the WL test, which are not necessarily aligned with the interests of realistic machine learning applications. By contrast, we consider the continuous task of recovering the hidden features from the input graph, which is an important topic in machine learning in its own right (Tenenbaum et al., 2000; Belkin & Niyogi, 2003; Sussman et al., 2014; Sato, 2022). To the best of our knowledge, this is the first paper that reveals the expressive power of GNNs in the context of feature recovery. Furthermore, the existing analysis of expressive power does not take into account the complexity of the models. The existing analyses show that GNNs can solve certain problems, but they may be too complex to be learned by GNNs. By contrast, we show that the feature recovery problem can be solved with low complexity.
### Feature Recovery
Estimation of hidden variables that control the generation process of observed data has been extensively studied in the machine learning literature (Tenenbaum et al., 2000; Sussman et al., 2014; Kingma & Welling, 2014). These methods are sometimes used for dimensionality reduction, and the estimated features are fed to downstream models. In this paper, we consider the estimation of hidden embeddings from a graph observation (Alamgir & von Luxburg, 2012; von Luxburg & Alamgir, 2013; Terada & von Luxburg, 2014; Hashimoto et al., 2015). The critical difference between our analysis and the existing ones is that we investigate whether GNNs can represent a recovery algorithm, while the existing works propose general (non-GNN) algorithms that recover features. To the best of our knowledge, we are the first to establish the theory of feature recovery based on GNNs.
Many empirical works propose feature learning methods for GNNs (Hamilton et al., 2017; Velickovic et al., 2019; You et al., 2020; Hu et al., 2020; Qiu et al., 2020). The differences between these papers and ours are twofold. First, these methods are not proven to converge to the true features, while we consider a feature learning method that converges to the true features. Second, the existing methods rely heavily on the input node features while we do not assume any input node features. The latter point is important
because how GNNs exploit the input graph structure is a central topic in the GNN literature, and sometimes GNNs are shown to NOT benefit from the graph structure (Errica et al., 2020). By contrast, our results show that GNNs can extract meaningful information from the input graph from a different perspective than existing work.
## 3 Background and Problem Formulation
In this paper, we assume that each node \(v\) has hidden features \(\mathbf{z}_{v}\), and the graph is generated by connecting nodes with similar hidden features. For example, (i) \(\mathbf{z}_{v}\in\mathbb{R}^{d}\) represents the preference of person \(v\) in social networks, (ii) \(\mathbf{z}_{v}\) represents the topic of paper \(v\) in citation networks, and (iii) \(\mathbf{z}_{v}\) represents the geographic location of point \(v\) in spatial networks. The critical assumption of our problem setting is that the features \(\{\mathbf{z}_{v}\mid v\in V\}\), such as the true preference of people and the true topic of papers, are not observed, but only the resulting graph \(G\) is observed. Somewhat surprisingly, we will show that GNNs that take the vanilla graph \(G\) with only simple synthetic node features such as degree features \(d_{v}\) and graph size \(n=|V|\) can consistently estimate the hidden features \(\{\mathbf{z}_{v}\mid v\in V\}\) (Fig. 1). In the following, we describe the assumptions on data and models in detail.
### Assumptions
In this paper, we deal with directed graphs. Directed graphs are general, and undirected graphs can be converted to directed graphs by duplicating every edge in both directions. We assume that there is an arc from \(u\) to \(v\) if and only if \(\|\mathbf{z}_{v}-\mathbf{z}_{u}\|<s(\mathbf{z}_{v})\) for a threshold function \(s\colon\mathbb{R}^{d}\to\mathbb{R}\), i.e., nodes with similar hidden features are connected. It is also assumed that the hidden features \(\{\mathbf{z}_{v}\}\) are sampled from an unknown distribution \(p(\mathbf{z})\) in an i.i.d. manner. As we consider the consistency of estimators or the behavior of estimators in a limit of infinite samples (nodes), we assume that a node \(v_{i}\) and its features \(\mathbf{z}_{v_{i}}\sim p(\mathbf{z})\) are generated one by one, and we consider a series of graphs \(G_{1}=(V_{1}=\{v_{1}\},E_{1}),G_{2}=(V_{2}=\{v_{1},v_{2}\},E_{2}),\ldots,G_{n}= (V_{n},E_{n}),\ldots\) with an increasing number of nodes. Formally, the data generation process and the assumptions are summarized as follows.
**Assumption 1** (Domain): The domain \(\mathcal{Z}\) of the hidden features is a convex compact domain in \(\mathbb{R}^{d}\) with smooth boundary \(\partial\mathcal{Z}\).
**Assumption 2** (Graph Generation): For each \(i\in\mathbb{Z}_{+}\), \(\mathbf{z}_{v_{i}}\) is sampled from \(p(\mathbf{z})\) in an i.i.d. manner. There is a directed edge from \(v\) to \(u\) in \(G_{n}\) if and only if \(\|\mathbf{z}_{v}-\mathbf{z}_{u}\|<s_{n}(\mathbf{z}_{v})\).
**Assumption 3** (Density): The density \(p(\mathbf{z})\) is positive and differentiable with bounded \(\nabla\log(p(\mathbf{z}))\) on \(\mathcal{Z}\).
**Assumption 4** (Threshold Function): There exists a deterministic continuous function \(\bar{s}(x)>0\) on \(\bar{\mathcal{Z}}\) such that \(g_{n}^{-1}s_{n}(x)\) converges uniformly to \(\bar{s}(x)\) for some \(g_{n}\in\mathbb{R}\) with \(g_{n}\xrightarrow{n\to\infty}0\) and \(g_{n}n^{\frac{1}{d+2}}\log^{-\frac{1}{d+2}}n\xrightarrow{n\to\infty}\infty\) almost surely.
**Assumption 5** (Stationary Distribution): \(n\pi_{G_{n}}(v)\) is uniformly equicontinuous almost surely, where \(\pi_{G_{n}}(v)\) is the stationary distribution of random walks on \(G_{n}\).
Note that these assumptions are common to (Hashimoto et al., 2015). It should be noted that the threshold functions \(s_{n}\) can be stochastic and/or dependent on the data as long as Assumption 4 holds. For example, \(k\)-NN graphs can be realized in this framework by setting \(s(\mathbf{z}_{v})\) to be the distance to the \(k\)-th nearest neighbor from \(\mathbf{z}_{v}\).
**Remark (One by One Generation).** The assumption of adding nodes one at a time may seem tricky. New users
Figure 1: **Illustrations of the Problem Setting. (a) Nodes have hidden features from which the input graph is generated. (b) The input to GNNs is a vanilla graph without any additional features. The coordinates of nodes in this panel are computed by the spring layout for visualization; these coordinates are not fed to GNNs either. (c) GNNs try to recover the hidden features.**
are indeed inserted into social networks one by one in some scenarios, but some other graphs do not necessarily follow this process. This assumption is introduced for technical convenience to consider the limit of \(n\rightarrow\infty\) and to prove the consistency. In practice, the generation process of datasets does not need to follow this assumption. We use a single fixed graph in the experiments. GNNs succeed in recovering the hidden features only if the graph is sufficiently large.
### Graph Neural Networks
We consider message-passing GNNs (Gilmer et al., 2017) in this paper. Formally, \(L\)-layer GNNs can be formulated as follows. Let \(\mathbf{X}=[\mathbf{x}_{1},\ldots,\mathbf{x}_{n}]^{\top}\in\mathbb{R}^{n\times d_{\text{in}}}\) be the explicit (i.e., given, observed) node features, and
\[\mathbf{h}_{v}^{(0)} =\mathbf{x}_{v} (\forall v\in V),\] \[\mathbf{a}_{v}^{(l)} =f_{\text{agg}}^{(l)}(\llbracket\mathbf{h}_{u}^{(l-1)}\mid u\in \mathcal{N}^{-}(v)\rrbracket) (\forall l\in[L],v\in V),\] \[\mathbf{h}_{v}^{(l)} =f_{\text{upd}}^{(l)}(\mathbf{h}_{v}^{(l-1)},\mathbf{a}_{v}^{(l)}) (\forall l\in[L],v\in V),\]
where \(\llbracket\cdot\rrbracket\) denotes a multiset, and \(\mathcal{N}^{-}(v)\) is the set of the neighbors with outgoing edges to node \(v\). We call \(f_{\text{agg}}^{(l)}\) an aggregation function and \(f_{\text{upd}}^{(l)}\) an update function. Let \(\theta=[L,f_{\text{agg}}^{(1)},f_{\text{upd}}^{(1)},\ldots,f_{\text{agg}}^{(L)},f_{\text{upd}}^{(L)}]\) denote a list of all aggregation and update functions, i.e., \(\theta\) specifies a model. Let \(f(v,G,\mathbf{X};\theta)=\mathbf{h}_{v}^{(L)}\) be the output of the GNN \(\theta\) for node \(v\) and input graph \(G\). For notational convenience, \(L_{\theta}\), \(f_{\text{agg},\theta}^{(l)}\), and \(f_{\text{upd},\theta}^{(l)}\) denote the number of layers, the \(l\)-th aggregation function, and the \(l\)-th update function of model \(\theta\), respectively.
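The formulation above can be transcribed directly; the sketch below is illustrative, with arbitrary callables standing in for the aggregation and update functions.

```python
import torch

def gnn_forward(X, in_neighbors, layers):
    """Run an L-layer message-passing GNN as formulated above.

    X: (n, d) tensor of explicit node features; in_neighbors[v] lists the
    nodes u with an outgoing edge to v; layers is a list of (f_agg, f_upd)
    pairs, where f_agg maps a list of neighbor features to one vector and
    f_upd maps (h_v, a_v) to the new node state.
    """
    H = X
    for f_agg, f_upd in layers:
        new_H = []
        for v in range(H.shape[0]):
            a_v = f_agg([H[u] for u in in_neighbors[v]])  # aggregate the multiset
            new_H.append(f_upd(H[v], a_v))                # update node v's state
        H = torch.stack(new_H)
    return H  # H[v] equals f(v, G, X; theta)
```

For instance, mean aggregation corresponds to `f_agg = lambda hs: torch.stack(hs).mean(0)` whenever the neighbor list is non-empty.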
Typical applications of GNNs assume that each node has rich explicit features \(\mathbf{x}_{v}\). However, this is not the case in many applications, and only the graph structure \(G\) is available. In such a case, synthetic features that can be computed solely from the input graph, such as degree features and the number of nodes, are used as explicit node features \(\mathbf{x}_{v}\)(Errica et al., 2020; Hamilton et al., 2017; Xu et al., 2019). In this paper, we tackle this general and challenging setting to show how GNNs exploit the graph structure. Specifically, we do not assume any external node features but set
\[\mathbf{x}_{v}=[d_{v},n]^{\top}\in\mathbb{R}^{2}, \tag{1}\]
where \(d_{v}\) is the degree of node \(v\), and \(n=|V|\) is the number of nodes in \(G\). The goal of this paper is to show that GNNs can recover the hidden features \(\mathbf{z}_{v}\) even when the node features are as scarce as these simple synthetic features. In other words, we show that there exists a GNN \(\theta\) that uses the explicit node features \(\mathbf{X}\) defined by Eq. (1)1 and outputs \(f(v,G,\mathbf{X};\theta)\approx\mathbf{z}_{v}\). This result is surprising because GNNs have been considered to simply smooth out the input features along the input graph (Li et al., 2018; NT & Maehara, 2019). Our results show that GNNs can imagine new features \(\mathbf{z}_{v}\) that are not included in the explicit features \(\mathbf{X}\) from scratch.
Footnote 1: Precisely, we will add additional random features as Eq. (3).
**Remark (Expressive Power and Optimization).** We note that the goal of this paper is to show the expressive power of GNNs, i.e., the existence of the parameters \(\theta\) or the model specification that realizes some function, and how to find them from the data, i.e., optimization, is out of the scope of this paper. The separation of the studies of expressive power and optimization is a convention in the literature (Sato et al., 2019; Loukas, 2020; Abboud et al., 2021). This paper is in line with them. In the experiments, we briefly show the empirical results of optimization.
### Why Is Recovery Challenging?
A straightforward approach to estimating the hidden features \(\mathbf{z}_{v}\) is to estimate the distance matrix \(D_{ij}=\|\mathbf{z}_{i}-\mathbf{z}_{j}\|\) first and recover the features by, e.g., multidimensional scaling. As GNNs can simulate the Bellman-Ford algorithm (Xu et al., 2020), the estimation of the distance matrix \(D\) looks easy at first glance. However, this is not the case as the edge lengths are not included in the input, and therefore the shortest path distance on the graph is not a consistent estimator for the distance of nodes in the feature space. Figure 2 illustrates this fact. One hop on the graph in a sparse region is longer in the feature space than that in a dense region. A hop count does not necessarily reflect the distance in the feature space. One might think that the density around a node could be estimated e.g., by the degree of the node, but this is not the case. Indeed, as the graph shown in Figure 2 is a \(k\)-NN graph, the degrees of all nodes are the same, and therefore the degree does not provide any information about the density. In general, the density cannot be estimated from a local structure, as von Luxburg
Figure 2: **Illustrations of the Difficulty of Recovery.** The input graph is \(10\)-NN graph of the hidden features. The shortest path distance between points A and B is \(21\) hops, and the shortest path distance between points A and C is \(18\) hops. These distances indicate that point C is closer to point A than point B, but this is not the case in the true feature space. This disagreement is caused by the different scales of edges in sparse and dense regions. The difficulty lies in the fact that these scales are not directly available in the input information.
& Alamgir (2013) noted that "It is impossible to estimate the density in an unweighted \(k\)-NN graph by local quantities alone."
If the edge lengths in the feature space are taken into account, the shortest-path distance on the graph is a consistent estimator for the distance of nodes in the feature space. However, the problem is that the edge length, such as the quantitative intimacy between people in social networks and the distance between the true topics of two papers in citation networks, is not available, and what we observe is only the vanilla graph \(G\) in many applications. The first insight to this challenge is that the threshold function \(s(\mathbf{z})\), which represents a typical scale of edges around \(\mathbf{z}\) can be used as a surrogate value for the edge length. In the following, we first focus on estimating the threshold function \(s(\mathbf{z})\) by GNNs, and then, we show that GNNs can recover the hidden features \(\mathbf{z}_{v}\) by leveraging the threshold function.
## 4 Main Results
We present our main results and their proofs in this section. At a high level, our results are summarized as follows:
* We show in Section 4.1 that GNNs can estimate the threshold function \(s(\mathbf{z})\) with the aid of the metric recovery theory of unweighted graphs (Hashimoto et al., 2015; Alamgir & von Luxburg, 2012).
* We show in Section 4.2 that GNNs can recover the hidden features up to rigid transformation with the aid of the theory of multidimensional scaling (Sibson, 1979; 1978) and random node features (Sato et al., 2021; Abboud et al., 2021).
* We show in Theorems 4.2 and 4.5 that the number of the functions to be learned is finite regardless of the number of nodes, which is important for learning and generalization (Xu et al., 2020).
### Graph Neural Networks can Recover the Threshold Function
First, we show that GNNs can consistently estimate the threshold function \(s\). As we mentioned in the previous section, the density and threshold function cannot be estimated solely from the local structure. As we will show (and as is known in the classical context), they can be estimated by a PageRank-like global quantity of the input graph.
**Theorem 4.1**.: _For any \(s\) and \(g\) that satisfy Assumptions 1-5, there exist \(\theta_{1},\theta_{2},\ldots\) such that with the explicit node features \(\mathbf{X}\) defined by Eq. (1),_
\[\text{Pr}\left[f(v,G_{n},\mathbf{X};\theta_{n})\xrightarrow{n\to\infty}s(\mathbf{z}_ {v})\right]=1,\]
_where the probability is with respect to the draw of samples \(\mathbf{z}_{1},\mathbf{z}_{2},\ldots\)._
Proof Sketch.: We prove this theorem by construction. The key idea is that GNNs can simulate random walks on graphs (Dehmamy et al., 2019). Once the stationary distribution of random walks is estimated, we can recover the scale from it (Hashimoto et al., 2015). The full proof can be found in Appendix A.
This theorem states that GNNs can represent a consistent estimator of \(s(\mathbf{z}_{v})\).
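As a stand-alone illustration of the key ingredient in the construction, the stationary distribution can be estimated by power iteration, where each iteration corresponds to one message-passing round; this assumes every node has at least one outgoing edge and is not the exact layer-wise construction used in the proof.

```python
import torch

def stationary_distribution(A, num_iters=100):
    """Power iteration for the random-walk stationary distribution.

    A: (n, n) dense adjacency matrix with A[u, v] = 1 iff there is an arc
    u -> v; every node is assumed to have at least one outgoing edge.
    """
    P = A.float() / A.float().sum(dim=1, keepdim=True)  # row-stochastic transitions
    pi = torch.full((A.shape[0],), 1.0 / A.shape[0])
    for _ in range(num_iters):
        pi = pi @ P                                     # one random-walk step
    return pi
```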
However, this theorem does not bound the number of layers, and the number of layers may grow infinitely as the number of nodes increases. This means that if the size of the graphs is not bounded, the number of functions to be learned grows infinitely. This is undesirable for learning. The following theorem resolves this issue.
**Theorem 4.2**.: _There exist \(h^{(1)}_{\text{agg}},h^{(1)}_{\text{upd}},h^{(2)}_{\text{agg}},h^{(2)}_{ \text{upd}}\) such that for any \(s\) and \(g\), there exist \(h^{(3)}_{\text{upd}}\) such that Theorem 4.1 holds with_
\[f^{(1)}_{\text{agg},\theta_{n}}=h^{(1)}_{\text{agg}},\quad f^{(1)}_{\text{ upd},\theta_{n}}=h^{(1)}_{\text{upd}}\]
\[f^{(l)}_{\text{agg},\theta_{n}}=h^{(2)}_{\text{agg}},\quad f^{(l)}_{\text{ upd},\theta_{n}}=h^{(2)}_{\text{upd}}\quad(l=2,\ldots,L_{\theta_{n}}-1)\]
\[f^{(L_{\theta_{n}})}_{\text{agg},\theta_{n}}=h^{(2)}_{\text{agg}},\quad f^{ (L_{\theta_{n}})}_{\text{upd},\theta_{n}}=h^{(3)}_{\text{upd}}.\]
Proof Sketch.: From the proof of Theorem 4.1, most of the layers in \(\theta_{i}\) are used for estimating the stationary distribution, which can be realized by a repetition of the same layer.
This theorem shows that the number of functions we need to learn is essentially five. This result indicates that learning the scale function has a good algorithmic alignment (Xu et al., 2020, Definition 3.4). Moreover, these functions are the same regardless of the graph size. Therefore, in theory, one can fit these functions using small graphs and apply the resulting model to large graphs as long as the underlying law of the generation process, namely \(s\) and \(g\), is fixed. Note that the order of the logical quantification matters. As \(h^{(1)}_{\text{agg}},h^{(1)}_{\text{upd}},h^{(2)}_{\text{agg}},h^{(2)}_{\text{upd}}\) are universal and independent of the generation process, they can be learned using other graphs and transferred to other types of graphs. The construction of these layers (i.e., the computation of the stationary distribution) can also be used to introduce inductive biases into GNN architectures.
### Graph Neural Networks can Recover the Hidden Features
As we have estimated the scale function, it seems easy to recover the features by applying the Bellman-Ford algorithm with edge lengths, as we mentioned in Section 3.3, but this does not work well. The first obstacle is that there is a freedom of rigid transformation. As rotating and shifting
the true hidden features does not change the observed graph, we cannot distinguish hidden features that are transformed by rotation solely from the graph. To absorb the degree of freedom, we introduce the following measure of discrepancy of features.
**Definition 4.3**.: We define the distance between two feature matrices \(\mathbf{X},\mathbf{Y}\in\mathbb{R}^{n\times d}\) as
\[d_{G}(\mathbf{X},\mathbf{Y})\stackrel{{\text{def}}}{{=}}\min_{\begin{subarray} {c}\mathbf{P}\in\mathbb{R}^{d\times d}\\ \mathbf{P}^{\top}\mathbf{P}=I_{d}\end{subarray}}\frac{1}{n}\|\mathbf{C}_{n}\mathbf{X}-\mathbf{C}_{ n}\mathbf{Y}\mathbf{P}\|_{F}^{2}, \tag{2}\]
where \(\mathbf{C}_{n}\stackrel{{\text{def}}}{{=}}(\mathbf{I}_{n}-\frac{1}{n} \mathbbm{1}_{n}\mathbbm{1}_{n}^{\top})\in\mathbb{R}^{n\times n}\) is the centering matrix, \(\mathbf{I}_{n}\in\mathbb{R}^{n\times n}\) is the identity matrix, and \(\mathbbm{1}_{n}\in\mathbb{R}^{n}\) is the vector of ones. We say that we recover the hidden features \(\mathbf{X}\) if we obtain features \(\mathbf{Y}\) such that \(d_{G}(\mathbf{X},\mathbf{Y})<\varepsilon\) for sufficiently small \(\varepsilon>0\).
In other words, the distance is the minimum average distance between two features after rigid transformation. This distance is sometimes referred to as the orthogonal Procrustes distance (Hurley and Cattell, 1962; Schonemann, 1966; Sibson, 1978), and can be computed efficiently by SVD (Schonemann, 1966). Note that if one further wants to recover the rigid transformation factor, one can recover it in a semi-supervised manner by the Procrustes analysis.
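A direct transcription of Eq. (2) via the SVD solution of Schonemann (1966) is sketched below.

```python
import torch

def procrustes_distance(X, Y):
    """Orthogonal Procrustes distance d_G of Definition 4.3.

    Centers both (n, d) feature matrices, finds the optimal orthogonal P
    by SVD, and returns the mean squared residual after alignment.
    """
    n = X.shape[0]
    Xc = X - X.mean(dim=0, keepdim=True)    # C_n X
    Yc = Y - Y.mean(dim=0, keepdim=True)    # C_n Y
    U, _, Vt = torch.linalg.svd(Yc.T @ Xc)  # (C_n Y)^T (C_n X) = U S V^T
    P = U @ Vt                              # maximizer of tr(P^T Yc^T Xc)
    return ((Xc - Yc @ P) ** 2).sum() / n
```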
The second obstacle is that GNNs cannot distinguish nodes. A naive solution is to include unique node ids in the node features. However, this leads the number of dimensions of node features to infinity as the number of nodes tends to infinity. This is not desirable for learning and generalization of the size of graphs. Our solution is to randomly select a constant number \(m\) of nodes and assign unique node ids only to the selected nodes2. Specifically, let \(m\in\mathbb{Z}_{+}\) be a constant hyperparameter, and we first select \(m\) nodes \(\mathcal{U}=[u_{1},\ldots,u_{m}]\subset V\) uniformly and randomly and set the input node features \(\mathbf{x}_{v}\in\mathbb{R}^{2+m}\) as
Footnote 2: From a technical point of view, this is a critical difference from existing analyses (Loukas, 2020; Sato et al., 2021; Abboud et al., 2021), which assume unique node ids. Our analysis strikes an excellent trade-off between a small complexity (a constant dimension) and a strong expressive power (precise recovery).
\[\mathbf{x}_{v}^{\text{syn}}=\begin{cases}[d_{v},n,\mathbf{e}_{i}^{\top}]^{\top}&(v=u_{ i})\\ [d_{v},n,\mathbf{0}_{m}^{\top}]^{\top}&(v\not\in\mathcal{U})\end{cases}, \tag{3}\]
where \(\mathbf{e}_{i}\in\mathbb{R}^{m}\) is the \(i\)-th standard basis, and \(\mathbf{0}_{m}\) is the vector of zeros. Importantly, this approach does not increase the number of dimensions even if the number of nodes tends to infinity because \(m\) is a constant with respect to \(n\). We show that we can accurately estimate the distance structure and the hidden features by setting an appropriate number of the selected nodes.
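Constructing the features of Eq. (3) is straightforward; the helper below is illustrative and its names are ours.

```python
import torch

def synthetic_features(degrees, m, generator=None):
    """Build the explicit node features of Eq. (3): [d_v, n, one-hot id].

    degrees: (n,) tensor of node degrees. A uniformly random subset U of
    m nodes receives unique one-hot ids; all other nodes get zeros there.
    """
    n = degrees.shape[0]
    X = torch.zeros(n, 2 + m)
    X[:, 0] = degrees.float()
    X[:, 1] = float(n)
    selected = torch.randperm(n, generator=generator)[:m]  # the set U
    X[selected, 2:] = torch.eye(m)                         # node u_i gets e_i
    return X, selected
```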
**Theorem 4.4**.: _For any \(s\) and \(g\) that satisfy Assumptions 1-5, for any \(\varepsilon,\delta>0\), there exist \(m\) and \(\theta_{1},\theta_{2},\ldots\) such that with the explicit node features \(\mathbf{X}\) defined by Eq. (3),_
\[\text{Pr}\left[\limsup_{n\rightarrow\infty}d_{G}(\hat{\mathbf{Z}}_{\theta_{n}}, \mathbf{Z})<\varepsilon\right]>1-\delta,\]
_where \(\hat{\mathbf{Z}}_{\theta_{n}}=[f(v_{1},G_{n},\mathbf{X};\theta_{n}),\ldots,f(v_{n},G_ {n},\mathbf{X};\theta_{n})]\in\mathbb{R}^{n\times d}\) is the estimated hidden features by GNN \(\theta_{i}\), and \(\mathbf{Z}=[\mathbf{z}_{1},\ldots,\mathbf{z}_{n}]\in\mathbb{R}^{n\times d}\) is the true hidden features. The probability is with respect to the draw of samples \(\mathbf{z}_{1},\mathbf{z}_{2},\ldots\) and the draw of a random selection of \(\mathcal{U}\)._
Proof Sketch.: We prove this theorem by construction. We estimate the threshold function \(s\) by Theorem 4.1 and compute the shortest path distances from each selected node in \(\mathcal{U}\) with the estimated edge lengths. The computation of shortest path distances can be done by GNNs (Xu et al., 2020). After this process, each node has the information of the (approximate) distance matrix among the selected nodes, which consists of \(m^{2}\) dimensions. We then run multidimensional scaling in each node independently and recover the coordinates of the selected nodes. Lastly, the selected nodes announce their coordinates, and the non-selected nodes output the coordinates of the closest nodes in \(\mathcal{U}\). With sufficiently large \(m\), the selected nodes \(\mathcal{U}\) form an \(\varepsilon^{\prime}\)-covering of \(\mathcal{D}\) with high probability, and therefore, the mismatch of the non-selected nodes is negligibly small. The full proof can be found in Appendix B.
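The multidimensional scaling step that each node runs locally in this construction is classical; a minimal version is sketched below.

```python
import torch

def classical_mds(D, d):
    """Classical multidimensional scaling on a distance matrix.

    D: (m, m) matrix of approximate pairwise distances among the selected
    nodes; returns m points in R^d, determined up to rigid transformation.
    """
    m = D.shape[0]
    C = torch.eye(m) - torch.ones(m, m) / m  # centering matrix
    B = -0.5 * C @ (D ** 2) @ C              # double-centered Gram matrix
    evals, evecs = torch.linalg.eigh(B)      # eigenvalues in ascending order
    scale = evals[-d:].clamp(min=0).sqrt()   # top-d nonnegative components
    return evecs[:, -d:] * scale
```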
As in Theorem 4.1, the statement of Theorem 4.4 does not bound the number of layers. However, as in Theorem 4.2, Theorem 4.4 can also be realized with a fixed number of functions.
**Theorem 4.5**.: _For any \(s\) and \(g\), there exist \(h^{(1)}_{\text{agg}}\), \(h^{(1)}_{\text{upd}}\), \(\ldots\), \(h^{(8)}_{\text{agg}}\), \(h^{(8)}_{\text{upd}}\) such that Theorem 4.4 holds with these functions._
Therefore, the number of functions we need to learn is essentially a constant. This fact indicates that learning the hidden features has a good algorithmic alignment (Xu et al., 2020, Definition 3.4). Besides, the components of these functions, i.e., the computation of the stationary distribution, shortest-path distances, and multidimensional scaling, are differentiable almost everywhere. By almost everywhere, we mean that non-differentiable points exist due to the min-operator of the shortest-path algorithm. Strictly speaking, this is no more differentiable than the ReLU function is, but it can still be optimized in an end-to-end manner by backpropagation using automatic differentiation frameworks such as PyTorch.
**Remark (GNNs with Input Node Features).** Many of graph-related tasks provide node features \(\{\mathbf{x}_{v}^{\text{given}}\mid v\in V\}\) as input. Theorem 4.4 shows that GNNs with \(\mathbf{x}_{v}^{\text{syn}}\) as explicit node features can recover \(\mathbf{z}_{v}\), where \(\mathbf{x}_{v}^{\text{syn}}\) is defined by Eq. (3). Thus, if we feed \(\mathbf{x}_{v}=[\mathbf{x}_{v}^{\text{given}\top},\mathbf{x}_{v}^{\text{syn}\top}]^{\top}\) to GNNs as
explicit node features, GNN can implicitly use both of \(\mathbf{x}_{v}^{\text{given}}\) and \(\mathbf{z}_{v}\).
The most straightforward method for node classification is to apply a feed-forward network to each node independently,
\[\hat{y}_{v}^{\text{MLP}}=h_{\theta}(\mathbf{x}_{v}^{\text{given}}). \tag{4}\]
This approach is fairly strong when \(\mathbf{x}_{v}^{\text{given}}\) is rich (NT & Maehara, 2019) but ignores the graph. Our analysis shows that GNNs can classify nodes using both \(\mathbf{x}_{v}^{\text{given}}\) and \(\mathbf{z}_{v}\), i.e.,
\[\hat{y}_{v}^{\text{GNN}}=h_{\theta}(\mathbf{x}_{v}^{\text{given}},\mathbf{z}_{v}). \tag{5}\]
Comparison of Eqs. (4) and (5) highlights a strength of GNNs compared to feed-forward networks.
## 5 Experiments
In the experiments, we will validate the theorems by empirically showing that GNNs can recover hidden features solely from the input graph.
**Datasets.**: We use the following synthetic and real datasets.
**Two-Moon**: is a synthetic dataset with a two-moon shape. We construct a \(k\)-nearest neighbor graph with \(k=\text{floor}(\frac{1}{10}n^{1/2}\log n)\), which satisfies Assumption 4 (see the sketch after this list). As we know the ground-truth generation process and can generate different graphs with the same law of data generation, we use this dataset to validate the theorems and to show generalization ability in an inductive setting.
**Adult**: is a real consensus dataset. We use age and the logarithm of capital gain as the hidden features and construct a \(k\)-nearest neighbor graph, i.e., people with similar ages and incomes become friends.
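The k-NN construction used for both datasets can be sketched as follows, following the edge direction of Assumption 2; the helper is illustrative.

```python
import torch

def knn_graph(Z, k):
    """Directed k-NN graph over hidden features Z.

    Node v gets an arc to each of its k nearest neighbors, so the
    threshold s(z_v) corresponds to the distance to v's k-th neighbor.
    Returns a (2, n * k) edge-index tensor of (source, target) pairs.
    """
    dist = torch.cdist(Z, Z)                    # pairwise Euclidean distances
    dist.fill_diagonal_(float("inf"))           # exclude self-loops
    nbrs = dist.topk(k, largest=False).indices  # (n, k) nearest-neighbor ids
    src = torch.arange(Z.shape[0]).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])
```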
**Settings.**: We use two different problem settings.
**Transductive**: setting uses a single graph. We are given the true hidden features of some training nodes and estimate the hidden features of the other nodes. We use \(70\) percent of the nodes for training and the rest for testing.
Figure 3: **Results for the Transductive Setting.** Overall, the proposed method succeeded in recovering the ground truth hidden features while tSNE on \(\mathbf{X}\) (Eq. (3)) fails, and GINs and GATs are mediocre. (Top Left) The ground truth hidden embeddings. The node ids are numbered based on the x-coordinate and shown in the node colors. These node ids are for visualization purposes only and are NOT shown to GNNs or downstream algorithms. (Top Mid) The input graph constructed from the hidden features. The positions in the visualization are NOT shown to GNNs. (Top Right) tSNE plot of the synthetic node features, i.e., Eq. (3). These results indicate that the node features are not informative for feature recovery, which makes the task challenging. (Bottom Left) The recovered features by the proposed method. They resemble the ground truth not only with respect to the cluster structure but also the x-coordinates (shown in the node colors), the curved moon shapes in the two-moon dataset, and the striped pattern in the Adult dataset. The \(d_{G}\) value (Eq. (2)) is small, which indicates the success of the recovery and validates the theory. (Bottom Mid) The recovered features by GINs. They do not resemble the true hidden features. The \(d_{G}\) value is mediocre. (Bottom Right) The recovered features by GATs. They do not resemble the hidden features, but some clusters are detected (shown in the node colors). The \(d_{G}\) value is mediocre. These results show that existing GNNs can extract some information from the graph structure, but they do not fully recover the hidden features.
**Inductive**: setting uses multiple graphs. In the training phase, we are given two-moon datasets with \(n=1000\) to \(5000\) nodes and their true hidden features. In the test phase, we are given a new two-moon dataset with \(n=10000\) nodes and estimate the hidden features of the test graph. This setting is challenging because (i) we do not know any hidden features of the test graphs, and (ii) models need to generalize to extrapolation in the size of the input graphs.
**Methods.** As we prove the theorems by construction and know the configuration of GNNs that recover the hidden features up to the unknown parameters of the ground-truth data (i.e., the scale \(g_{n}\) and the constant \(c\) that depends on \(p\) and \(\bar{s}\)), we use the model architecture constructed in our proof and model the unknown scaling factor \(g_{n}\) using a \(3\)-layer perceptron with \(128\) hidden neurons that takes \(n\) as input and outputs \(g_{n}\). This model can be regarded as the GNN with the maximum inductive bias for recovering the hidden features. We fix the number of selected nodes to \(m=500\) throughout the experiments.
**Baselines.** We use \(3\)-layer Graph Attention Networks (GATs) (Velickovic et al., 2018) and Graph Isomorphism Networks (GINs) (Xu et al., 2019) as baselines. We feed the same explicit node features as in our method, i.e., Eq. (3), and use the hidden features as the target of the regression.
**Details.** We optimize all the methods with Adam (Kingma & Ba, 2015) with a learning rate of \(0.001\) for \(100\) epochs. The loss function is \(d_{G}(\mathbf{\hat{Z}}_{\theta}[\text{train-mask}],\mathbf{Z}[\text{train-mask}])\), where \(\mathbf{\hat{Z}}_{\theta}\in\mathbb{R}^{n\times d}\) is the output of GNNs, \(\mathbf{Z}\) is the ground truth hidden embeddings, and train-mask extracts the coordinates of the training nodes.
**Results.** Figures 3 and 4 show the results. As the rigid transformation factor cannot be determined, we align the recovered features using the orthogonal Procrustes analysis in the postprocessing. We make the following observations.
**Observation 1**.: **Recovery Succeeded.** As the lower left panels show, the proposed method succeeds in recovering the hidden features solely from the input graphs. Notably, not only coarse structures such as connected components are recovered but also details such as the curved moon shape in the two-moon dataset and the striped pattern in the Adult dataset are recovered.
**Observation 2**.: **Existing GNNs are mediocre.** As the lower right panels show, GINs and GATs extract some information from the graph structure, e.g., they map nearby nodes to similar embeddings as shown by the node colors, but they fail to recover the hidden features accurately despite their strong expressive power. This is primarily because the input node features contain little information, which makes recovery difficult. These results highlight the importance of inductive biases for GNNs to exploit the hidden features.
**Observation 3**.: **tSNE fails.** The upper right panels show that tSNE on the explicit node features \(\mathbf{X}\) failed to extract meaningful structures. These results indicate that the synthetic node features (Eq. (3)) do not tell anything about the hidden features, and GNNs recover the hidden features solely from the graph.
**Observation 4**.: **Recovery Succeeded in the Inductive Setting.** Figure 4 shows that the proposed method succeeded in the inductive setting as well. This shows that the ability of recovery can be transferred to other graph sizes as long as the law of data generation is the same.
## 6 Conclusion
In this paper, we showed that GNNs can recover the hidden node features, which contain all information about the graph structure, solely from the graph input. These results provide a different perspective from the existing results, which indicate that GNNs simply mix and smooth out the given node features. In the experiments, GNNs accurately recover the hidden features in both transductive and inductive settings.
|
2304.04125 | Training Neural Networks for Execution on Approximate Hardware | Approximate computing methods have shown great potential for deep learning.
Due to the reduced hardware costs, these methods are especially suitable for
inference tasks on battery-operated devices that are constrained by their power
budget. However, approximate computing hasn't reached its full potential due to
the lack of work on training methods. In this work, we discuss training methods
for approximate hardware. We demonstrate how training needs to be specialized
for approximate hardware, and propose methods to speed up the training process
by up to 18X. | Tianmu Li, Shurui Li, Puneet Gupta | 2023-04-08T23:59:54Z | http://arxiv.org/abs/2304.04125v1 | # Training Neural Networks for Execution on Approximate Hardware
###### Abstract.
Approximate computing methods have shown great potential for deep learning. Due to the reduced hardware costs, these methods are especially suitable for inference tasks on battery-operated devices that are constrained by their power budget. However, approximate computing hasn't reached its full potential due to the lack of work on training methods. In this work, we discuss training methods for approximate hardware. We demonstrate how training needs to be specialized for approximate hardware, and propose methods to speed up the training process by up to 18X.
Machine learning, neural network, approximate computing, stochastic computing, analog computing, compute-in-memory, photonics accelerator
Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal:: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal:: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal:: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: 
Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: 
Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal: Journal Journal: Journal
Approximate multiplication introduces errors in multiplications by design and can have very high errors for specific input combinations (Beng et al., 2017). Limited by the array size, an analog accelerator often cannot compute an entire convolution in a single array; instead, partial sums are computed (Beng et al., 2017). Each partial sum is then quantized by the ADC before further accumulation in the digital domain, since ADCs have limited precision and range. In normal training, however, partial sums are not quantized, which can introduce significant errors when the ADC bitwidth is low, making the learned weights invalid for the actual hardware. All of these effects need to be accounted for during training. As shown in Tab. 4, running a model pretrained for fixed-point computation directly with approximate computing can drop accuracy by 8-57% pts compared to modeling the computation properly during training.
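To make this concrete, the sketch below emulates partial-sum quantization in PyTorch by splitting a convolution along its input channels and passing each group's partial sum through a clamp-and-round ADC model before digital accumulation. The array size, ADC bitwidth, and ADC range here are illustrative assumptions rather than values tied to a specific accelerator, and the split-unipolar handling of negative weights is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def analog_conv2d_partial_sums(x, weight, array_size=9, adc_bits=4, adc_range=2.0):
    """Emulate analog matrix-vector products: each group of `array_size` input
    channels yields a partial sum that is saturated and quantized by a
    low-precision ADC before being accumulated in the digital domain."""
    levels = 2 ** adc_bits - 1
    step = 2.0 * adc_range / levels
    out = None
    for start in range(0, x.shape[1], array_size):
        part = F.conv2d(x[:, start:start + array_size],
                        weight[:, start:start + array_size])
        part = torch.clamp(part, -adc_range, adc_range)  # ADC saturation
        part = torch.round(part / step) * step           # ADC quantization
        out = part if out is None else out + part        # digital accumulation
    return out
```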
Modeling approximate computing accurately in the forward pass is expensive compared to accurate multiplication and addition, as shown in Tab. 1. In practice, the runtime difference can be even larger, since floating-point conv2d/linear layers can use optimized kernels from libraries like cuDNN (Beng et al., 2017), while functions to model approximate computing methods require additional coding (i.e., implementing custom kernels). As a result, **specialized methods to reduce training time are necessary for approximate hardware**.
### Previous works on training for approximate hardware
Due to the difficulties in modeling computation errors during training, inaccurate computing methods have not been able to fully utilize their benefits in previous works. For analog computing, there are a few works discussing the quantization of partial sums (or ADCs) of PIMs (Beng et al., 2017; Wang et al., 2018) and photonic accelerators (Beng et al., 2017; Wang et al., 2018). However, the main objective of these works is accuracy optimization using their proposed quantization methods and they usually focus on simpler datasets and models so that runtime is relatively manageable. The runtime issue is not addressed in these works, which is the primary focus of our work.
Previous works have also tried to model the computation error accurately during training, but the expensive emulation cost prevents training of more complicated models (Han et al., 2017) in the case of stochastic computing, or limits the approach to computation suitable for mapping to commercial training hardware (typically GPUs) (Beng et al., 2017) in the case of approximate multiplication. Other works try to reduce the error of approximation so that models trained for fixed-point computation can be used directly for inference. For stochastic computing, SC additions are replaced with fixed-point additions to reduce error (Kang et al., 2018). For approximate multiplication, approximation is limited to locations that don't affect model accuracy too much (Wang et al., 2018; Wang et al., 2018). Since approximate computing relies on the error for performance, reducing the error comes at the cost of diminished performance benefits.
## 3. Methodology
Our training improvements include three components: activation approximation for non-linear computation, error injection during training with fine-tuning, and gradient checkpointing. We will discuss the three components in detail in this section. We will use the smaller CIFAR-10 dataset for most of this section due to the runtime limitations on larger datasets. This setup limits the performance benefits of our methods and also makes the runtime results less reliable due to large run-to-run variations, but we will demonstrate the full performance benefits in Sec. 4. We focus on three types of approximate computing methods mentioned in Sec. 2 to showcase the benefits of our approaches:
* Stochastic computing. We use linear feedback shift registers for stream generation, AND gates for multiplication, and OR gates for addition, which is similar to the setup in (Kang et al., 2018). We use 32-bit split-unipolar streams (64 total bits).
* Approximate multiplier. We use an approximate multiplier from EvoApproxLib (Kang et al., 2018). Specifically, we use the mul7u_09V setup. It is a 7-bit unsigned multiplier in the Pareto-optimal set for mean relative error, which is suitable for neural networks and has 8-bit input precision when the sign bit is included. Compared to an accurate 7-bit multiplier, the approximate multiplier has 4X lower error and power consumption.
* Analog computing. To make a more generic argument, we set the effective array size of analog accelerators such that the partial sum of every convolution channel is quantized by the ADC. That is 9 for Resnet-tiny (Resnet-18 shrunk for TinyML applications used in (Beng et al., 2017)) and Resnet-18 (Beng et al., 2017), and 25 for TinyConv (a four-layer CNN used in (Chen et al., 2018)). Since analog accelerators usually only support positive inputs and weights, we use split-unipolar accumulation in our training setup to handle negative weights, which results in 2\(\times\) computation. We assume 4-bit ADCs are used for all evaluations in this paper. A relatively low ADC bitwidth is selected since ADCs are usually the power bottleneck of analog accelerators; therefore, low-bitwidth ADCs are preferred whenever the accuracy impact is not significant. The bitwidth for inputs and weights is set to 8-bit for all cases to focus on the impact of partial sum quantization.
While there are other approximate hardware implementations, we limit our study to the above three methods as they cover a wide variety of approximate computing. Our method is applicable to other approximate hardware setups with minimal change.
### Approximation Proxy Activation
One issue of inaccurate computation is the non-linearity introduced into multiply-accumulate operations, which is separate from non-linear activation functions like ReLU. We use stochastic computing and analog computing as examples. The approximate multiplier does not suffer from the non-linearity issue, as error is only introduced during multiplication. For SC, if the two input values \(a\) and \(b\) are uncorrelated, the OR adder performs \(a+b-ab\) on average. For analog
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Method & Multiplication & Addition \\ \hline Floating point & 0.5(fused) & 0.5(fused) \\ \hline \multirow{2}{*}{Stochastic Computing (32-bit)} & 64(unrolled) & 64(unrolled) \\ & 2(packed) & 2(packed) \\ \hline Approximate Multiplication & 86 & 1 \\ \hline \multirow{2}{*}{Analog Computing} & \multirow{2}{*}{1} & 1(within channel) \\ & & 9(between channel) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Relative multiplication and addition cost. FP32 multiplication and addition are used as the baseline. The number of operations in C++ is used as the cost value.
computing, partial sums and output results need to be clamped and quantized. Accurately modeling the imperfections in computation can be costly in the backward pass. For instance, the previously mentioned OR adder requires tracking almost all inputs in the adder (\(\frac{\partial}{\partial a_{i}}\text{OR}(a_{j})=\prod_{j\neq i}(1-a_{j})\)) during backpropagation, whereas accurate addition is much simpler in the backward pass (partial derivative = 1). On top of being expensive to model, the nonlinearity can hinder convergence if it is not taken into consideration. Using activations as a proxy during the backward pass was first proposed in (Friedman et al., 2017) to simulate the effect of SC OR accumulation. _Contrary to normal activation functions, the added activation proxy is only used in the backward pass during training and is not used during inference._ With proper implementation, the overhead of the activation function is negligible. As shown in Tab. 2, modeling the non-linearity using an activation function is necessary. TinyConv is a four-layer CNN used in (Friedman et al., 2017), and Resnet-tiny is Resnet-18 shrunk for TinyML applications used in (Beng et al., 2018). Models are trained with accurate modeling of stochastic and analog computing in the forward pass. For analog computing, accuracy is noticeably lower when trained without the activation function. For stochastic computing, training does not converge at all without the activation function.
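One way to wire such a proxy into autograd is a custom function whose forward pass returns the accurately emulated accumulation unchanged while the backward pass differentiates the proxy instead. The minimal sketch below assumes the \(y=1-e^{-x}\) proxy for unipolar OR accumulation (so \(dy/dx=e^{-x}=1-y\)); it is illustrative rather than the exact implementation, and `emulate_or_accumulation` is a hypothetical stand-in for the hardware emulation.

```python
import torch

class ORAccumProxy(torch.autograd.Function):
    """Backward-only activation proxy for SC OR accumulation."""

    @staticmethod
    def forward(ctx, emulated_out):
        # Pass the emulated (hardware-accurate) result through untouched.
        ctx.save_for_backward(emulated_out)
        return emulated_out

    @staticmethod
    def backward(ctx, grad_out):
        # Differentiate y = 1 - exp(-x) instead: dy/dx = exp(-x) = 1 - y.
        (y,) = ctx.saved_tensors
        return grad_out * (1.0 - y)

# usage sketch: out = ORAccumProxy.apply(emulate_or_accumulation(inputs))
```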
Using an activation function requires the computation to be (mostly) associative. Take the previously used OR accumulation and analog computing as examples. Computation is broken up into positive and negative parts since both work on unipolar (positive-only) inputs. Accumulation within each part is associative, but subtracting the two parts isn't. This behavior can be visualized in Fig. 1. While the activation function can approximate unipolar OR accumulation, a single activation cannot be used to model the entire accumulation when subtraction is factored in. As such, accumulations need to be split into positive and negative parts in the backward pass, so that each part can be modeled using accurate accumulation with an activation function. Similar effects can be seen in the analog computing setup. Since the positive and negative parts saturate individually, a single activation function is also insufficient. Tab. 3 lists the activation functions used for stochastic computing and analog computing.
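A minimal sketch of this positive/negative split is shown below. It assumes non-negative activations (e.g., post-ReLU), and `unipolar_accum` is a hypothetical stand-in for the per-part accumulation model, e.g., `ORAccumProxy.apply` from above wrapped around the emulated accumulation.

```python
import torch.nn.functional as F

def split_unipolar_linear(x, weight, unipolar_accum):
    """Accumulate the positive and negative weight parts separately (each part
    is associative, so one activation proxy can model it), then subtract the
    two unipolar results to obtain the bipolar output."""
    pos = unipolar_accum(F.linear(x, weight.clamp_min(0)))
    neg = unipolar_accum(F.linear(x, (-weight).clamp_min(0)))
    return pos - neg
```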
### Error injection
Despite resolving the issue of backpropagation, the activation function method cannot fully replace the forward propagation. Training models for non-floating-point computation typically involves modeling the computation accurately in the forward pass, be it low-precision fixed-point computation (including extreme precision like binarization (Friedman et al., 2017)) or approximate computation (Beng et al., 2018). For fixed-point computation, modeling the computation in the forward pass is relatively cheap: an element-wise fake-quantization operator followed by a normal convolution/linear operator is sufficient. The same cannot be said for approximate computing methods. SC requires emulating the stream generation and bit-wise multiplication in hardware. The approximate multiplier described in Sec. 2 is made up of hundreds of lines of bit manipulation. Analog computing requires emulating the limited hardware array size and ADC precision during computation. Tab. 1 demonstrates the high cost of emulating approximate computing. In all cases, emulating the computation is expensive, requires additional coding, and this cost cannot be completely overcome even with abundant programming resources.
While it is expensive to model approximate computing accurately in the forward pass, skipping modeling degrades accuracy, as shown in Tab. 4. Despite being sufficient for the backward pass, the activation method defined in Sec. 3.1 is not sufficient for the forward pass. To combat this limitation, we propose to replace accurate modeling with two types of error injection coupled with normal training.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline & \multicolumn{2}{c|}{TinyConv} & \multicolumn{2}{c}{Resnet-tiny} \\ Method & Inference Only & With Model & Inference Only & With Model \\ \hline Stochastic Computing & 14.87\% & 72.18\% & 48.57\% & 79.76\% \\ Approximate Multiplication & 71.88\% & 83.35\% & 76.95\% & 85.25\% \\ Analog Computing (4b) & 58.43\% & 78.94\% & 47.15\% & 77.23\% \\ \hline \hline \end{tabular}
\end{table}
Table 4. Accuracy impact of modeling approximate computation. "With Model" models the approximate computation method accurately in the forward pass.
Figure 1. Activation modeling behavior of unipolar and bipolar (a) stochastic and (b) analog computation. The bipolar versions perform subtraction between two unipolar (positive and negative) inputs to achieve the full range. For analog computation, the ADC saturation is modeled as a clamp at 2 in this example, and other effects (e.g., size of accumulation) are not considered.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Setup & TinyConv & Resnet-tiny \\ \hline \multicolumn{3}{c}{Stochastic Computing} \\ No Activation & 41.64\% & 10.00\% \\ With Activation & 72.18\% & 79.76\% \\ \hline \multicolumn{3}{c}{Analog Computing (4-bit)} \\ No Activation & 74.85\% & 69.21\% \\ With Activation & 79.20\% & 77.23\% \\ \hline \hline \end{tabular}
\end{table}
Table 2. Accuracy benefits of using activation functions.
_Type 1_.: The first type of error injection models the difference between the activation function and accurate modeling as functions of the output values of a layer, and we use it for stochastic computing and approximate multiplication. Fig. 2 shows the error average and variation of the four layers of TinyConv using stochastic computing. Compared to the value expected from the simple activation function \(y=1-e^{-x}\), the actual layer output differs significantly. The variance (represented as "std") in the plots means that it is impossible to fully capture the computation details using only an activation function. Given that the activation function is only an approximation by design, it cannot capture all the characteristics of the inaccurate computation. On the other hand, the non-zero average error means that the activation function doesn't represent the target computation accurately on average. Activation functions are derived under certain assumptions. In the case of OR accumulation, the assumption is that the size of the accumulation \(n\) is large and all input values are small and similar in value. These assumptions don't always hold in a real model, and the difference in average error between layers means it's impossible to have a single activation function for the entire model. For deep models, the error of the activation function accumulates and results in non-usable models after training. To resolve this issue, we propose to add correction terms to the activation function during training.
Take the error profile shown in Fig. 2 as an example. The mean and variance of the error are plotted with respect to the output after the activation function, and both appear to be smooth functions. From this observation, we model the average as a function of activated output values on top of the original activation function, and the variance as a normally distributed random error with variance dependent on the activated value. Both curves are fitted to a polynomial function that's different for each layer and calibrated 5 times per epoch. Though it is possible to lower it further, this frequency sufficiently amortizes the cost of calibration.
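A minimal sketch of this Type-1 calibration and injection is shown below, assuming binned statistics and NumPy polynomial fits; the polynomial degree and bin count are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
import torch

def fit_error_polynomials(activated, accurate, degree=3, n_bins=32):
    """Calibration: bin the proxy (activated) outputs, measure the mean and
    std of the residual against the accurately emulated outputs per bin, and
    fit a polynomial to each curve. `degree` and `n_bins` are illustrative."""
    a = activated.detach().flatten().cpu().numpy()
    e = (accurate - activated).detach().flatten().cpu().numpy()
    edges = np.linspace(a.min(), a.max(), n_bins + 1)
    idx = np.clip(np.digitize(a, edges) - 1, 0, n_bins - 1)
    centers, means, stds = [], [], []
    for b in range(n_bins):
        mask = idx == b
        if mask.sum() > 1:
            centers.append(0.5 * (edges[b] + edges[b + 1]))
            means.append(e[mask].mean())
            stds.append(e[mask].std())
    return np.polyfit(centers, means, degree), np.polyfit(centers, stds, degree)

def inject_type1_error(activated, mean_poly, std_poly):
    """Add the calibrated correction: a deterministic mean shift plus a
    normally distributed term whose std depends on the activated value."""
    a = activated.detach().cpu().numpy()
    mu = torch.from_numpy(np.polyval(mean_poly, a)).to(activated)
    sigma = torch.from_numpy(np.polyval(std_poly, a)).to(activated).clamp_min(0.0)
    return activated + mu + sigma * torch.randn_like(activated)
```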
_Type 2_.: The second type of error injection is used in training for analog accelerators. The method is similar to the error injection method described above for stochastic computing and approximate multiplication, but with slight differences. In this case, the overall quantization error of output activations (the sum of individual partial sum quantization errors) is modeled and injected. We first perform an accurate forward pass for one batch, which quantizes the partial sum of every channel before accumulation. The accurate results are subtracted from the results separately generated by the normal Conv2d function (which does not quantize partial sums) to obtain the actual overall quantization error. Then the mean and variance of the quantization error are calculated and stored for each convolution layer. Instead of obtaining the statistics at a per-activation or per-filter granularity, we only calculate a single mean and variance for an entire layer. There are two reasons for this choice: 1) empirical results suggest that the accuracy of a single mean and variance is better than calculating the mean and variance at other granularities, and 2) memory savings, since only two values need to be stored per layer. We calibrate the error statistics every 10 batches, which achieves a good trade-off between accuracy and runtime. For the majority of batches that do not require calibration, we only compute a normal Conv2d in the forward pass (partial sums not quantized). Then we simulate the quantization error by generating random error according to a normal distribution with mean and variance set to the values obtained from the last calibration batch. The simulated quantization error is directly added to the Conv2d outputs. This way, for non-calibration batches, the runtime should be almost identical to normal convolution, since generating quantization error takes significantly less time compared to the convolution operation.
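The sketch below captures this schedule as a wrapper around a convolution layer. `accurate_fn` is a hypothetical handle to the accurate partial-sum-quantized emulation, and injecting error on the calibration batch itself is a simplification of the procedure described above.

```python
import torch
import torch.nn as nn

class Type2ErrorInjectConv(nn.Module):
    """Type-2 injection sketch: every `calib_every` training batches, run the
    accurate partial-sum-quantized forward once to refresh a single per-layer
    mean/std of the overall quantization error; on other batches, run a plain
    conv and add Gaussian error with those statistics."""

    def __init__(self, conv, accurate_fn, calib_every=10):
        super().__init__()
        self.conv, self.accurate_fn, self.calib_every = conv, accurate_fn, calib_every
        self.register_buffer("err_mean", torch.zeros(()))
        self.register_buffer("err_std", torch.zeros(()))
        self.step = 0

    def forward(self, x):
        out = self.conv(x)  # normal conv: partial sums are not quantized
        if not self.training:
            return out
        if self.step % self.calib_every == 0:
            with torch.no_grad():
                err = self.accurate_fn(x) - out  # actual quantization error
                self.err_mean.copy_(err.mean())
                self.err_std.copy_(err.std())
        self.step += 1
        return out + self.err_mean + self.err_std * torch.randn_like(out)
```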
Using error injection in place of accurate modeling improves accuracy compared to not modeling at all, as shown in Tab. 5. However, training with error injection alone is not sufficient to completely bridge the gap, especially for analog computing with a low ADC bitwidth. Later we will show that the accuracy gap can be eliminated through a fine-tuning step. Runtime is improved by up to 6.7X with error injection compared to using an accurate model during training, as shown in Tab. 7. In most cases, the error injection runtimes are comparable to those of normal training (the "Without Model" column). The slightly longer error injection runtime for analog computing is due to its higher calibration frequency. On a larger network and dataset (Resnet-18 on ImageNet), the runtime improvement of error injection can be even greater for analog computing (shown in Sec. 4).
### Fine-tuning
While the error injection mentioned in Sec. 3.2 cannot achieve the same accuracy as training with accurate modeling, it reduces the amount of fine-tuning with accurate modeling required to achieve the same accuracy. The goal of error injection is to simulate the error during training, which can make the model more robust to errors. However, since the injected error is randomly generated while the actual error is input-dependent, error injection cannot achieve the same accuracy as accurate modeling. Still, error injection makes the model more robust such that with a small amount of accurate modeling, the model weights can quickly converge to the optimal
Figure 2. Difference between outputs from stream computation and normal computation+activation.
point. Fig. 3(a) compares the convergence behavior when error injection is combined with accurate modeling for SC trained on CIFAR-10. For stochastic computing, error injection combined with 5 epochs of fine-tuning is sufficient to achieve the same accuracy as using an accurate model throughout the training. In contrast, convergence is poor without error injection, and the model does not converge to the same accuracy even with 20 epochs of fine-tuning.
Fig. 3(b) compares the convergence behavior for the approximate multiplier on the same model. Since the accuracy gap is smaller with the approximate multiplier, training without error injection can also converge properly with 5 epochs of fine-tuning. However, error injection reduces that further to 2 epochs.
In the case of analog computing with 4-bit partial sum quantization, training with an accurate model usually converges within 10-15 epochs on the CIFAR-10 dataset from pretrained INT8 weights, so it is not necessary to train from scratch with the accurate model or error injection. Empirical results of analog computing training suggest that fine-tuning for one-fourth of an epoch achieves the same accuracy as fine-tuning for a complete epoch. Therefore only the last one-fourth epoch is used for fine-tuning, which adds minimal runtime overhead. Unlike SC or approximate multiplication, fine-tuning cannot fully recover the accuracy gap at a low ADC bitwidth, but the accuracy drop compared to the accurate model is usually small (around 1%). Fig. 3(c) compares the convergence behavior for analog computing: with calibration and fine-tuning, error injection achieves accuracy similar to the accurate model. Without error injection, even with the same calibration and fine-tuning setup, training fails to converge to the same accuracy as the version with error injection.
Overall, fine-tuning on top of error injection removes the accuracy deficit of error injection alone. As shown in Tab. 5, accuracy after fine-tuning ("Fine-tuning" columns) is at most 1% pt lower than training completely using an accurate model.
### Gradient checkpointing
Both the activation function in Sec. 3.1 and the error injection in Sec. 3.2 add additional steps to the computational graph during training, which increases the memory requirement and limits the batch size for larger models. To reduce memory consumption, we use the gradient checkpointing setup from (Beng et al., 2017). Since the added functions are pointwise and have low compute intensity (\(<\)20 OPS/memory access), checkpointing all added computations has minimal effect on runtime even when not memory bound. Tab. 6 compares the runtime and memory consumption with and without checkpointing for Resnet-18 on the ImageNet dataset. Checkpointing allows training with a larger batch size, which improves GPU efficiency and reduces the runtime required per epoch by 22% in this case.
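A minimal sketch of checkpointing the added pointwise ops with PyTorch's built-in utility follows; the proxy body and tensor shape are illustrative placeholders.

```python
import torch
from torch.utils.checkpoint import checkpoint

def proxy_ops(conv_out):
    # Training-only pointwise ops (activation proxy / error injection); cheap
    # to recompute, so storing their activations for backward is wasteful.
    return 1.0 - torch.exp(-conv_out)

conv_out = torch.randn(8, 64, 56, 56, requires_grad=True)  # placeholder conv output
out = checkpoint(proxy_ops, conv_out)  # recomputed during the backward pass
out.sum().backward()
```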
## 4. Results
In this section, we demonstrate the benefits of our techniques on a more complicated workload: Resnet-18 training on the ImageNet dataset. Models are trained on a single RTX 3090 using mixed precision. We use PyTorch 1.12 as the baseline framework and implement the additional operators as CUDA C++ extensions, including accurate modeling for stochastic computing, approximate multiplication, and analog computing. While we make the best effort to optimize the kernels used for modeling approximate computing, we cannot guarantee that the implementations are optimal. Models are trained from an fp32 checkpoint provided by PyTorch to reduce training time. Because of this, the number of epochs required to converge differs between the three approximate computing setups. Tab. 8 lists the detailed training setup.
Tab. 9 shows the accuracy achieved for all three setups. While there are no previous accuracy results that we can compare against, the accuracy results follow the same trend as that seen for smaller models discussed in Sec. 3. Approximate multiplication and analog computing take too long to train without our improvements, as we will show later.
\begin{table}
\begin{tabular}{l|c|c|c} Setup & Memory (MB) & Batch Size & Runtime (s/epoch) \\ \hline With Checkpoint & 19840 & 256 & 1326 \\ Without Checkpoint & 12766 & 128 & 1692 \\ \hline \hline \end{tabular}
\end{table}
Table 6. Training runtime and memory requirement of stochastic computing with and without gradient checkpointing. We use the maximum batch size achievable that is a power of 2.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c} \hline \hline & \multicolumn{4}{c|}{TinyConv} & \multicolumn{4}{c}{Resnet-tiny} \\ \hline Method & Inference Only & With Model & Error Injection & Fine-tuning & Inference Only & With Model & Error Injection & Fine-tuning \\ \hline Stochastic Computing & 14.87\% & 72.18\% & 64.01\% & 73.09\% & 48.57\% & 79.76\% & 76.71\% & 81.29\% \\ Approximate Multiplication & 71.88\% & 83.35\% & 79.06\% & 83.05\% & 76.95\% & 82.55\% & 81.06\% & 84.85\% \\ Analog Computing (4b) & 58.43\% & 78.94\% & 62.02\% & 77.95\% & 47.15\% & 77.23\% & 71.10\% & 76.05\% \\ \hline \hline \end{tabular}
\end{table}
Table 5. Accuracy impact of error injection training.
Table 7. Runtime impact of error injection training. Shown is the time (seconds) required per epoch. Runtimes are measured on an RTX 3090 using TF32 precision and batch size=256. SC and approximate multiplication are slower due to the need to split positive and negative computations discussed in Sec. 3.1. Fine-tuning runtime is the same as the runtime with accurate model (the “With Model” column).
Tab. 10 showcases the performance benefits of the proposed methods when combined. The improvements shown here underestimate the benefits of the proposed methods due to the following factors:
* The "Without Improvements" column assumes that the activation function mentioned in Sec. 3.1 is used in the backward pass. If the activation function is not used, the backward pass also needs to be modeled accurately.
* Models are validated after each epoch. Since validation uses accurate modeling and is the same with and without improvements, the performance gap will be even more significant if the validation frequency is reduced.
Even with these conservative comparisons, our methods reduce end-to-end training time by 2.4X to 18.2X. For stochastic computing and approximate multiplication, the performance benefit roughly follows the runtime difference shown in Tab. 7, as runtime was bottlenecked by accurate modeling even for smaller models. Approximate multiplication especially benefits from error injection, as iteration time is reduced by 36.6X compared to accurate modeling. For analog computing, the runtime difference on ImageNet is larger than that on CIFAR-10, as both TinyConv and Resnet-tiny cannot fully utilize the GPU on CIFAR-10.
## 5. Conclusion
In this work, we propose several methods to improve training performance for inference on approximate hardware. Through the use of activation modeling, error injection with fine-tuning, and gradient checkpointing, we achieve convergence on a wide range of approximate hardware. Our method makes it feasible to train large models like Resnet-18 on the ImageNet dataset using a single consumer GPU.
|
2308.12492 | Optimizing Neural Network Scale for ECG Classification | We study scaling convolutional neural networks (CNNs), specifically targeting
Residual neural networks (ResNet), for analyzing electrocardiograms (ECGs).
Although ECG signals are time-series data, CNN-based models have been shown to
outperform other neural networks with different architectures in ECG analysis.
However, most previous studies in ECG analysis have overlooked the importance
of network scaling optimization, which significantly improves performance. We
explored and demonstrated an efficient approach to scale ResNet by examining
the effects of crucial parameters, including layer depth, the number of
channels, and the convolution kernel size. Through extensive experiments, we
found that a shallower network, a larger number of channels, and smaller kernel
sizes result in better performance for ECG classifications. The optimal network
scale might differ depending on the target task, but our findings provide
insight into obtaining more efficient and accurate models with fewer computing
resources or less time. In practice, we demonstrate that a narrower search
space based on our findings leads to higher performance. | Byeong Tak Lee, Yong-Yeon Jo, Joon-Myoung Kwon | 2023-08-24T01:26:31Z | http://arxiv.org/abs/2308.12492v1 | # Optimizing Neural Network Scale for ECG Classification
###### Abstract
We study scaling convolutional neural networks (CNNs), specifically targeting Residual neural networks (ResNet), for analyzing electrocardiograms (ECGs). Although ECG signals are time-series data, CNN-based models have been shown to outperform other neural networks with different architectures in ECG analysis. However, most previous studies in ECG analysis have overlooked the importance of network scaling optimization, which significantly improves performance. We explored and demonstrated an efficient approach to scale ResNet by examining the effects of crucial parameters, including layer depth, the number of channels, and the convolution kernel size. Through extensive experiments, we found that a shallower network, a larger number of channels, and smaller kernel sizes result in better performance for ECG classifications. The optimal network scale might differ depending on the target task, but our findings provide insight into obtaining more efficient and accurate models with fewer computing resources or less time. In practice, we demonstrate that a narrower search space based on our findings leads to higher performance.
## 1 Introduction
Electrocardiograms (ECGs) are non-invasive diagnostic tests that record the electrical activity of the heart over time. They are used to detect and analyze various heart-related conditions. Over the years, several machine learning models have been developed for ECG analysis, including recurrent neural networks (RNNs) (Sevilmis et al. (2017)), Transformers (Yan et al. (2020)), and convolutional neural networks (CNNs) (Rajpurkar et al. (2017)). Although ECG signals are considered multivariate time-series data (Figure 1), recent studies predominantly employ CNN-based models instead of RNN-based models (Sun et al. (2023); Vaid et al. (2023)), breaking the convention.
By adopting such models, they have achieved higher performance in ECG classifications (Zheng et al. (2020); Elias et al. (2022); Sun et al. (2023)). Despite their success, most previous studies have overlooked the importance of optimization in scaling networks.
Network scaling determines the capacity of a neural network by adjusting its scaling parameters. This process is critical for developing efficient and accurate models (Koutini et al. (2021)). In ECG classification, there is a single previous study evaluating the effect of scaling parameters of neural networks. It solely focuses on one aspect (e.g., the depth of the network) of network architecture (Nonaka and Seita (2021)).
To identify the importance of network scaling across various aspects, we examined the hyperparameters affecting network scale in ECG analysis. Before delving into the significance of scaling parameters, we first searched for widely used neural networks for ECG classifications. As a result, the residual neural network (ResNet) (Tai et al. (2017)) was chosen as a fundamental architecture because it is a typical model (Rajpurkar et al. (2017); Nonaka and Seita (2021); Elias et al. (2022)) used in ECG classifications.
It is widely acknowledged that the depth of network layers, the convolution kernel size, and the number of input/output channels are the primary scaling parameters that influence ResNet's scale and complexity. We conducted experiments to see how varying combinations of such parameters impacted the performance of ECG classification models on the Physionet2021 (Reyna et al. (2021)) and Alibaba (2021) datasets.
Our experiments show that models with a shallower network, a larger number of channels, and smaller convolution kernel sizes improve performance. This finding is significant because the performance changes depending on scaling parameters in ECG classification differ from those observed in image classifications (Zagoruyko and Komodakis (2016)). Furthermore, optimizing network scale can be both time-consuming and require considerable computing resources. However, to attain better performance within a given time, configuring a narrow search space for scaling parameters based on our findings helps models approach the best possible outcome more efficiently. Additionally, we examined the performance variation caused by the optimization with respect to the labels of the dataset and investigated the reasons behind our findings.
#### Generalizable Insights about Machine Learning in the Context of Healthcare
Determining the appropriate network architecture scale is critical for classification tasks. However, there hasn't been extensive research on evaluating network architecture scaling
Figure 1: Example of 12-lead ECG signal.
for analyzing ECGs. In our paper, we present a comprehensive experimental assessment of the performance of different network architecture scaling options for ECG classifications. Our findings reveal that models with a shallower network, a larger number of channels, and a smaller kernel size improve performance in ECG classifications. Interestingly, this finding differs from the existing knowledge in the computer vision domain. We claim that this could be a new guideline for efficiently tuning network scaling parameters in ECG analysis. Furthermore, we show that applying our findings to the hyperparameter optimization process improves the performance of ECG classifications. These results provide insights into practical architecture scaling in neural networks. We hope that these observations will assist clinical and engineering researchers in utilizing deep learning for ECG classification.
## 2 Problem Definition
Although the optimization of network scaling is crucial due to its significant impact on model performance, existing ECG studies have overlooked this aspect (Appendix A). Since ResNet (Tai et al. (2017)) is widely used for ECG analysis as a deep learning model (Rajpurkar et al. (2017); Zheng et al. (2020); Nonaka and Seita (2021); Elias et al. (2022); Sun et al. (2023)), we focused on scaling parameters for ResNet. We investigated scaling parameters to optimize a model for ECG classifications. We first present the fundamental architecture of ResNet in detail1.
Footnote 1: We only consider one-dimensional layers, such as one-dimensional convolution, one-dimensional batch normalization, and so on, because ECG is a type of time-series data.
### Fundamental Architecture
Equation 1 shows the architecture of ResNet.
\[y=(\text{FC}\circ\text{GAP}\circ R_{4}\circ R_{3}\circ R_{2}\circ R_{1}\circ S )(x), \tag{1}\]
where \(S\) is a single stem block, \(R_{n}\) is a residual block, _GAP_ is a global average pooling layer, and _FC_ is a fully connected layer.
The stem block \(S\) is composed of a single unit function \(F\) with a max pooling layer, as illustrated in Equation 2. The function \(F\) consists of a one-dimensional convolution, batch normalization, and ReLU activation function. Given that there are 12 leads in an ECG, the number of input channels for convolution is set at 12 (i.e., \(w\in\mathbb{R}^{kernel\times 12\times C_{out}}\)).
\[F(x,w)=\sigma(\text{BN}(\text{Conv}(x,w))),\quad w\in\mathbb{R}^{kernel\times C_{in}\times C_{out}}\] \[S=\text{Pool}(F(x,w)) \tag{2}\]
where \(x\) represents the input and \(w\) denotes convolution weights, which are defined by the product of the kernel size (\(kernel\)), the number of input channels (\(C_{in}\)), and the number of output channels (\(C_{out}\)). _Conv_ consists of a single one-dimensional convolution, _BN_ refers to a batch normalization layer, \(\sigma\) is a ReLU activation function, and _Pool_ is a max pooling layer.
The residual block \(R\) consists of multiple residual layer \(L\) as shown in Equation 3
\[R=(L_{d}\circ\cdots\circ L_{1})(x), \tag{3}\]
where \(d\) is the layer depth.
Equation 4 illustrates the structure of \(L\), which is composed of two sequential functions \(F\). In the first residual layer \(L\) of each residual block except the initial one, the first function \(F\) doubles the number of output channels through the convolution _Conv_ relative to the number of input channels. Subsequent \(F\)s maintain an equal number of input and output channels. The input \(x\) is skip-connected with the output after the second batch normalization layer and then passed through the activation function. Prior to the skip-connection, the initial \(L\) in \(R_{n}\) processes \(x\) using _Pool_ and _Conv_, while the remaining layers perform identity mapping.
\[L=\begin{cases}F(F(x,w),2w)+\text{Pool}(\text{Conv}(x))&\text{if }n>1,d=1\\ F(F(x,2w),2w)+x&\text{otherwise}\end{cases} \tag{4}\]
where \(w\) is convolution weights, \(n\) is an index of blocks, and \(d\) is an index of layers in a block, respectively.
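A minimal PyTorch sketch of one residual layer is given below. Following the text, the skip connection is added after the second batch normalization and the activation is applied afterwards; we additionally assume the main path downsamples with a stride-2 first convolution so that its length matches the pooled shortcut. These details are our reading of Equation 4, not code from the original implementation.

```python
import torch.nn as nn

class ResidualLayer1d(nn.Module):
    """One residual layer L (Eq. 4) for 1D ECG input. The first layer of a
    block (d = 1, n > 1) doubles the channels and halves the length."""

    def __init__(self, c_in, c_out, kernel, downsample):
        super().__init__()
        pad = kernel // 2
        self.f1 = nn.Sequential(
            nn.Conv1d(c_in, c_out, kernel, stride=2 if downsample else 1,
                      padding=pad, bias=False),
            nn.BatchNorm1d(c_out), nn.ReLU())
        self.f2 = nn.Sequential(  # second unit: Conv + BN, activation after skip
            nn.Conv1d(c_out, c_out, kernel, padding=pad, bias=False),
            nn.BatchNorm1d(c_out))
        if downsample:
            # Pool + 1x1 Conv shortcut; ceil_mode keeps odd lengths aligned.
            self.shortcut = nn.Sequential(nn.MaxPool1d(2, ceil_mode=True),
                                          nn.Conv1d(c_in, c_out, 1))
        else:
            self.shortcut = nn.Identity()
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.f2(self.f1(x)) + self.shortcut(x))
```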
### Scaling Parameters on Neural Network
There are three hyperparameters that influence the network scale: the depth of layers (_depth_\(D\)), the number of convolution channels (_channels_\(C\)), and the size of the convolution kernel (_kernel size_\(K\)). To examine the impact of these scaling parameters on optimization performance, we first established their search spaces. For the number of layers (layer depth \(D\)), we considered the minimum \(D\) as two, based on ResNet18 (He et al. (2016)), the smallest variation among the ResNet models. We thus set the search space for the depth of residual layers to \(D\in\{2,4,8,16\}\), where each corresponds to a total of 18, 34, 66, and 130 convolution layers, respectively.

For ECG analysis as well as tasks in the computer vision domain, small odd numbers are typically used as the kernel size \(K\). The smallest kernel size is three, so we set the range to \(K\in\{3,5,9,15\}\).

For the number of channels (\(C\)), most neural networks typically set the final number of output channels to 512 (He et al. (2016); Elias et al. (2022)). We were curious about how adjusting the number of channels would affect the performance on ECG classifications. We considered the search space for the final output channels to be 128, 256, 512, and 1024, which corresponds to \(C\in\{16,32,64,128\}\).
Table 1 summarizes the network scale by layer depth \(D\), kernel size \(K\), and the number of channels \(C\). \(length\) denotes the input ECG length. The number of weights increases as the network becomes deeper. The first output has a quarter of the original length due to the striding of the convolution and a pooling layer at the stem block. As the input passes through the residual blocks, the length is halved while the number of channels is doubled. The number of labels depends on the labels of the ECG datasets.
## 3 Experiment
### Dataset
We employed two ECG datasets in our experiment. The first dataset is the Physionet Challenge 2021 (Physionet) dataset (Reyna et al. (2021)). This dataset contains standard 12-lead ECGs with cardiac arrhythmia labels. It is composed of seven databases collected from various institutions with unique demographics. In our experiments, we omitted two of the seven databases that had fewer than several hundred samples and focused on five databases: PTB-XL (Wagner et al. (2020)), CPSC (Liu et al. (2018)), Shaoxing (Zheng et al. (2020)), G12EC (Reyna et al. (2021)), and Ningbo (Zheng et al. (2020)). The databases were collected in the United States (G12EC), Germany (PTB-XL), and China (CPSC, Shaoxing, Ningbo), with signal lengths varying between 10 and 60 seconds. The total number of ECGs is approximately 88,000, and each ECG is associated with one or more labels among 26 diagnostic classes, which cover a wide range of categories. The second dataset is the Alibaba Tianchi competition (Alibaba) dataset (Tianchi (2020)). The ECG signals have a duration of 10 seconds each, and the overall collection consists of approximately 20,000 recordings. Each ECG is tagged with one or more labels from 33 classes. Since a few classes contain an extremely small number of ECGs (e.g., only three ECGs are labeled as QRS Low Voltage), we limited our study to labels with a prevalence above 0.1%, and thus used 17 classes in our experiments. More details of the datasets are provided in Appendix B.
### Evaluation
We use the macro-average F1 score as the evaluation metric. The F1 score is the harmonic mean of precision and recall. In multi-label classification, each sample belongs to multiple
\begin{table}
\begin{tabular}{c|c|c} \hline \hline & Output size & Weights \\ \hline Input & \(length\times 12\) & \\ \(S\) & \(length/4\times C\) & \(K\times 12\times C\) \\ \(R_{1}\) & \(length/8\times C\) & \(\left[\left\{\begin{array}{ll}K\times C\times C&\text{if }d=1\\ K\times C\times C&\text{otherwise}\end{array}\right\}+K\times C\times C\right]\times D\) \\ \(R_{2}\) & \(length/16\times 2C\) & \(\left[\left\{\begin{array}{ll}K\times C\times 2C&\text{if }d=1\\ K\times 2C\times 2C&\text{otherwise}\end{array}\right\}+K\times 2C\times 2C\right]\times D\) \\ \(R_{3}\) & \(length/32\times 4C\) & \(\left[\left\{\begin{array}{ll}K\times 2C\times 4C&\text{if }d=1\\ K\times 4C\times 4C&\text{otherwise}\end{array}\right\}+K\times 4C\times 4C\right]\times D\) \\ \(R_{4}\) & \(length/64\times 8C\) & \(\left[\left\{\begin{array}{ll}K\times 4C\times 8C&\text{if }d=1\\ K\times 8C\times 8C&\text{otherwise}\end{array}\right\}+K\times 8C\times 8C\right]\times D\) \\ GAP & \(8C\) & \\ FC & \# of labels & \(8C\times\text{\# of labels}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Network scale of the fundamental architecture. \(K\) denotes the kernel size, \(C\) the number of channels, \(D\) the depth of residual layers, \(d\) the layer index within a block, and \(length\) the input length of an ECG.
classes. The macro-average F1 score calculates the F1 score for each class independently and then averages them. This approach treats all classes equally, regardless of class distribution, making it particularly useful for imbalanced datasets. We divided the dataset into train, validation, and test sets at a ratio of 0.7, 0.15, and 0.15, respectively.
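For reference, the metric can be computed with scikit-learn on multi-label indicator matrices; the arrays below are toy values.

```python
import numpy as np
from sklearn.metrics import f1_score

# Rows are samples, columns are classes; 1 marks the presence of a label.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])

# Macro averaging computes one F1 per class and averages them with equal
# weight, regardless of class prevalence.
macro_f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
```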
### Training Procedure
As ECGs are gathered with varying lengths and sampling rates, we standardize them. For signal preprocessing, we trimmed the ECG length to 10 seconds: if an ECG is longer than 10 seconds, we randomly cropped a 10-second segment; otherwise, we extended it to 10 seconds by zero-padding. Additionally, we resampled all ECGs to a sampling rate of 250Hz. In addition, the Alibaba dataset does not contain four of the standard 12 ECG leads: III, aVR, aVL, and aVF. Thus, by applying the principles of Einthoven's Triangle and Goldberger's augmented leads (Mirvis and Goldberger (2001)), we filled in the four missing leads using leads I and II.
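The derivation amounts to four linear combinations of leads I and II, following Einthoven's law (\(\text{I}+\text{III}=\text{II}\)) and Goldberger's augmented-lead definitions; a minimal sketch:

```python
import numpy as np

def derive_missing_limb_leads(lead_I, lead_II):
    """Reconstruct the four limb leads absent from the Alibaba dataset.
    Inputs are 1D arrays of the same length and sampling rate."""
    lead_III = lead_II - lead_I          # Einthoven: I + III = II
    aVR = -(lead_I + lead_II) / 2        # Goldberger augmented leads
    aVL = lead_I - lead_II / 2
    aVF = lead_II - lead_I / 2
    return lead_III, aVR, aVL, aVF
```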
Hyperparameter tuning is essential for selecting an appropriate model for a given task. Without hyperparameter tuning, the performance and generalizability of a model may be suboptimal, leading to unreliable conclusions (Bergstra and Bengio (2012)). However, computational limitations have prevented previous studies from exploring a wide range of hyperparameters (Strodthoff et al. (2020)). For example, one previous ECG classification benchmark study searched only 9 combinations of learning rate and batch size (Nonaka and Seita (2021)), and the majority of studies do not present the precise procedure of their hyperparameter optimization.
In this study, we explored 50 combinations of hyperparameters for each set of scaling parameters, thereby resolving a limitation of previous studies. Tuned hyperparameters consist of learning rate, weight decay, dropout, and other regularization techniques. The search space used in the experiment is presented in Table 2. In training, we employed the Adam optimizer (Kingma and Ba (2014)) and a one-cycle learning rate scheduler (Smith and Topin (2019)) that peaks at epoch 10. The batch size was fixed at 512. We randomly sampled the learning rate \(\in[10^{-4},10^{-2}]\) and weight decay \(\in[10^{-6},10^{-4}]\) for the optimizer. The dropout rate was randomly chosen from 0 to 0.3 in increments of 0.05. We applied 18 ECG data augmentation methods (Lee et al. (2022)) using the RandAugment policy (Cubuk et al. (2020)). The number of augmentation methods was randomly selected from \(\{0,1,2\}\), and their intensity was chosen from 10 levels. Furthermore, we employed Mixup (Zhang et al. (2017)), randomly selecting a beta distribution from \(\beta\in\{0.0,0.1,0.2\}\). During hyperparameter searching, we used the asynchronous successive halving algorithm (ASHA) with a grace period of 10 and a reduction factor of 2 (Li et al. (2018)). Our training procedure is implemented with Ray (Moritz et al. (2018)).
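A sketch of this search using Ray Tune's classic function API and ASHA scheduler is shown below; `train_fn` is a hypothetical trainable and the hyperparameter keys are illustrative, but the search space, sample count, and scheduler settings mirror the setup described above.

```python
from ray import tune
from ray.tune.schedulers import ASHAScheduler

def train_fn(config):
    # Placeholder trainable: a real one would build the model from `config`,
    # train it, and report the validation macro-F1 after each epoch.
    tune.report(val_f1=0.0)

config = {
    "lr": tune.loguniform(1e-4, 1e-2),
    "weight_decay": tune.loguniform(1e-6, 1e-4),
    "dropout": tune.choice([0.0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3]),
    "num_augment": tune.choice([0, 1, 2]),
    "augment_magnitude": tune.choice(list(range(11))),
    "mixup_beta": tune.choice([0.0, 0.1, 0.2]),
}

analysis = tune.run(
    train_fn,
    config=config,
    num_samples=50,  # 50 hyperparameter combinations per scaling setting
    scheduler=ASHAScheduler(metric="val_f1", mode="max",
                            grace_period=10, reduction_factor=2),
)
```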
## 4 Result
### Impact of Optimization for Scaling Parameter on ECG classification
Figure 2 shows the F1 score of the ECG classification for ResNet with different layer depths (\(D\)), numbers of channels (\(C\)), and kernel sizes (\(K\)). The selected hyperparameters for each
network are provided in Appendix C. In the box colouring, redder cells indicate higher performance, while bluer cells indicate lower performance.
**Layer depth** (**D**): We observed a general trend in which performance tends to improve as the depth \(D\) decreases, on both the Physionet 2021 and Alibaba datasets. The trend holds regardless of kernel size \(K\) and number of channels \(C\). These findings exhibit a pattern distinctly different from the computer vision domain, where performance is known to increase as networks become deeper. On the other hand, it is important to note that a shallower network is not always advantageous: depending on the combination of kernel size \(K\) and number of channels \(C\), the optimal layer depth can differ.
**Number of channels** (**C**): We found a positive correlation between the number of channels \(C\) and performance in both the Physionet 2021 and Alibaba datasets. In the majority of cases, as we varied kernel size \(K\) and layer depth \(D\), performance improved when the number of channels \(C\) increased. In fact, for both datasets, all of the best performances were observed in models with the largest number of channels, irrespective of the combination of kernel size \(K\) and layer depth \(D\). This result is consistent with established knowledge in the computer vision domain, indicating that wider networks are particularly well-suited to ECG classification tasks.
**Kernel size** (**K**): The panels on the left side of Figure 2 show a more reddish hue than those on the right side for both the Physionet 2021 and Alibaba datasets, indicating that performance tends to decrease as the kernel size \(K\) increases. Although recent studies in the computer vision domain have reported a positive impact of larger kernels on model performance (Ding et al. (2022)), our experiments on ECG classification show the opposite trend, with larger kernels leading to reduced performance.
In the Physionet 2021 dataset, the best combination for ECG classification performance is a layer depth \(D\) of 4, a number of channels \(C\) of 128, and a kernel size \(K\) of 3, whereas the combination of a layer depth \(D\) of 8, a number of channels \(C\) of 16, and a kernel size \(K\) of 15 results in the worst performance. Although the dataset size and label distribution of Alibaba differ from those of Physionet 2021, the best and worst combinations on the Alibaba dataset are remarkably similar: the optimal configuration is identical to that of Physionet 2021, and the worst performance occurs with a layer depth \(D\) of 4, a number of channels \(C\) of 16, and a kernel size \(K\) of 15. Overall, our findings provide the following scaling parameter setting guide
\begin{table}
\begin{tabular}{l|l} \hline \hline Hyperparameter & Search Space \\ \hline Learning rate & \([10^{-4},10^{-2}]\) \\ Weight decay & \([10^{-6},10^{-4}]\) \\ Dropout rate & \{0, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3\} \\ Number of augmentation & \{0, 1, 2\} \\ Magnitude of augmentation & \{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\} \\ Beta distribution of Mixup & \{0, 0.1, 0.2\} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hyperparameter Search Space
to improve performance in ECG classification tasks, regardless of datasets: a shallower network, a larger number of channels, and a smaller kernel size.
### Infusing our insight into hyperparameter optimization
In practice, hyperparameter optimization is essential for achieving satisfactory performance in a given task (Jaderberg et al. (2017)). The efficiency of hyperparameter optimization is heavily influenced by the predefined search space. A vast search space might not guarantee the best performance and could lead to increased time and resource consumption. Thus, it is essential to establish a well-designed range for the search space, which ensures efficient hyperparameter optimization and ultimately results in improved model performance.
Incorporating insights from our experiments can help researchers increase their chances of achieving optimal performance. To demonstrate this, we conducted additional experiments, modifying the range of scaling hyperparameters, specifically layer depth \(D\), the number of channels \(C\), and kernel size \(K\). Table 3 shows the best performance depending on the size of search spaces, defining three distinct ranges: Large, Medium, and Optimal. The _Large_ range represents a wide hyperparameter search space, while the _Medium_ range is half of the _Large_ range, reflecting our findings of a shallower network, smaller kernel size,
Figure 2: The performance of residual networks with different layer depth (\(D\)), number of channels (\(C\)), and kernel size (\(K\)) for ECG classification. The first row shows results on the Physionet 2021 challenge dataset (Reyna et al. (2021)) and the second row shows results on the Alibaba Tianchi competition dataset (Tianchi (2020)). The scores in the boxes are macro-averaged F1 scores for the multi-label classification. Redder cells indicate higher performance; bluer cells indicate lower performance.
and larger number of channels. The _Optimal_ range includes settings that yield the best performance, as shown in Section 4. All other hyperparameters are configured as mentioned in Section 3.3.
For the experiments, we first sampled 50 combinations from each search space, then selected the best model by training and evaluating them. For both the Physionet 2021 and Alibaba datasets, performance improvements are observed as the space is reduced to 1/8 (_Medium_) and 1/64 (_Optimal_) of the _Large_ space. Larger search spaces make it more challenging to find the optimal hyperparameters within a fixed budget of combinations, potentially leading to suboptimal model performance. These results demonstrate that models obtained from a more precisely targeted search space outperform those obtained using a wider one.
## 5 Discussion
### Sub-group Analysis
As described in Table A.2 and Table A.3, both the Physionet 2021 and Alibaba datasets are highly imbalanced in their distributions. To determine whether our experimental findings remain consistent across the study population, we conducted a sub-group analysis. The Physionet 2021 dataset is composed of various classes and databases, each with its own distinct distribution. For example, the ratio between the NSR and Brady classes is approximately 100:1, and 65% of NSR labels are from the Ningbo database, while 67% of SB labels are from the G12EC database. Additionally, the distribution of labels across the database sources is uneven, with no database containing all 26 classes, and some databases containing fewer classes than others. Similarly, the Alibaba dataset also exhibits substantial imbalances. Although it is collected from a single source, the skewness of its class distribution is comparable to that of the multi-sourced Physionet 2021 dataset. For example, the prevalence of NSR approaches nearly 50%, while the prevalence of NSTAb is only 0.03%.
\begin{table}
\begin{tabular}{c|c c c|c|c} \hline \hline Physionet & Depth (\(D\)) & Channel (\(C\)) & Kernel (\(K\)) & \# Avail. comb. & F1 \\ \hline Large & \{2,4,6,8\} & \{16,32,64,128\} & \{3,5,9,16\} & 64 & 0.612 \\ Medium & \{2,4\} & \{64,128\} & \{3,5\} & 8 & 0.644 \\ Optimal & \{4\} & \{128\} & \{3\} & 1 & 0.669 \\ \hline \hline Alibaba & Depth (\(D\)) & Channel (\(C\)) & Kernel (\(K\)) & \# Avail. comb. & F1 \\ \hline Large & \{2,4,6,8\} & \{16,32,64,128\} & \{3,5,9,16\} & 64 & 0.000 \\ Medium & \{2,4\} & \{64,128\} & \{3,5\} & 8 & 0.000 \\ Optimal & \{4\} & \{128\} & \{3\} & 1 & 0.000 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance depending on the search space of hyperparameter optimization. _# Avail. comb._ denotes the number of available combinations. _F1_ represents the best F1 score in the sampled 50 combinations. The selected hyperparameters for each experiment are provided in Appendix D.
#### 5.1.1 Performance Depending on Labels
Figure 3 and Figure 4 demonstrate the performance of the model across label categories in the Physionet 2021 and Alibaba datasets. In the case of the Physionet 2021 dataset, the performance changes in each category mirror the performance changes over the entire set of classes. The optimal combination (\(D4\)-\(C128\)-\(K3\)) derived from the experiment in Section 4 may not be the single best performer in every category, but it remains among the top-performing models across all categories. This consistency suggests that the proposed scaling guide is effective across all categories.
However, in the Alibaba dataset, while the overall performance trend with respect to depth, width, and kernel size remains consistent across all categories, the degree of similarity is substantially lower than in the Physionet 2021 dataset. Specifically, there are numerous outliers that contradict the general trend, such as \(C4\), \(D2\), \(K5\) and \(C64\), \(D4\), \(K9\). One possible explanation is the prevalence of labels: unlike the Physionet 2021 dataset, label prevalence within the Alibaba dataset is relatively low, with only three of the seventeen classes exhibiting a prevalence of 10% or higher. This significant class imbalance could potentially lead to unstable results. Appendix B provides a more comprehensive analysis of model performance on specific labels. Performance on each label for all available combinations is described in detail in Appendix E.
#### 5.1.2 Performance Depending on Database Sources
Figure 5 shows model performance according to the database sources in the Physionet 2021 dataset. Results on the Ningbo, PTBXL, G12EC, and Shaoxing databases are consistent with those on the entire dataset. In contrast, we observe a different performance trend on the CPSC database. One possible reason is that the CPSC database contains some rare labels, such as NSIVCB, PR, SA, Brady, TAb, and TInv; a small number of samples for these classes can significantly impact the F1 scores, since we used the macro-averaged F1 score as the metric.
### Correlation between Scaling Parameters and Performance
Previous research in the computer vision domain has shown that there is a positive correlation between model performance and all scaling parameters (Peng et al. (2017); Tan and Le (2019); Ding et al. (2022)). Specifically, deeper residual layers, larger kernel sizes, and more channels can all contribute to better performance.
However, our findings from ECG analysis reveal a different relationship between model performance and scaling parameters compared to the computer vision domain. Table 4 shows the correlation between neural networks of different scales and their performance in both ECG analysis and computer vision. Contrary to expectations, we observed a negative correlation between performance and both layer depth \(D\) and kernel size \(K\). Only the number of channels \(C\) showed a positive correlation with performance, consistent with the observations in computer vision.
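The text does not state which correlation coefficient underlies Table 4. Assuming, for illustration, Spearman rank correlation between each scaling parameter and the F1 score across configurations, the computation could look like the following sketch (the `results` rows are illustrative placeholders, not the paper's measurements).

```
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical results table: one row per (D, C, K) configuration.
results = pd.DataFrame({
    "depth":    [2, 2, 4, 8],
    "channels": [16, 128, 128, 16],
    "kernel":   [3, 3, 15, 15],
    "f1":       [0.55, 0.63, 0.52, 0.48],   # placeholder scores
})

for param in ["depth", "channels", "kernel"]:
    rho, p = spearmanr(results[param], results["f1"])
    print(f"{param}: rho = {rho:+.2f} (p = {p:.3f})")
```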
We conjecture that the observed discrepancy is attributable to the periodic nature of ECG signals and its effect on the receptive field and global average pooling (GAP). As illustrated in Figure 1, the patterns in each ECG cycle are consistent and repeat over time.
Figure 3: Classification performance of networks with different layer depth (\(D\)), the number of channels (\(C\)), and kernel size (\(K\)) on six label categories in Physionet2021 dataset.
Figure 4: Classification performance of networks with different layer depth (\(D\)), the number of channels (\(C\)), and kernel size (\(K\)) on six label categories in Alibaba dataset.
Figure 5: Classification performance of networks with different layer depth (\(D\)), the number of channels (\(C\)), and kernel size (\(K\)) depending on database sources.
Because of this periodicity, a receptive field that spans only a few seconds of ECG (i.e., one or two cycles) is sufficient for extracting the essential information from the signal. Thus, in contrast to computer vision, where a large receptive field is preferred for capturing global features, the advantage of a large receptive field is relatively insignificant in ECG classification. Furthermore, GAP following a small receptive field functions as an ensemble over different views of the signal, providing a generalized and robust representation. On the other hand, a large receptive field that covers the entire signal does not benefit from GAP as much as a small one (see Appendix F for further discussion).
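The receptive-field argument can be made concrete with the standard recurrence for stacked 1-D convolutions. The stride layout below (a stem reducing the length to 1/4, then one stride-2 reduction per stage, as implied by the output sizes in Table 1) is an assumption.

```
def receptive_field(layers):
    """layers: (kernel_size, stride) pairs from input to output.
    Standard recurrence: rf += (k - 1) * jump; jump *= stride."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Assumed layout: a stem, then four stages of D residual blocks
# (two convolutions each) plus one stride-2 reduction per stage.
K, D = 3, 2
layers = [(K, 4)]                      # stem: length -> length / 4
for stage in range(4):
    layers += [(K, 1), (K, 1)] * D     # residual convolutions
    layers += [(2, 2)]                 # stage downsampling
rf = receptive_field(layers)
print(f"receptive field: {rf} samples ({rf / 250:.2f} s at 250 Hz)")
```

With \(K=3\) and \(D=2\) this gives a receptive field on the order of a couple of seconds at 250 Hz, roughly one to two ECG cycles, consistent with the argument above.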
To investigate whether the influence of periodicity persists when it is removed, we cropped ECG signals into 2-second and 1-second segments to eliminate the effect of varying heart rates in the Physionet 2021 dataset. This resulted in approximately two cycles in a 2-second signal and one cycle in a 1-second signal. The correlation between network scaling and performance for the cropped signals is shown in the 2s and 1s ECG columns of Table 4, respectively. We found that the correlation with layer depth was almost entirely lost, while that with kernel size was less affected; in contrast, the correlation with the number of channels remained high. While our experimental results cannot fully confirm our conjecture, they support the idea that the periodicity of ECG signals influences network scaling decisions.
## 6 Conclusion
In recent studies, CNN-based models, particularly ResNet, have been widely used to achieve higher performance in ECG classification. However, the optimization of network scaling, a vital process for creating efficient and precise models, has often been overlooked. To address this, we examined the effects of different combinations of primary scaling parameters, namely network depth, convolution kernel size, and the number of channels, on the Physionet 2021 and Alibaba datasets.
Our results show that performance enhancement can be achieved by employing models with shallower networks, a greater number of channels, and smaller convolution kernel sizes. This finding is crucial because optimizing network scaling can be both time-consuming and computationally demanding. By refining the search space for scaling parameters based on our results, researchers can more effectively reach optimal outcomes within a given time.
Moreover, we investigated the performance variations by network scaling in relation to the dataset's labels and inferred the rationale behind our findings. This research provides
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline & Computer & \multicolumn{3}{c}{ECG} \\ & Vision & 10s & 2s & 1s \\ \hline Layer depth (D) & Positive (+) & -0.83 & -0.62 & -0.2 \\ Kernel size (K) & Positive (+) & -0.92 & -0.84 & -0.68 \\ Number of channels (C) & Positive (+) & 0.93 & 0.92 & 0.87 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Correlation between the neural network’s scaling parameters and its performance based on ECG length.
a better understanding of network scaling optimization in ECG analysis and empowers researchers to develop more efficient and effective models for ECG classification. |
2304.09367 | Graph Neural Network-Based Anomaly Detection for River Network Systems | Water is the lifeblood of river networks, and its quality plays a crucial
role in sustaining both aquatic ecosystems and human societies. Real-time
monitoring of water quality is increasingly reliant on in-situ sensor
technology. Anomaly detection is crucial for identifying erroneous patterns in
sensor data, but can be a challenging task due to the complexity and
variability of the data, even under normal conditions. This paper presents a
solution to the challenging task of anomaly detection for river network sensor
data, which is essential for accurate and continuous monitoring. We use a graph
neural network model, the recently proposed Graph Deviation Network (GDN),
which employs graph attention-based forecasting to capture the complex
spatio-temporal relationships between sensors. We propose an alternate anomaly
scoring method, GDN+, based on the learned graph. To evaluate the model's
efficacy, we introduce new benchmarking simulation experiments with
highly-sophisticated dependency structures and subsequence anomalies of various
types. We further examine the strengths and weaknesses of this baseline
approach, GDN, in comparison to other benchmarking methods on complex
real-world river network data. Findings suggest that GDN+ outperforms the
baseline approach in high-dimensional data, while also providing improved
interpretability. We also introduce software called gnnad. | Katie Buchhorn, Edgar Santos-Fernandez, Kerrie Mengersen, Robert Salomone | 2023-04-19T01:32:32Z | http://arxiv.org/abs/2304.09367v3 | # Graph Neural Network-Based Anomaly Detection for River Network Systems
###### Abstract
Water is the lifeblood of river networks, and its quality plays a crucial role in sustaining both aquatic ecosystems and human societies. Real-time monitoring of water quality is increasingly reliant on in-situ sensor technology. Anomaly detection is crucial for identifying erroneous patterns in sensor data, but can be a challenging task due to the complexity and variability of the data, even under typical conditions. This paper presents a solution to the challenging task of anomaly detection for river network sensor data, which is essential for accurate and continuous monitoring. We use a graph neural network model, the recently proposed Graph Deviation Network (GDN), which employs graph attention-based forecasting to capture the complex spatio-temporal relationships between sensors. We propose an alternate anomaly threshold criterion for the model, GDN+, based on the learned graph. To evaluate the model's efficacy, we introduce new benchmarking simulation experiments with highly-sophisticated dependency structures and subsequence anomalies of various types. We further examine the strengths and weaknesses of this baseline approach, GDN, in comparison to other benchmarking methods on complex real-world river network data. Findings suggest that GDN+ outperforms the baseline approach in high-dimensional data, while also providing improved interpretability. We also introduce software called gnnad.
Anomaly Detection, Graph Deviation Network, Graph Neural Network, Multivariate Time Series, Graph Attention Forecasting, Spatio-temporal Data, Complex Systems
## Introduction
River network systems play a vital role as freshwater habitats for aquatic life, and as support for terrestrial ecosystems in riparian zones, but are particularly sensitive to the anthropogenic impacts of climate change, water pollution and over-exploitation, among other factors. As a United Nations Sustainable Development Goal [1], water quality is a major environmental concern worldwide. The use of _in-situ_1 sensors for data collection on river networks is increasingly prevalent [2, 3], generating large amounts of data that allow for the identification of fine-scale spatial and temporal patterns, trends, and extremes, as well as potential sources of pollutants and their downstream impacts. However, such sensors are susceptible to technical errors relating to the equipment, herein defined as _anomalies_, for example due to miscalibration, biofouling, electrical interference and battery failure. In contrast, extreme _events_ in rivers occur as result of heavy rain and floods. Technical anomalies must be identified before the data are considered for further analysis, as they can introduce bias in model parameters and affect the validity of statistical inferences, confounding the identification of true changes in water variables. Trustworthy data is needed to produce reliable and accurate assessments of water quality, for enhanced environmental monitoring, and for guiding management decisions in the prioritisation of ecosystem health.
Footnote 1: in-situ refers to an instrument in direct contact with the medium of observation.
Anomaly detection in river networks is challenging due to the highly dynamic nature of river water even under typical conditions [4], as well as the complex spatial relationships between sensors. The unique spatial relationships between neighbouring sensors on a river network are characterised by a branching network topology with flow direction and connectivity, embedded within the 3-D terrestrial landscape. Common anomalies from data obtained from in-situ sensors are generally characterised by multiple consecutive observations (_subsequence_ or persistent, [5]), including sensor drift and periods of unusually high or low variability, which may indicate the necessity for sensor maintenance or calibration [6]. Such anomalies are difficult to detect and often associated with high false negative rates [7].
Earlier statistical studies have focused on developing autocovariance models based on within-river relationships to capture the unique spatial characteristics of rivers [8]. Although these methods are adaptable to different climate zones, and have recently been extended to take temporal dependencies into account [9], data sets generated by in-situ sensors still pose significant computational challenges for such prediction methods due to the sheer volume of data [10]. Autocovariance matrices must be inverted when fitting spatio-temporal models and making predictions, and the distances between sites must be known. Previous work [11] aimed to detect drift and high-variability anomalies in water quality variables by studying a range of neural networks calibrated using a Bayesian multi-objective optimisation procedure. However, that study was limited to analyzing univariate time series data, and the supervised methods required a significant amount of labeled data for training, which are not always available.
There are limited unsupervised anomaly detection methods for subsequence anomalies (of variable length), and even less so for multivariate time series anomaly detection [5]. One such method uses dynamic clustering on learned segmented windows to identify global and local anomalies [12]. However, this algorithm requires the time series to be well-aligned, and is not suitable for the lagged temporal relationships observed with river flow. Another method to detect variable-length subsequence anomalies in multivariate time series data uses dimensionality reduction to construct a one-dimensional feature, to represent the density of a local region in the recurrence representation, indicating the recurrence of patterns obtained by a sliding window [13]. A similarity measure is used to classify subsequences as either non-anomalous or anomalous. Only two summary statistics were used in this similarity measure and the results were limited to a low-dimensional simulation study. In contrast, the technique introduced by [14], DeepAnT, uses a deep convolutional neural network (CNN) to predict one step ahead. This approach uses Euclidean distance of the forecast errors as the anomaly score. However, an anomaly threshold must be provided.
Despite the above initial advances, challenges still remain in detecting persistent variable-length anomalies within high-dimensional data exhibiting noisy and complex spatial and temporal dependencies. With the aim of addressing such challenges, we explore the application of the recently-proposed Graph Deviation Network (GDN) [15], and explore refinements with respect to anomaly scoring that address the needs of environmental monitoring. The GDN approach [15] is a state-of-the-art model that uses sensor embeddings to capture inter-sensor relationships as a learned graph, and employs graph attention-based forecasting to predict future sensor behaviour. Anomalies are flagged when the error scores are above a calculated threshold value. By learning the interdependencies among variables and predicting based on the typical patterns of the system in a semi-supervised manner, this approach is able to detect deviations when the expected spatial dependencies are disrupted. As such, GDN offers the ability to detect even the small-deviation anomalies generally overlooked by other distance based and density based anomaly detection methods for time series [16, 17], while offering robustness to lagged variable relationships. Unlike the commonly-used statistical methods that explicitly model covariance as a function of distance, GDN is flexible in capturing complex variable relationships independent of distance. GDN is also a semi-supervised approach, eliminating the need to label large amounts of data, and offers a computationally efficient solution to handle the ever increasing supply of sensor data. Despite the existing suite of methods developed for anomaly detection, only a limited number of corresponding software packages are available to practitioners. In summary, we have identified the following gaps in the current literature
and research on this topic:
1. An urgent need exists for a flexible approach that can effectively capture complex spatial relationships in river networks without the specification of an autocovariance model, and the ability to learn from limited labeled data, in a computationally efficient manner.
2. Lack of data and anomaly generation schemes on which to benchmark methods, that exhibit complex spatial and temporal dependencies, as observed across river networks.
3. Lack of open-source software for anomaly detection, which hinders the accessibility and reproducibility of research in this field, and limits the ability for individuals and organisations to implement effective anomaly detection strategies.
Our work makes four primary contributions to the field:
1. An improvement of the GDN approach via the threshold calculation based on the learned graph is presented, and shown to detect anomalies more accurately than GDN while improving the ability to locate anomalies across a network.
2. Methods for simulating new benchmark data with highly-sophisticated spatio-temporal structures are provided, contaminated with various types of persistent anomalies.
3. Numerical studies are conducted, featuring a suite of benchmarking data sets, as well as real-world river network data, to explore the strengths and limitations of GDN (and its variants) in increasingly challenging settings.
4. User-friendly, free open-source software for the GDN/GDN+ approach is made available on the pip repository as gnnad, with data and anomaly generation modules, as well as the publication of a novel real-world data set.
The structure of the remainder of the paper is as follows: The next section details the methods of GDN and the model extension GDN+, and describes the methodology of the simulated data and anomaly generation. In the Results section we present an extensive simulation study on the benchmarking data, as well as a real-world case study. The performance of GDN/GDN+ is assessed against other state-of-the-art anomaly detection models. Further details and example code for the newly-released software are also provided. The paper concludes with a discussion of the findings, and the strengths and weaknesses of the considered models.
## Methods
Consider multivariate time series data \(\mathrm{Y}=\left[\mathbf{y}^{(1)},\dots,\mathbf{y}^{(T)}\right]\), obtained from \(n\) sensors over \(T\) time ticks. The (univariate) data collected from sensor \(i=1,\dots,n\) at time \(t=1,\dots,T\) are denoted as \(\mathbf{y}_{i}^{(t)}\). Following the standard semi-supervised anomaly detection approach [18, 19], non-anomalous data are used for training, while the test set may contain anomalous data. That is, we aim to learn the sensor behaviour using data obtained under standard operational conditions throughout the training phase and identify anomalous sensor readings during testing, as those which deviate substantially from the learned behaviour. As the algorithm output, each test point \(\mathbf{y}^{(t)}\) is assigned a binary label, \(a(t)\in\{0,1\}\), where \(a(t)=1\) indicates an anomaly at time \(t\), anywhere across the full sensor network.
The GDN approach [15] for anomaly detection is composed of two aspects:
1. **Forecasting-based time series model:** a non-linear _autoregressive_ multivariate time series model that involves graph neural networks is trained, and
2. **Threshold-based anomaly detection:** transformations of the individual forecasting errors are used to determine if an anomaly has occurred, if such errors exceed a calculated threshold.
The above components are described in more detail below.
**Forecasting-based Time Series Model.** To predict \(\mathbf{y}^{(t)}\), the model takes as input \(w\in\mathbb{N}\) lags of the multivariate series,
\[\mathbf{X}^{(t)}:=\left[\mathbf{y}^{(t-1)},\dots,\mathbf{y}^{(t-w)}\right].\]
The \(i\)-th row (containing sensor \(i\)'s measurements for the previous \(w\) lags) of the above input matrix is represented by the column vector, \(\mathbf{x}_{i}^{(t)}=(\mathbf{y}_{i}^{(t-1)},\dots,\mathbf{y}_{i}^{(t-w)})\). Prior to training, the practitioner specifies acceptable _candidate relationships_ via the sets \(\mathscr{C}_{1},\dots,\mathscr{C}_{n}\), where each \(\mathscr{C}_{i}\subseteq\{1,2,\dots,n\}\) and does not contain \(i\). These sets specify which nodes are allowed to be considered to be connected _from_ node \(i\) (noting that the adjacency graph connections are not necessarily symmetric).
The model implicitly learns a graph structure via training _sensor embedding_ parameters \(\mathbf{v}_{i}\in\mathbb{R}^{d}\) for \(i=1,\dots,n\) which are used to construct a graph. The intuition is that the embedding vectors capture the inherent characteristics of each sensor, and that sensors which are "similar" in terms of the angle between their vector embeddings are considered connected. Formally, the quantity \(e_{ji}\) is defined as the cosine similarity between the vector embeddings of sensors \(i\) and \(j\):
\[e_{ji}=\frac{\mathbf{v}_{i}^{\top}\mathbf{v}_{j}}{\left\|\mathbf{v}_{i}\right\|\left\|\mathbf{v}_{j}\right\|}\,\mathbb{I}\{j\in\mathscr{C}_{i}\},\quad i,j\in\{1,\dots,n\},\]
with \(\left\|\cdot\right\|\) denoting the Euclidean norm, and indicator function \(\mathbb{I}\) which equals \(1\) when node \(j\) belongs to set \(\mathscr{C}_{i}\), and \(0\) otherwise. Note that the similarity is forced to be zero if a connecting node is not in the permissible candidate set. Next, let \(e_{j,(i)}\) be the \(i\)-th largest value in \((e_{j1},\dots,e_{jn})\). A _graph-adjacency matrix_ (and in turn a graph itself) is then constructed from the sensor similarities via:
\[A_{ji}=\mathbb{I}\{\{e_{ji}\geq e_{j,(K)}\}\cup\{i=j\}\},\]
for user-specified \(K\in\{1,\ldots,n\}\) which determines the maximum number of edges from a node, referred to as the "Top-K" hyperparameter.
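As a minimal sketch of this construction (variable names are ours), the cosine similarities and Top-K selection can be computed as follows.

```
import numpy as np

def build_adjacency(V, candidates, K):
    """V: (n, d) sensor embeddings; candidates[i]: the candidate set C_i of
    allowed sources for node i; K: the Top-K hyperparameter.
    Returns a binary adjacency matrix A."""
    n = V.shape[0]
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    E = np.zeros((n, n))
    for i in range(n):
        for j in candidates[i]:
            E[j, i] = Vn[i] @ Vn[j]        # e_ji: cosine similarity
    A = np.zeros((n, n), dtype=int)
    for j in range(n):
        A[j, np.argsort(E[j])[-K:]] = 1    # keep the K largest e_ji
        A[j, j] = 1                        # i = j is always included
    return A

# e.g. A = build_adjacency(np.random.randn(20, 8),
#                          [[j for j in range(20) if j != i] for i in range(20)],
#                          K=5)
```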
The above describes how the trainable parameters \(\{\mathbf{v}_{i}\}_{i=1}^{n}\) yield a graph. Next, the lagged series are fed individually through a shallow _Graph Attention Network_ [20] that uses the previously constructed graph. Here, each _node_ corresponds to a sensor, and the _node features_ for node \(i\) are the lagged (univariate) time-series values, \(\mathbf{x}_{i}^{(t)}\in\mathbb{R}^{w}\). Let a parameter weight matrix \(\mathbb{W}\in\mathbb{R}^{d\times w}\) apply a shared linear transform to each node. Then, the output of the network is given by
\[\mathbf{z}_{i}^{(t)}=\max\left\{\mathbf{0},\;\sum_{j:A_{ij}>0}\alpha_{ij}\mathbb{W}\mathbf{x}_{j}^{(t)}\right\},\]
where \(\mathbf{z}_{i}^{(t)}\) is called the _node representation_, and coefficients \(\alpha_{ij}\) are the attention paid to node \(j\) when computing the representation for node \(i\), with:
\[\pi_{ij}=\text{LeakyReLU}\left(\mathbf{a}^{\top}\left(\mathbf{v}_{i} \oplus\mathbb{W}\mathbf{x}_{i}^{(t)}+\mathbf{v}_{j}\oplus\mathbb{W}\mathbf{x}_{j}^{(t)} \right)\right), \tag{1}\] \[\text{where }\alpha_{ij}=\frac{\exp(\pi_{ij})}{\sum_{k:A_{ik}>0}\exp(\pi_{ik})},\]
with learnable parameters \(\mathbf{a}\in\mathbb{R}^{2d}\), where \(\oplus\) denotes concatenation, and \(\text{LeakyReLU}(\mathbf{x}):=\max\{\delta\mathbf{x},\mathbf{x}\}\) for small \(\delta>0\), with the maximum operation applied elementwise. Note the addition in Equation 1 (see Footnote 2), and that \(\sum_{j=1}^{n}\alpha_{ij}=1\). Intuitively, the above is an automated mechanism to aggregate information from a node itself and neighbouring nodes (whilst simultaneously assigning a weight of how much information to take from each neighbour), computing a vector that represents the extracted information about node \(i\) and its neighbours' interaction with it. The final model output (prediction) is given by,
Footnote 2: It seems that the authors of [15] mistakenly denote this addition as concatenation in the paper, however, their corresponding reference code computes addition.
\[\tilde{\mathbf{y}}^{(t)}=f_{\mathbf{\eta}}\left(\left[\mathbf{v}_{1}\circ\mathbf{z}_{1}^{(t)},\ldots,\mathbf{v}_{n}\circ\mathbf{z}_{n}^{(t)}\right]\right),\]

where \(f_{\mathbf{\eta}}:\mathbb{R}^{d\times n}\rightarrow\mathbb{R}^{n}\) is a feedforward neural network with parameters \(\mathbf{\eta}\), and \(\circ\) denotes element-wise multiplication. The model is trained by optimizing the parameters \(\{\mathbf{v}_{i}\}_{i=1}^{n}\), \(W\), \(\mathbf{a}\), and \(\mathbf{\eta}\) to minimize the mean squared error loss function
\[\mathcal{L}=\frac{1}{T-w}\sum_{t=w+1}^{T}\left|\left|\tilde{\mathbf{y}}^{(t)}-\bm {y}^{(t)}\right|\right|^{2}.\]
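Assuming \(A_{ij}>0\) encodes the neighbourhood of node \(i\), the attention step above can be sketched in NumPy as follows; this is an illustration of the equations, not the authors' implementation.

```
import numpy as np

def attention_layer(X, V, A, W, a, delta=0.2):
    """X: (n, w) lagged inputs; V: (n, d) embeddings; A: (n, n) adjacency;
    W: (d, w) shared transform; a: (2d,) attention vector.
    Returns node representations Z: (n, d)."""
    n, d = V.shape
    G = X @ W.T                                    # W x_i for every node
    Z = np.zeros((n, d))
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        gi = np.concatenate([V[i], G[i]])
        scores = np.array([a @ (gi + np.concatenate([V[j], G[j]]))
                           for j in nbrs])
        pi = np.where(scores > 0, scores, delta * scores)  # LeakyReLU
        alpha = np.exp(pi - pi.max())
        alpha /= alpha.sum()                       # softmax over neighbours
        Z[i] = np.maximum(0.0, alpha @ G[nbrs])    # ReLU of the weighted sum
    return Z
```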
**Threshold-based Anomaly Detection.** Given the learned inter-sensor and temporal relationships, we are able to detect anomalies as those which deviate from these interdependencies. An _anomalousness score_ is computed for each time point in the test data. For each sensor \(i\), we denote the prediction error at time \(t\) as,
\[\epsilon_{i,t}=|y_{i}^{(t)}-\tilde{y}_{i}^{(t)}|,\]
with \(|\cdot|\) denoting the absolute value; the vector of prediction errors for sensor \(i\) is \(\mathbf{\epsilon}_{i}\in\mathbb{R}^{T-w}\). Since the error values of different sensors may vary substantially, we perform a robust normalisation of each sensor's errors to prevent any one sensor from overly dominating the others, that is,
\[\tilde{\epsilon}_{i}=\left(\frac{\mathbf{\epsilon}_{i}-\text{Median}(\mathbf{ \epsilon}_{i})}{\text{IQR}(\mathbf{\epsilon}_{i})}\right),\]
where IQR denotes _inter-quartile range_. In the original work by [15], a time point \(t\) is flagged as anomalous if,
\[A(t)=\mathbb{I}\{\max_{i}(\tilde{\epsilon}_{i,t})>\kappa\},\]
using the notation \(\tilde{\epsilon}_{i,t}\) for the error value at the \(t\)-th index. Alternatively, the authors recommend using a _simple moving average_ of \(\tilde{\epsilon}_{i}\) and flagging time \(t\) as anomalous if the maximum of that moving average exceeds \(\kappa\). The authors specify \(\kappa\) as the maximum of the normalised errors observed on (non-anomalous) validation data. However, this single threshold is applied uniformly to all sensors.
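The robust normalisation and max-threshold rule can be sketched as follows; array shapes and names are our assumptions.

```
import numpy as np

def robust_normalise(err):
    """err: (n, T') absolute prediction errors; median/IQR-normalise per sensor."""
    med = np.median(err, axis=1, keepdims=True)
    q75, q25 = np.percentile(err, [75, 25], axis=1, keepdims=True)
    return (err - med) / (q75 - q25)

def gdn_flags(err_test, err_val):
    """Original GDN rule: flag time t when the maximum normalised error
    across all sensors exceeds the maximum seen on validation data."""
    kappa = robust_normalise(err_val).max()
    return robust_normalise(err_test).max(axis=0) > kappa
```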
### Sensor-based Anomaly Threshold: GDN+
The behaviour of water quality variables may differ across space; for example, water levels at high-altitude river network branches are generally characterised by rainfall patterns, whereas water levels downstream near a river outlet can also be influenced by tidal patterns. A fixed threshold across the network does not allow for any local adaptation in error sensitivity. For this reason, this work also considers the novel sensor-specific threshold calculation,
\[A_{i}(t)=\mathbb{I}\{\tilde{\epsilon}_{i,t}>\kappa_{i}\},\]
where \(\kappa_{i}\) is chosen such that,
\[\frac{100}{|\{j:A_{ij}>0\}|}\sum_{j:A_{ij}>0}\mathbb{I}\{\tilde{\epsilon}_{j,t}<\kappa_{i}\}=\tau,\]
for a user-specified percentile \(\tau\in(0,100)\), where \(|\cdot|\) denotes set cardinality. In other words, the threshold for each sensor, \(\kappa_{i}\), is set as the \(\tau\)-th percentile of the normalised error scores across the neighbourhood of \(i\), computed on the validation data set. Unless otherwise stated, we set \(\tau=99\). In this way, each sensor's threshold is based only on its direct neighbourhood, as opposed to the original method which uses the global maximum; being a percentile over local errors, it is both attuned to the local behaviour of the system and more robust. We refer to the GDN model using this variant of the threshold-based anomaly detection as GDN+.
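A sketch of the GDN+ sensor-specific thresholds, reusing `robust_normalise` from the previous sketch; pooling the neighbourhood errors over all validation time points is our reading of the text.

```
import numpy as np

def sensor_thresholds(err_val_norm, A, tau=99):
    """kappa_i: the tau-th percentile of normalised validation errors
    pooled over the neighbourhood {j : A_ij > 0} of sensor i."""
    kappa = np.empty(A.shape[0])
    for i in range(A.shape[0]):
        nbrs = np.flatnonzero(A[i])
        kappa[i] = np.percentile(err_val_norm[nbrs].ravel(), tau)
    return kappa

# per-sensor flags on the test set:
# flags = err_test_norm > sensor_thresholds(err_val_norm, A)[:, None]
```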
### New Class of Benchmarking Data
The following is a method for simulating synthetic datasets with persistent anomalies inspired by the statistical models recently used to model river network data [9]. Let \(\mathcal{S}\) denote an arbitrary set of individual spatial locations, with locations \(\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{n}\in\mathcal{S}\) chosen by experimental design [21] or otherwise. Consider a linear mixed
model with \(n\times 1\) response \(Y\), and \(n\times m\) design matrix \(X\) of explanatory variables spatially indexed at locations \(\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{n}\),
\[\mathbf{Y}_{t}=\mathbf{\beta}_{0}+\mathbf{X}_{t}\mathbf{\beta}+\mathbf{Z}+\mathbf{e}_{0},\quad t=1,\ldots,T, \tag{2}\]
with time-homogeneous spatially-correlated random effects \(\mathbf{Z}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma}_{\mathbf{Z}})\) and a vector of independent noise terms \(\mathbf{e}_{0}\!\sim\!\mathcal{N}(\mathbf{0},\sigma_{0}^{2}\mathbb{I})\), yielding \(\text{Cov}[\mathbf{Y}_{t}|\mathbf{X}_{t}=\mathbf{x}_{t}]=\mathbf{\Sigma}_{\mathbf{Z}}+\sigma_{0}^{2}\mathbb{I}\), where \(\mathbb{I}\) denotes the \(n\times n\) identity matrix. The covariates \(\mathbf{X}_{t}\) for \(t=1,\ldots,T\) are simulated according to an autoregressive process based on an underlying sequence of independent random fields,
\[\mathbf{X}_{t}=\sum_{i=0}^{p}\varphi_{i}\widetilde{\mathbf{X}}_{t-i}, \tag{3}\]
where \(p\) is the order of the autoregressive process, and
\[\widetilde{\mathbf{X}}_{t}\overset{\text{iid}}{\sim}\mathcal{N}(\mathbf{0},\mathbf{ \Sigma}_{\mathbf{X}}),\quad t=1,\ldots,T.\]
Note that other distributions may be used. Above,
\[(\mathbf{\Sigma}_{\mathbf{X}})_{ij}=k(\mathbf{s}_{i},\mathbf{s}_{j})\]
for some covariance kernel \(k\). For example,
\[k(\mathbf{s},\mathbf{s}^{\prime};\sigma,\alpha)=\sigma^{2}\exp\left(-\frac{||\mathbf{s}- \mathbf{s}^{\prime}||^{2}}{\alpha}\right), \tag{4}\]
where \(||\cdot||\) denotes the Euclidean norm, \(\sigma^{2}>0\) is the covariance-scaling parameter, and \(\alpha\in\mathbb{R}\) is the range parameter that controls the rate of decay of correlation between points over distance. Figure 1 illustrates an example of a generated Gaussian random field evolving over time.
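A sketch of the simulation in Equations 2-4 for the Euclidean case is given below (the river-network case would substitute the tail-up kernel of Equation 5 for \(\mathbf{\Sigma}_{\mathbf{Z}}\)); all parameter values are illustrative.

```
import numpy as np

def gaussian_kernel(S, sigma2=2.0, alpha=10.0):
    """Equation 4 evaluated on site coordinates S: an (n, 2) array."""
    d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-d2 / alpha)

def simulate(S, T=1000, phi=(0.7, 0.3), beta0=1.0, beta1=2.0,
             sigma2_z=1.0, sigma2_0=0.1, seed=0):
    """Simulate Equations 2-3 with one covariate and AR order p = 1."""
    rng = np.random.default_rng(seed)
    n, p = S.shape[0], len(phi) - 1
    jitter = 1e-8 * np.eye(n)
    Lx = np.linalg.cholesky(gaussian_kernel(S) + jitter)
    X_tilde = (Lx @ rng.standard_normal((n, T + p))).T     # iid random fields
    X = sum(phi[i] * X_tilde[p - i:T + p - i] for i in range(p + 1))
    Lz = np.linalg.cholesky(gaussian_kernel(S, sigma2_z) + jitter)
    Z = Lz @ rng.standard_normal(n)                        # shared random effect
    eps = np.sqrt(sigma2_0) * rng.standard_normal((T, n))
    return beta0 + beta1 * X + Z + eps                     # Equation 2, (T, n)
```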
We consider two scenarios in generating simulated data: 1) spatial relationships characterised by Euclidean distance only, where the covariance matrix, \(\mathbf{\Sigma}_{\mathbf{Z}}\), is constructed via the kernel function given in Equation 4, and 2) data simulated on a river network, where \(\mathbf{\Sigma}_{\mathbf{Z}}\) is constructed via the kernel function given in Equation 5. Together, this approach provides a variety of simulated data with highly-sophisticated dependency structure, see Figure 2.
Two points \(\mathbf{s}_{i}\), \(\mathbf{s}_{j}\) on a river network are said to be _flow-connected_ if they share water flow, and _flow-unconnected_ otherwise. We define stream distance, \(h_{\text{div}}(\mathbf{s}_{i},\mathbf{s}_{j})\), as the shortest distance separating \(\mathbf{s}_{i}\) and \(\mathbf{s}_{j}\) when travelling _along_ a given river network. Tail-up covariance models for river networks, introduced in [8], effectively represent spatial relationships when variables are dominated by flow (e.g. pollutants enter a stream and only impact downstream locations). By construction, the tail-up covariance function only allows for correlation between flow-connected sites:
\[k_{\text{TU}}(\mathbf{s}_{i},\mathbf{s}_{j};\sigma,\alpha)=\omega_{ij}\sigma^{2}\exp\left(-\frac{h_{\text{div}}(\mathbf{s}_{i},\mathbf{s}_{j})}{\alpha}\right)\mathcal{F}_{ij}, \tag{5}\]
where \(\mathcal{F}_{ij}\) is equal to one if \(\mathbf{s}_{i}\) and \(\mathbf{s}_{j}\) are flow-connected, and zero otherwise, and \(\omega_{ij}\) is a weighting attributed to each stream segment to account for the upstream branching network structure and to ensure stationarity in variance (for full details, see [22, 23]). The weightings corresponding to each segment may incorporate flow volume, the area of the catchment, or a proxy such as stream order [24]. Note that there are various choices of covariance models: tail-down models allow correlation between both flow-connected and flow-unconnected locations, and may be more suitable for water variables such as temperature, or for organisms that can move both upstream and downstream [25]. Here we use the exponential function for decay; for further covariance model examples, see [8].
Once the base multivariate time series is constructed as above, it is modified to include persistent anomalies as follows. The hyperparameters are \(n_{\text{anomaly}}\geq 0\), the number of subsequence anomalies, and \(\lambda_{\text{anomaly}}>0\), the average length of an anomalous subsequence, for each anomaly type. In this example, we consider two types of anomalies, high-variability and drift; see Algorithm 1.
```
input : Time series data \(\text{Y}=\left[\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(T)}\right]\),
        number of locations \(n\),
        expected length of each anomaly type \(\lambda_{\text{drift}}\) and \(\lambda_{\text{var}}\),
        number of anomalies \(n_{\text{drift}}\) and \(n_{\text{var}}\),
        variability anomaly scale \(\zeta\), and drift anomaly parameter \(\delta\).
for anomaly \(\in\{\text{drift},\text{var}\}\) do
    for \(i=1,\ldots,n_{\text{anomaly}}\) do
        Draw location \(S\sim\text{Uniform}(\{1,2,\ldots,n\})\)
        Draw time \(t\sim\text{Uniform}(\{1,\ldots,T\})\)
        Draw length \(L\sim\text{Poisson}(\lambda_{\text{anomaly}})\)
        if anomaly = drift then
            \(\mathbf{v}\leftarrow(\delta,2\delta,\ldots,L\delta)\)                  // drift
        else
            \(\mathbf{v}\sim\mathcal{N}(\mathbf{0},\zeta^{2}\mathbb{I}_{L\times L})\)  // variability
        \(\mathbf{y}_{S,t:(t+L)}\leftarrow\mathbf{y}_{S,t:(t+L)}+\mathbf{v}\)
```
**Algorithm 1** Two-type Anomaly Generation
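A NumPy sketch of Algorithm 1 is given below (one call per anomaly type; names and default parameter values are illustrative).

```
import numpy as np

def inject_anomalies(Y, n_anom, lam, kind, delta=4.0, zeta=13.0, seed=0):
    """Y: (n, T) series, modified in place; kind is 'drift' or 'var'.
    Returns Y and a matching 0/1 label array."""
    rng = np.random.default_rng(seed)
    n, T = Y.shape
    labels = np.zeros_like(Y, dtype=int)
    for _ in range(n_anom):
        s = rng.integers(n)                    # location S
        t = rng.integers(T)                    # start time
        L = min(rng.poisson(lam), T - t)       # length, clipped at the boundary
        if kind == "drift":
            v = delta * np.arange(1, L + 1)    # linear drift
        else:
            v = zeta * rng.standard_normal(L)  # high variability
        Y[s, t:t + L] += v
        labels[s, t:t + L] = 1
    return Y, labels
```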
Figure 1: Smooth random Gaussian field used to generate covariate values, \(X\), in the simulation studies. Examples of the field are shown for \(t=1,2,3\). Sensor locations are shown as white dots.
### Python Package: Graph-Based Neural Network Anomaly Detection (gnnad)
The Python package gnnad introduced in this paper extends and generalises the research code originally implemented by [15], which is incompatible with newer package dependencies and offers only a command line interface. The code is refactored to be modular and user-friendly, with a scikit-inspired interface, and extended to include visualisation, data and anomaly generation modules, as well as the GDN+ model extension. A continuous integration/continuous deployment (CI/CD) pipeline is established with unit testing to ensure that changes to the code are tested and deployed efficiently. Comprehensive documentation now accompanies the codebase, enhancing readability for future developers and facilitating maintenance, reuse, and modification. Furthermore, rigorous error handling is implemented to improve the software experience. These developments have resulted in a more robust, user-friendly and easily distributable package that is available via [https://github.com/KatieBuc/gnnad](https://github.com/KatieBuc/gnnad) and the pip repository, gnnad. See below for example code that shows the process of fitting the GDN+ model.
```
from sklearn.model_selection import train_test_split
from gnnad.graphanomaly import GNNAD
from gnnad.generate import GenerateGaussian, GenerateAnomaly

# generate sample data
gengauss = GenerateGaussian(T=4000, seed=435, n_obs=20)
X = gengauss.generate()

# split train/test
X_train, X_test = train_test_split(X)

# generate anomalies on the test set
anoms = GenerateAnomaly(X_test)
X_test = anoms.generate(anoms.variability, lam=3, prop_anom=0.07, seed=45)
X_test = anoms.generate(anoms.drift, lam=11, prop_anom=0.07, seed=234)
y_test = anoms.get_labels()

# instantiate and fit the GNN model object
model = GNNAD(threshold_type="max_validation", topk=5, slide_win=200)
fitted_model = model.fit(X_train, X_test, y_test)

# sensor-based thresholds (GDN+)
pred_label = fitted_model.sensor_threshold_preds(tau=99)

# print evaluation metrics
fitted_model.print_eval_metrics(pred_label)
```
## Results
This section presents a summary of the main findings for anomaly detection using GDN/GDN+ on both simulated and real-world data. To ensure data quality, the aim for practitioners is to maximise the ability to identify anomalies of different types while minimising false detection rates. We define the following metrics in terms of true positive (TP), true negative (TN), false positive (FP) and false negative (FN) classifications. In other words, the main priority is to minimise FN, while maintaining a reasonable number of FP such that checking the total number of positive flags is not an operational burden. Accordingly, we use recall, defined by \(\frac{\text{TP}}{\text{TP+FN}}\), to evaluate performance on the test set and to select hyperparameters; recall is the proportion of actual positive cases that are correctly identified by the model. We also report performance using precision (\(\frac{\text{TP}}{\text{TP+FP}}\)), accuracy (\(\frac{\text{TP+TN}}{\text{TP+TN+FP+FN}}\)) and specificity (\(\frac{\text{TN}}{\text{TN+FP}}\)).
To evaluate model performance, three existing anomaly detection models are used as benchmarks: 1. The naive (random walk) Autoregressive Integrated Moving Average Model (ARIMA) prediction model from [26, 7]; 2. HDoutliers [17], an unsupervised algorithm designed to identify anomalies in high-dimensional data, based on a distributional model that allows for probability assignment to an anomaly; and 3. DeepAnT [14], an unsupervised, deep learning-based approach to detecting anomalies in time series data.
\begin{table}
\begin{tabular}{l r r r} \hline Dataset & \#Train & \#Test & \%Anomalies \\ \hline _SimEuc_ & \(3,000\) & \(1,000\) & \(13.2\) \\ _SimRiver_ & \(3,000\) & \(1,000\) & \(13.2\) \\ Herbert & \(12,745\) & \(3,499\) & \(\mathbf{58.0}\) \\ \hline \end{tabular}
\end{table}
Table 1: **Details of the three data sets used in the case studies.**
Figure 2: _SimEuc_ site locations across space (left) and _SimRiver_ site locations along a river network (right). Both simulations use the same (x, y) coordinates for sensor locations. The direction of flow is from top to bottom for SimRiver. Red dots indicate sites for which time series have been shown in Figure 3 and Figure 4.
### Simulation Study: Benchmark Data
Data are generated using the linear-mixed model described in Equation 2, with differing spatial dynamics: _SimEuc_ where the random effect, \(\mathbf{Z}\), is characterised by Euclidean distance only, and _SimRiver_ where \(\mathbf{Z}\) simulates complex river network dynamics [27], using the same site locations and covariate values, \(\mathbf{X}\). Detecting anomalies that involve multiple consecutive observations is a difficult task that often requires user intervention, and is the focus of this study. We consider scenarios with drift and high-variability anomaly types, which together contaminate 13.2% of the test data, given \(n_{\text{drift}}=5,\lambda_{\text{drift}}=11,n_{\text{var}}=24,\lambda_{\text {var}}=3\), see Table 1.
Figure 3 visualises aspects of the _SimEuc_ dataset, where sensor 37 and sensor 8 are in close (Euclidean) proximity, resulting in a high correlation between the locations (0.65), as anticipated. Note that sensor 23 exhibits anomalous behavior, high-variability and drift, consecutively, over time. Compared to the _SimRiver_ dataset, shown in Figure 4, we note how the time series from sensor 37 and sensor 8 are no longer strongly correlated (0.07), despite their close proximity, as they are not flow-connected in the simulated river network.
Table 2 shows the performance of the different anomaly detection methods. The best performance in terms of recall is highlighted in bold, while the second-best performance is underlined. We observe that GDN/GDN+ outperforms most other models in terms of recall, which is the fraction of true positives among all actual positive instances. Specifically, GDN has a recall score of 83.3% on _SimEuc_ and 72.7% on SimRiv, while GDN+ has the second highest recall score of 85.6% on _SimEuc_ and 78.0% on SimRiv. Although ARIMA performed best in terms of recall, the high percentage of detected anomalies, 81.6% and 85.6%, is impractical to use (discussed below) and results in low accuracy scores of 28.6% and 25.3%, respectively. HDoutliers did not flag any time point as anomalous in any test case. DeepAnT tends to classify most samples as negative, resulting in a low recall score. These results suggest that GDN/GDN+ are best able to detect a high proportion of actual anomalies in the datasets.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Data & Model & Rec & Prec & Acc & Spec \\ \hline SimEuc & HDoutliers & **0.0** & **0.0** & 86.8 & 100.0 \\ & ARIMA & **88.6** & 14.4 & 28.6 & 19.4 \\ & DeepAnT & 3.8 & 11.9 & 83.5 & 95.7 \\ & GDN & 83.3 & 55.3 & 88.9 & 89.7 \\ & GDN+ & 85.6 & 48.1 & 85.9 & 85.9 \\ \hline SimRiv & HDoutliers & **0.0** & **0.0** & 86.8 & 100.0 \\ & ARIMA & **91.7** & 14.2 & **25.3** & **15.1** \\ & DeepAnT & **0.8** & **9.1** & 85.9 & 98.8 \\ & GDN & 72.7 & 54.2 & 88.3 & 90.6 \\ & GDN+ & 78.0 & 43.1 & 83.5 & 84.3 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Anomaly detection performance in terms of recall (%), precision (%), accuracy (%), and specificity (%) of GDN and its variants and baseline methods for the simulation study.
Figure 4: A selection of series from the _SimRiver_ data set; sensor 8 and sensor 37 are not flow-connected sites, and share low correlation (Pearson’s coefficient of 0.07). Drift anomalies (red dots) are shown in data from sensor 4. A Savitzky-Golay filter is used to smoothen the time series for visual representation (purple line).
Figure 3: A selection of time series from the _SimEuc_ data set; sensor 8 and sensor 37 separated by a short Euclidean distance share high correlation (Pearson’s coefficient of 0.65). A Savitzky-Golay filter smoothens the time series (purple line). On the bottom, sensor 23 illustrates high-variability and drift anomalies, consecutively (red dots).
Figure 5 shows the classifications of anomalies in the simulation study. For reasons mentioned, recall is the performance metric of interest, but we also consider the trade-off between recall and precision. Lower precision means that the model may also identify some normal instances as anomalies, leading to false positives. In the context of river network anomaly detection, FP may be manually filtered, but it is critical to minimise FN. Note that GDN+ outperforms GDN in minimising the FN count, but at the cost of increasing FP, in both data sets. Such a trade-off is acceptable and considered an improvement in this context. Conversely, while ARIMA has the highest recall score, the number of FP classifications is impractical for practitioners to deal with (\(>\)70% of the test data). We also note that drift anomalies are harder to detect than high-variability anomalies, with drift as the majority of FN counts, in all cases.
The authors of the GDN model demonstrated its efficacy in detecting anywhere-within-system failures at time \(t\) by applying a threshold to all sensors within a system. However, the use of sensor-based thresholds in GDN+ has the advantage of indicating anomalies at the individual sensor level. In the context of monitoring river networks, it is crucial to identify the anomalous sensor, \(i\), at a given time \(t\). The percentage of true positives detected at the correct sensor, \(i\), using the sensor-based anomaly threshold, \(A_{i}(t)\), in GDN+, was 92% and 89% for _SimEuc_ and SimRiver, respectively. Similarly, the rate of true positives detected in the neighbourhood of the correct sensor \(i\) were 96% and 91%, respectively. This granularity of information is essential for large networks consisting of independent sensors that are separated by significant spatial distances, where the cost of time and travel for sensor replacement or maintenance is substantial.
### Replication Study
This section explores the anomaly detection performance of GDN/GDN+ across multiple simulated data sets. The approach is as follows. First, ten new sets of spatial sampling locations are created, and for each set a Gaussian random field evolving over time is simulated, as per Equation 3. For each set of locations, we again consider both the Euclidean spatial characterisation (_SimEuc_), and the river network spatial characterisation (_SimRiver_), yielding a total of 20 benchmark data sets. In the first case, we use the Euclidean covariance model in Equation 4, parameterised by, \(\sigma^{2}\in[1,5]\), and, \(\alpha\in[5,15]\), with independent noise parameter, \(\sigma_{0}^{2}\in[0,1]\), and regression parameters \(\beta_{0}\in[1,10]\), and, \(\beta_{1}\in[1,10]\), for the linear model in Equation 2. The values of the parameters are chosen uniformly at random. The Tail-up covariance model in Equation 5 is used in the second case, parameterised as above. Then, anomalies are generated with the following parameters: drift \(\delta\in[3,6]\), variability \(\zeta\in[12,15]\), length of anomalies \(\lambda_{\text{drift}}\in[5,10]\), \(\lambda_{\text{var}}\in[2,10]\), and the number of anomalies, \(n_{\text{drift}}\in[50,100]\), \(n_{\text{var}}\in[50,100]\) (see Algorithm 1). Across all simulations, the size of the data set is fixed to have length, \(T=4000\), and number of sensors, \(n=40\).
Figure 6 illustrates the anomaly detection performance for GDN and GDN+, run on each data set with sliding window length \(w=3\) and Top-K hyperparameter \(K=5\). Note that the total number of anomalies can be read off as the bar height of TP+FN. In every scenario, GDN+ improves the FN count (red line), but at the cost of an increased FP count (orange line). Whether such a tradeoff is tolerable
Figure 5: Anomaly detection performance of ARIMA, GDN, and GDN+ for the simulation study, in terms of true positive (TP), true negative (TN), false positive (FP) and false negative (FN).
Figure 6: Anomaly detection performance of GDN, and GDN+ across the twenty simulated benchmarks in the replication study. Note that GDN+ consistently decreases false negatives (red line) in every case, but also increases false positives (orange line).
depends on how critical it is in practical scenarios that true anomalies are successfully detected. Note that performance varies from one scenario to the next. Nevertheless, despite the simulated datasets being extremely noisy and complex, GDN and GDN+ appear to succeed in detecting anomalies where other methods cannot.
### Case Study: Herbert River
This case study examines water-level data collected from eight sites located across the Herbert river, a major river system located in the tropical region of Australia, as shown in Figure 7. The time series data is highly non-stationary, characterised by river _events_ caused by abnormal rainfall patterns, with some coastal sites exhibiting shorter periodicity trends which can be attributed to tidal patterns, see Figure 8. The spatial relationships are complex, and depend on the surrounding water catchment areas, spatial rainfall patterns, dams, and other impediments. In-situ sensors are prone to various anomalies such as battery failure, biofouling (accumulation of microorganisms, plants, algae, or small animals), and damage. In some cases, anomalies can manifest as the absence of a water event (i.e., flatlining) rather than the presence of abnormal time series patterns (i.e., spikes, variability, drift). In real-world scenarios, anomalies can persist for extended periods, and resolving them may require traveling to remote locations to inspect and repair sensors. As seen in Figure 8, anomalies at time \(t\) are largely attributed to Sensor 4, which was out of water for long periods of time.
The Herbert river is a challenging data set for all of the anomaly detection models, due to the sparse placement of sensors across the network (fewer sensors at greater distances apart, resulting in weaker spatial relationships) and because the test set contains a high proportion of anomalies (58%; see Table 1). GDN applied to the real-world dataset yields a recall of 29.2%, with GDN+ improving recall to 34.8% (see Table 3). Model performance suffers primarily due to the failure to detect the large anomaly spanning 2022-01-10 in Figure 8. This may be attributed to the learned graph relationships being characterised by river events in the training data; without such events, it is difficult to identify when a sensor is flat-lining. However, the model successfully identified the anomalies spanning 2021-12-27 and 2022-01-03, which coincided with river events.
Figure 9 shows the learned graph adjacency matrix, \(A\). Sensor 1, separated geographically by a large distance from the other sensors, has weaker relationships with the other nodes. Interestingly, the attention weights indicate that sensor 3 is strongly influenced by sensor 6, with tidal patterns from sensor 6 being evident in the predictions of sensor 3. Large error scores indicating the anomaly spanning 2021-12-27 are primarily observed in sensor 2, which is impacted by the anomalous sensors 3 and 4, where the predicted values are low. However, due to the small network of sensors in this case study, it is difficult to determine which sensor had the anomaly.
Adjusting the threshold in the GDN+ model can only enhance performance to a limited extent, since some anomalies may have insignificant error scores under the underlying time series model. For instance, the anomaly spanning 2022-01-10 has negligible error scores because the prediction model was performing well and no river events had occurred, making this flat-lining type of anomaly challenging to detect in this particular scenario. Therefore, relying solely on the model may not be sufficient in practice, and we recommend implementing some basic expert rules. We introduce another model variant, GDN++, by applying a simple filter to the time series ensuring that all values are positive, used in conjunction with the adjusted threshold calculation (GDN+). That is, an anomaly at time \(t\) on sensor \(i\) is flagged if \(\max\{\mathbb{I}\{y^{(t)}_{i}<0\},A_{i}(t)\}=1\). GDN++ successfully detects all anomalies.
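The GDN++ rule is trivially cheap to apply on top of GDN+. A minimal sketch is given below, assuming the observations and the GDN+ flags are held in arrays (the names are illustrative):

```python
import numpy as np

def gdn_plus_plus(y, gdn_plus_flags):
    """GDN++ decision rule: flag sensor i at time t if the observed value
    is negative (physically impossible for water level) OR the GDN+
    threshold test A_i(t) already flags it.

    y              : (T, n) array of observed sensor values
    gdn_plus_flags : (T, n) boolean array of GDN+ anomaly flags
    """
    return np.maximum(y < 0, gdn_plus_flags)  # elementwise max of the two indicators
```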
\begin{table}
\begin{tabular}{l l r r r r} \hline Data & Model & Rec & Prec & Acc & Spec \\ \hline Herbert & HDoutliers & **0.0** & **0.0** & 42.0 & 100.0 \\ & ARIMA & 30.5 & 62.9 & 51.3 & 77.5 \\ & DeepAnT & **1.6** & 39.7 & 44.0 & 97.0 \\ & GDN & 29.2 & 60.6 & 50.1 & 76.2 \\ & GDN+ & 34.8 & 59.2 & 50.4 & 70.0 \\ & GDN++ & **100.0** & 80.7 & 86.7 & 70.0 \\ \hline \end{tabular}
\end{table}
Table 3: Anomaly detection performance in terms of recall (%), precision (%), accuracy (%), and specificity (%) of GDN and its variants and baseline methods for the Herbert river case study.
Figure 7: The Herbert river system and sensor locations.
## Discussion
Multivariate anomaly detection and prediction models for spatio-temporal sensor data have the potential to transform water quality observation, modelling, and management [8, 7]. The provision of trustworthy sensor data has four major benefits: (1) it enables the production of finer-scale, more reliable and more accurate estimates of sediment and nutrient loads; (2) it provides real-time feedback to landholders and managers; (3) it guides compliance with water quality guidelines; and (4) it allows for the assessment of ecosystem health and the prioritisation of management actions for sustaining aquatic ecosystems. However, technical anomalies in the data provided by the sensors can occur due to factors such as low battery power, biofouling of the probes, and sensor miscalibration. As noted by [7], a wide variety of anomaly types are present within in-situ sensor data (e.g., high variability, drift, spikes, shifts). Most anomaly detection methods used for water quality applications tend to target specific anomalies, such as sudden spikes or shifts [28, 29, 30]. Detecting persistent anomalies, such as sensor drift and periods of abnormally high variability, remains very challenging for statistical and machine learning research. Such anomalies are often overlooked by distance- and kernel-based methods, yet must be detected before the data can be used, because they confound the assessment of status and trends in water quality. Understanding the relationships among, and typical behaviours of, water quality variables, and how these differ among climate zones, is thus an essential step in distinguishing anomalies from real water quality events.
We investigated the graph-based neural network model GDN [15] for its ability to capture complex interdependencies between different variables in a semi-supervised manner. As such, GDN offers the ability to capture deviations from expected behaviour in a high-dimensional setting. We developed novel benchmarking data sets for subsequence anomaly detection (of variable length and type), with a range of spatio-temporal complexities, inspired by the statistical models recently used for river network data [9]. Results showed that GDN tended to outperform the benchmarks in anomaly detection. We developed a model extension, GDN+, by adjusting the threshold calculation; GDN+ was shown to further improve performance. A replication study with multiple benchmarking data sets demonstrated the consistency of these results. Sensor-based thresholds also proved useful for identifying the neighbourhood from which an anomaly originated in the simulation study.
We used a real-world case study of water level in the Herbert river, with non-stationary time series characterised by multiple river events caused by abnormal rainfall patterns. In this case, most of the anomalies appeared as flat-lining, due to the river drying out. Considering an individual time series, such anomalies may not appear obvious, as it is the failure to detect river events (in a multivariate setting) that is indicative of an anomalous sensor. Despite the challenges in the data, GDN+ was shown to successfully detect technical anomalies during river events.
Figure 8: Test data of water level collected from sensors across the Herbert river (light blue), and the corresponding predictions from the GDN model (dark blue). Actual anomalies (red dots) are shown, along with the predicted anomalies (orange line) on the bottom.
Figure 9: Graph with learned adjacency matrix, \(A\). The edge weightings are determined by \(\alpha_{ij}\), which indicates attention and depend on model input, \(X^{(i)}\).
Combined with a simple expert-based rule (GDN++), all anomalies were successfully detected.
There are two recent methodological extensions to the GDN approach. The first is the Fused Sparse Autoencoder and Graph Net [31], which extends GDN by augmenting the prediction-based loss with a reconstruction-based term arising from the output of a sparse autoencoder, with a further extension that allows the individual sensors to carry multivariate data. The second is a probabilistic (latent variable) extension trained using variational inference [32]. Since these were published contemporaneously with the present research, and only the latter provided accompanying research code, these approaches were not considered in this paper. Future extensions of this work could incorporate the above methodological extensions, as well as build on the existing software package.
Other applications could consider the separation of river events from technical anomalies. In terms of interpretability, since the prediction model aggregates feature data from neighbouring sensors, anomalous data can affect the prediction of any other sensor within the neighbourhood. Therefore, large error scores originating from one anomalous sensor can propagate through the entire neighbourhood, impairing the ability to attribute an anomaly to a particular sensor. The extensive body of literature on signal processing for source identification has the potential to inspire future solutions to this issue [33, 34].
In summary, this work extends and examines the practicality of the GDN approach when applied to an environmental monitoring application of major international concern, with complex spatial and temporal interdependencies. Successfully addressing the challenge of anomaly detection in such settings can facilitate the wider adoption of in-situ sensors and could revolutionise the monitoring and management of air, soil, and water.
## Conclusions
This work studied the application of Graph Deviation Network (GDN) based approaches to anomaly detection in the challenging setting of river network data, which often features sensors that generate high-dimensional data with complex spatio-temporal relationships. We introduced alternative detection criteria for the model (GDN+/GDN++), and their practicality was explored on both real and simulated benchmark data. The findings indicated that GDN and its variants were effective in correctly (and conservatively) identifying anomalies. Benchmark data were generated via an approach also introduced in this paper, along with open-source software, and may prove useful in the development and testing of other anomaly-detection methods. In short, we found that graph neural network based approaches to anomaly detection offer a flexible framework, able to capture and model non-standard, highly dynamic, complex relationships over space and time, with the ability to flag a variety of anomaly types. However, the task of anomaly detection on river network sensor data remains a considerable challenge.
## Author contributions
KB; Investigation, Formal Analysis, Methodology, Software, Validation, Visualisation, Writing - Original Draft Preparation. RS; Methodology, Supervision, Writing - Review & Editing. KM; Funding Acquisition, Conceptualisation, Supervision, Writing - Review & Editing. ES; Data Curation, Visualisation, Writing - Review & Editing.
## Grant information
This work was supported by the Australian Research Council (ARC) Linkage Project (LP180101151) titled "Revolutionising water-quality monitoring in the information age". Case study data were provided by the Department of Environment and Science, Queensland, and are available as part of the Python package gnnad.
## Acknowledgements
Lukasz Mentel; Software. Cameron Roberts; Data Curation. James Mcgree; Writing - Review & Editing.
|
2307.11093 | Multichannel Nonlinear Equalization in Coherent WDM Systems based on
Bi-directional Recurrent Neural Networks | Kerr nonlinearity in the form of self- and cross-phase modulation imposes a
fundamental limitation to the capacity of wavelength division multiplexed (WDM)
optical communication systems. Digital back-propagation (DBP), that requires
solving the inverse-propagating nonlinear Schr\"odinger equation (NLSE), is a
widely adopted technique for the mitigation of impairments induced by Kerr
nonlinearity. However, multi-channel DBP is too complex to be implemented
commercially in WDM systems. Recurrent neural networks (RNNs) have been
recently exploited for nonlinear signal processing in the context of optical
communications. In this work, we propose multi-channel equalization through a
bidirectional vanilla recurrent neural network (bi-VRNN) in order to improve
the performance of the single-channel bi-VRNN algorithm in the transmission of
WDM M-QAM signals. We compare the proposed digital algorithm to full-field DBP
and to the single channel bi-RNN in order to reveal its merits with respect to
both performance and complexity. We finally provide experimental verification
through a QPSK metro link, showcasing over 2.5 dB optical signal-to-noise ratio
(OSNR) gain and up to 43% complexity reduction with respect to the
single-channel RNN and the DBP. | Stavros Deligiannidis, Kyle R. H. Bottrill, Kostas Sozos, Charis Mesaritakis, Periklis Petropoulos, Adonis Bogris | 2023-06-24T09:43:21Z | http://arxiv.org/abs/2307.11093v2 | Multichannel Nonlinear Equalization in Coherent WDM Systems based on Bi-directional Recurrent Neural Networks
###### Abstract
Kerr nonlinearity in the form of self- and cross-phase modulation imposes a fundamental limitation to the capacity of wavelength division multiplexed (WDM) optical communication systems. Digital back-propagation (DBP), that requires solving the inverse-propagating nonlinear Schrodinger equation (NLSE), is a widely adopted technique for the mitigation of impairments induced by Kerr nonlinearity. However, multi-channel DBP is too complex to be implemented commercially in WDM systems. Recurrent neural networks (RNNs) have been recently exploited for nonlinear signal processing in the context of optical communications. In this work, we propose multi-channel equalization through a bidirectional vanilla recurrent neural network (bi-VRNN) in order to improve the performance of the single-channel bi-VRNN algorithm in the transmission of WDM M-QAM signals. We compare the proposed digital algorithm to full-field DBP and to the single channel bi-RNN in order to reveal its merits with respect to both performance and complexity. We finally provide experimental verification through a QPSK metro link, showcasing over 2.5 dB optical signal-to-noise ratio (OSNR) gain and up to 43% complexity reduction with respect to the single-channel RNN and the DBP.
Optical Fiber Communication, Nonlinear Signal Processing, Recurrent Neural Networks, Cross-Phase Modulation, Coherent Communication
## I Introduction
Communication rates exploiting the capacity of optical fiber have shown remarkable exponential growth for more than 30 years, with the latest deployed systems operating even within a decibel of Shannon's limit [1, 2]. This pace is attributed to the evolution of optoelectronic devices, which led to the adoption of coherent technology operating at very high bandwidth per channel, and to advanced digital signal processing (DSP) in the form of probabilistic shaping [3], forward error correction codes [4] and digital compensation of transmission impairments [5]. Periodically amplified, wavelength division multiplexed (WDM) systems covering medium to long distances are characterized by distributed nonlinear effects, the most dominant of which arises from the intensity-dependent refractive index (Kerr effect). The Kerr effect manifests itself as self-phase modulation (SPM) [6], cross-phase modulation (XPM) [7] or four-wave mixing (FWM) [8] and is responsible for intra-channel distortions and inter-channel crosstalk. Intra-channel nonlinearity can be ideally compensated for either by digital back-propagation (DBP) using the split-step Fourier method [9] or through the nonlinear Fourier transform [10].
Assuming ideal compensation of all intra-channel effects through DBP, XPM appears to be the principal source of impairments that fundamentally limits the information capacity of an optical communication system. The various techniques for nonlinear equalization treat XPM as a time-varying intersymbol interference (ISI) process and concentrate mainly on the zeroth-order XPM contribution [11, 12], known as phase and polarization-rotation noise. In this way, by tracking the temporal changes of the ISI, linear or turbo equalizers [11, 13] can partially mitigate its effects. On the other hand, higher-order XPM contributions cannot be efficiently equalized unless very complex multi-channel full-field DBP [14] or Volterra equalizers [15] are employed. However, these multi-channel equalizers significantly increase the power consumption of the DSP and are thus considered inapplicable. The coupled-channel DBP [16] aims to reduce the complexity of full-field DBP by separately representing the WDM channels and explicitly accounting for their interaction during propagation. However, even with this simplification and the consequent performance sacrifice, the complexity issue remains. Many works have concentrated on complexity savings by reducing the computational steps per span in DBP [17, 18], but this may be the case only for single-channel DBP, while the multi-channel implementations remain prohibitively complex. Maximum a posteriori [19] and maximum likelihood [20] decoding can also enhance the receiver performance, though they sacrifice simplicity. In recent years, machine learning (ML) techniques have penetrated the area of nonlinear channel equalization with their ability to track intersymbol dependencies [21, 22, 23]. Amongst other models, bidirectional recurrent neural networks (bi-RNNs) have demonstrated efficient mitigation performance at moderate complexity [24, 25]. Furthermore, ML can also be used to improve the DBP method by
introducing a learned digital backpropagation technique that relies on parameterizing the split-step Fourier method [26].
In this work, we propose merging multi-channel equalization with ML in a multi-channel bidirectional vanilla RNN (bi-VRNN) equalizer. The equalizer takes advantage of multi-input multi-output (MIMO) processing and decoding of the WDM channels. In this way, it exploits useful information from adjacent channels for better tracking of inter-channel dependencies. Although MIMO equalization based on deep learning has been proposed in the past [27], that previous work aimed at compensating mainly for FWM in orthogonal frequency division multiplexed signals, and demonstrated only marginal complexity improvement with respect to DBP. Furthermore, its complexity increased proportionally with the parallel processing of adjacent channels. Our proposition aims both at performance gains, through the increase of useful information from adjacent WDM channels, and at complexity reduction when a multi-channel RNN equalizer is used instead of a single-channel one. We prove through extensive numerical simulations and experimental results that the proposed approach outperforms both typical multi-channel equalization, in the form of adaptive equalizers and DBP, and the single-channel bi-RNN, offering bit-error-rate (BER) improvement and/or optical signal-to-noise ratio (OSNR) gain. Moreover, we keep the complexity within competitive levels, demonstrating more than 43% complexity reduction compared to the single-channel bi-RNN and single-channel DBP.
The paper is organized as follows: In section II, we describe the architecture and operation principles of the RNN and particularly introduce the multi-channel equalization approach. In section III, we present simulation results that demonstrate the multichannel RNN equalizers' BER performance and show the advantages over the single channel RNN and DBP. In Section IV, we verify the improvement of multichannel RNN performance based on an experimental QPSK transmission system. Section V discusses further the aspects of computational complexity. Finally, we conclude the paper in Section VI.
## II Model Description
### _Multi-channel bi-VRNN_
In this work, the simulations consider transmission in typical single-mode fibers (SMFs) in the C-band. Since the optical fiber is a nonlinear dispersive channel, we also include simulations where the dispersion is lower than that of typical SMFs. Although this case is not frequently met in commercially deployed links and could be served only by non-zero dispersion-shifted fibers, it is scientifically important to identify how dispersion values affect multi-channel equalization performance, especially considering the prospect of expanding long-haul transmission systems to other bands, such as the O-band, where dispersion is lower than in the C-band. Performance is evaluated by means of the BER, which is calculated through error counting at the receiver. The schematic of our model transmission system, depicted in Fig. 1, corresponds to 1200 km transmission, simulated by integrating the nonlinear Schrödinger equation (NLSE), considering a fiber with attenuation coefficient \(a=0.2\) dB/km, second order dispersion coefficient \(\beta_{2}=-20\) ps\({}^{2}\)/km and Kerr nonlinear parameter \(\gamma=1.3\) W\({}^{-1}\)km\({}^{-1}\). Fiber propagation is modelled with the Manakov equations using the split-step Fourier method [28]. We consider 9-channel WDM transmission on a 75-GHz wavelength grid and dual-polarization 16-QAM modulation at 64 Gbaud. Lumped periodic amplification is modelled with a noise figure of 5 dB and a span length of 50 km. Prior to any post-processing or demodulation, we perform chromatic dispersion compensation using an ideal frequency-domain equalizer (FDE) in the case of the RNN equalizer. Polarization demultiplexing and carrier synchronization are carried out ideally as well, so as to focus solely on the nonlinear impairments. The simulations are conducted with root-raised-cosine (RRC) pulse shaping with a roll-off factor of 0.1 and matched filtering at the receiver. Finally, one sample per symbol (sps) is sent to the RNN processing unit.
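The propagation itself follows the standard symmetric split-step Fourier scheme. The sketch below is a minimal scalar-NLSE version for one 50 km amplified span; it only illustrates the numerical scheme (the simulations here propagate the dual-polarization Manakov equations), and the overall signs depend on the FFT/propagation conventions adopted.

```python
import numpy as np

def ssfm_span(a, dt, span_km=50.0, dz_km=0.1,
              alpha_db_km=0.2, beta2_ps2_km=-20.0, gamma=1.3):
    """Symmetric split-step Fourier propagation of one amplified span.
    a: complex field samples (sqrt(W)); dt: sample spacing (s);
    beta2 in ps^2/km; gamma in 1/(W*km)."""
    beta2 = beta2_ps2_km * 1e-24                   # ps^2/km -> s^2/km
    alpha = alpha_db_km * np.log(10) / 10          # dB/km -> 1/km (power)
    w = 2 * np.pi * np.fft.fftfreq(a.size, d=dt)   # angular frequency grid (rad/s)
    half_lin = np.exp((1j * beta2 / 2 * w**2 - alpha / 2) * dz_km / 2)
    for _ in range(int(round(span_km / dz_km))):
        a = np.fft.ifft(half_lin * np.fft.fft(a))          # half linear step
        a = a * np.exp(1j * gamma * np.abs(a)**2 * dz_km)  # full nonlinear step
        a = np.fft.ifft(half_lin * np.fft.fft(a))          # half linear step
    return a * 10 ** (alpha_db_km * span_km / 20)          # lumped EDFA gain
```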
The layered diagram of the bi-VRNN equalizer is shown in Fig. 2 for three co-processed WDM channels. We use both polarization components of 3 (or more) adjacent channels as input to our neural network in order to apply joint equalization of multiple WDM channels. In [29], we had used only the two polarization components of the channel under test as the input.
Fig. 1: The simulated 9-channel WDM transmission system containing a multi-channel coherent receiver. The DSP stage applies the two different equalizing techniques, DBP and RNN. In the case of the RNN, we first perform dispersion compensation using the ideal frequency-domain equalizer (FDE).
This provides less information to the equalization process. The input \(x\) to the bi-VRNN is an \(N\times L\times 12\) array representing the multi-channel 16-QAM signal, where \(N\) denotes the total number of input symbols and \(L=2k+1\) stands for the length of the input. The input \(x\) has a total of 12 features: the I and Q components of both polarizations of each of the 3 channels. At time \(t\), the input can be expressed as
\[x_{t}^{L}=\left[x_{t-k},\ldots,x_{t-1},x_{t},x_{t+1},\ldots,x_{t+k}\right] \tag{1}\]
denoting that \(k\) preceding and \(k\) succeeding symbols of the current symbol \(x_{t}\) are used to track the inter-symbol dependencies. The length \(L\) depends on the foreseen channel memory, which relates to the accumulated dispersion in the SMF and the bandwidth limitation of the transceiver. The input \(x\) goes through the bi-VRNN with \(H\) hidden units, which processes the input in both the forward and backward directions; each hidden output \(h_{t}\) is given by:
\[h_{t}=\tanh(W\cdot h_{t-1}\,+\,U\cdot x_{t}) \tag{2}\]
where the \(W\) and \(U\) matrices contain the connection weights; \(x_{t}\), \(h_{t}\), \(h_{t-1}\) are the input, hidden output, and previous hidden output, respectively; and _tanh_ denotes the hyperbolic tangent activation function. The output of the bi-VRNN layer, i.e., \(h_{t}\), is then sent in parallel to six (6) Fully Connected Layers (FCLs), which correspond to both polarizations of each of the 3 jointly detected channels. Each FCL consists of two neurons, representing the \(I\) and \(Q\) components of 16-QAM, and its output \(y\) is an \(N\times L\times 2\) array. The FCL adopts the regression approach, which significantly reduces the implementation complexity without sacrificing BER performance compared to the symbol-wise classification approach [29]. The bi-VRNN is trained using the _many-to-many_ approach, which simultaneously produces the same number of symbols \(y\) as those at the input \(x\) and is capable of extracting multiple symbols concurrently [29]. The bi-VRNN equalizer was built, trained and evaluated in Keras with a TensorFlow 2.10 GPU backend. Mean square error (MSE) and Adam were chosen as the loss function and optimizer, respectively. We considered 100,000 symbols for training, 50,000 symbols for validation, and 100,000 symbols for testing with unknown data. The training stage was executed with batches of 512 words of symbols and 1000 epochs for the single-channel case or 2000 for the multi-channel cases. We note that the bi-VRNN model is adopted in this work because it offers similar performance to other RNN models (e.g., LSTM and GRU) while exhibiting lower computational complexity [29].
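A minimal Keras sketch of this multi-channel bi-VRNN, consistent with our reading of Fig. 2, is shown below; the layer and head names are illustrative, and any hyperparameter not quoted in the text is a placeholder.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multichannel_bivrnn(L=51, n_channels=3, hidden_units=18):
    """Bidirectional vanilla RNN with one regression head (I, Q) per
    polarization of each jointly equalized channel."""
    n_feat = 4 * n_channels                        # (I, Q) x 2 polarizations per channel
    x = layers.Input(shape=(L, n_feat))
    h = layers.Bidirectional(layers.SimpleRNN(
        hidden_units, activation="tanh", return_sequences=True))(x)
    outputs = [layers.Dense(2, name=f"ch{c}_pol{p}")(h)    # N x L x 2 per head
               for c in range(n_channels) for p in range(2)]
    model = Model(x, outputs)
    model.compile(optimizer="adam", loss="mse")    # MSE + Adam, as in the text
    return model

model = build_multichannel_bivrnn()
```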
### _Digital Backpropagation and linear regression_
As a well-established benchmark for the proposed RNN-based equalization system, we employ multi-channel full-field DBP along with a MIMO adaptive equalizer. It is known that an adaptive equalizer is necessary after the FDE in order to compensate for residual linear effects, while [30] shows that, even after DBP, an adaptive MIMO algorithm offers a slight performance improvement. Full-field DBP is considered impractical for real-life implementations, as it requires a receiver covering the bandwidth of the entire multi-channel spectrum [14].
Fig. 3: BER as a function of launched optical power for single-channel and multi-channel processing: (a) RNN vs. DBP in the C-band; (b) RNN performance at two dispersion regimes. Only the central channel's BER performance is plotted; adjacent channels exhibit similar performance.
Fig. 2: Diagram of the Bidirectional Vanilla-RNN equalizer in the case of joint training and equalization of 3 WDM channels
However, this is the optimal benchmarking scenario for the simulation results, and any other DBP variant would be inferior performance-wise. In our case, to adjust the number of channels in the back-propagated field, we simply select the channels of interest through an optical filtering process. We then sample the entire received field at 4 sps, applying DBP with 20 steps per span. While 2 sps and 2 or 4 steps per span would be more realistic, we use these impractical parameters in order to compare the bi-RNN approach with an almost ideal equalizer that is channel-aware and greedy in terms of bandwidth and simulation steps. Following the DBP, the channels are separated with digital filters, down-converted to baseband, and downsampled to 1 sps. Matched filtering is applied to each one. Finally, the linear adaptive algorithm with 21 taps is implemented separately for the real and imaginary parts of each channel.
## III Simulation Results
### _Bi-VRNN versus DBP results_
In Fig. 3, the equalization results are presented in the form of BER values as a function of the launched optical power, after having trained the RNN model under different conditions. We consider 9-channel dense WDM transmission with a 75 GHz wavelength grid over 1200 km of fiber, for a typical SMF in the C-band (second order dispersion parameter \(\beta_{2}=-20\) ps\({}^{2}\)/km, Fig. 3a) and a low-dispersion fiber (\(\beta_{2}=-2\) ps\({}^{2}\)/km, Fig. 3b). We compare the BER improvement offered by the multi-channel detection scheme for 3 and 5 neighboring channels versus the application of the DBP equalization technique using a corresponding number of channels. In Fig. 3 only the BER of the central channel is depicted; similar behavior is recorded for all co-processed channels. Although the low-dispersion channel memory is much smaller than that of the typical C-band regime, for the sake of equal comparison the length of the input word (\(L\)) is set to 51 for both cases.
The utilization of multi-channel equalization through joint multi-channel training of the RNN model is observed to counteract the inter-channel nonlinear crosstalk caused by the XPM and FWM effects, as the bi-VRNN exhibits better performance than in single-channel detection. The results show that for the C-band, the minimum BER for single-channel detection is 1.3x10\({}^{-2}\) @ -4 dBm, while multi-channel processing yields a minimum BER of 8.6x10\({}^{-3}\) @ -4 dBm and 7.3x10\({}^{-3}\) @ -3.5 dBm for 3 or 5 co-processed channels, respectively, as illustrated in Fig. 3a. Compared to the DBP equalizers, which demonstrate improved performance with an increasing number of channels (minimum BER of 1.9x10\({}^{-2}\) @ -6 dBm, 1.1x10\({}^{-2}\) @ -6 dBm and 6.9x10\({}^{-3}\) @ -4 dBm for 1, 3 and 5 channels, respectively), the joint training and evaluation of 5 adjacent channels using the bi-VRNN performs slightly better than the use of 3 channels in DBP, which is operated almost ideally here in terms of samples per symbol and transmission steps, as discussed in Section II. Therefore, the multi-channel bi-RNN equalizer achieves equalization performance similar to that offered by a greedy multi-channel DBP implementation.
While multi-channel equalization shows a fair performance improvement in the high-dispersion regime, in the low-dispersion regime the contribution of neighboring channels to the equalization performance is less beneficial, as shown in Fig. 3b. Although the impact of adjacent channels grows as the dispersion decreases, the coherence time of the channel also decreases, resulting in inter-channel crosstalk with high-frequency content comparable to the symbol rate. In such scenarios, the RNN model is not able to learn the inter-channel interference and incorporate it into the equalization process. This finding is similar to what was observed in [29], where the BER performance of the bi-RNN was investigated at different dispersion regimes for single-channel equalization: it became evident that the nonlinear equalization performance of the bi-RNN was superior as the channel memory increased due to dispersion. Consequently, training the model with the neighboring channels in a low-dispersion environment
Fig. 4: BER as a function of launched optical power for single-channel and multi-channel processing for bi-RNN and a simplified Linear Regression equalizer (LR).
Fig. 5: BER as a function of a) bi-VRNN’s hidden units and b) training epochs
compared to the C-band scenario yields only a marginal improvement in terms of BER. Specifically, a minimum BER of 1.5x10\({}^{-2}\) @ -4 dBm, 1.3x10\({}^{-2}\) @ -4 dBm and 1.2x10\({}^{-2}\) @ -4 dBm was observed for 1, 3 and 5 channels, respectively.
We also test the case of purely linear equalization by means of linear regression, indicatively in the C-band scenario. We consider the network of Fig. 2, completely removing the bi-VRNN layer and applying linear regression directly to the FDE output. The effect of the linear algorithm applied to 1, 3 or 5 channels in a MIMO configuration is negligible, as can be seen in Fig. 4, proving that the bi-VRNN not only improves the BER performance in single-channel detection, but also offers further improvement if simultaneously trained with the sequences of multiple co-propagating channels. A higher gain is observed when moving from 1 to 3 channels than from 3 to 5 channels in the joint equalization process. This finding is reasonable, as a WDM channel is mostly impaired by its two nearest neighboring channels.
### Dependence of the RNN on hidden units and epochs
In optimizing the equalization capacity of the RNN, expressed by the number of its hidden units (h.u.) \(H\), we confirm that a gradual addition of h.u. is necessary as the number of jointly trained channels increases. From Fig. 5a it can be concluded that 16, 18 and 20 h.u. are required to reach the minimum BER for the single-channel case and the multi-channel cases of 3 and 5 channels, respectively. Enlarging the network by further increasing the number of h.u. does not improve the BER performance any further. Similarly, training the multi-channel network requires a larger number of epochs compared to the single-channel scenario (Fig. 5b). This can be explained by the fact that the additional channels increase the information at the input, and therefore a larger number of weights must be utilized to perform satisfactory equalization. This is the reason why we trained the single-channel scenario with 1000 epochs and the multi-channel scenarios with 2000 epochs. It must be noted that the slight increase in h.u. is not a severe shortcoming in terms of complexity, as will be shown in Section V. Moreover, for static point-to-point WDM transmission systems where co-propagating channels remain stable over time, the substantial increase in training complexity is not a major issue, as training is not expected to take place very frequently in a real-life scenario [29].
## IV Experimental Results
### Experimental Setup
In this section we present experimental results which confirm the numerically derived findings. The experimental transmission system depicted in Fig. 6 consists of three independent transmitters, each composed of a \(\sim\)25 kHz linewidth laser followed by an IQ-modulator that was itself driven by an arbitrary waveform generator (AWG). The end result is three channels, CH1, CH2 and CH3, sitting at frequencies of 193.5 THz, 193.6 THz and 193.7 THz, respectively. Each channel delivers 2\({}^{16}\) symbols of uncorrelated, random data using single-polarization, 22.5 Gbit/s QPSK signaling. We carefully selected the process for generating unrepeated pseudorandom sequences, using rng('shuffle') and the very long period (\(2^{19937}-1\)) of the Mersenne Twister generator (as in the simulations), so as to prevent the bi-RNN from predicting the next symbol of the pseudorandom sequence and overestimating the nonlinearity mitigation results. After signal generation, the three channels are co-polarized with polarization controllers before being multiplexed using an arrayed waveguide grating and amplified with an erbium-doped fiber amplifier (EDFA). An attenuator allows control of the power launched into the transmission link. Transmission is carried out over a part of the UK's National Dark Fibre Facility (NDFF) and comprises two 91.9 km spans of field-deployed SMF28e+ fiber on the route Southampton-Reading-Southampton. The signals are amplified with an EDFA mid-link such that the total power launched into the first and the second span of SMF28e+ is the same. The receiver contains a variable optical attenuator to facilitate receiver characterization, a demultiplexing stage to select the channel under measurement, an EDFA to optically pre-amplify the signal, and an optical bandpass filter (OBF) to reject out-of-band noise.
Fig. 6: The experimental transmission system
Fig. 7: Brute-force technique for temporally aligning the three independently captured channels (20 dBm launched power, -10 dBm received power). When temporal alignment was achieved, joint equalization improved the BER from an average of 3.6x10\({}^{-2}\) to a sharp dip of 2.5x10\({}^{-2}\).
The signal is finally captured using a coherent receiver of typical construction, consisting of an optical hybrid followed by 4 balanced photoreceivers whose outputs are measured by a 40 GSa/s digital storage oscilloscope. The local oscillator and the carrier for each channel were sourced from the same original lasers. Once the waveforms had been captured (one at a time) for a range of powers launched into the link, they could be processed offline.
### _DSP Processing_
Prior to any nonlinear post-processing or demodulation, we perform polarization alignment, bulk chromatic dispersion compensation using an FDE, and adaptive equalization using the constant modulus algorithm (CMA) with 15 taps. We then perform carrier synchronization, clock extraction, and finally resampling. Eventually, one sample per symbol is sent to the bi-RNN processing unit. The only difference in the case of DBP post-processing is the removal of the FDE block, which is instead embedded in each step of the DBP process. The DBP is performed with 2 sps and 20 steps per span, before the CMA.
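As an illustration of the bulk CD compensation step, a minimal frequency-domain equalizer is sketched below. The overall sign of the phase depends on the FFT and propagation conventions in use, so this should be read as a template rather than the exact implementation used here.

```python
import numpy as np

def cd_compensate(x, fs, beta2_s2_per_km, length_km):
    """Remove accumulated chromatic dispersion by applying the inverse of
    the fiber's all-pass quadratic-phase response.

    x : complex baseband samples;  fs : sampling rate (Hz)
    beta2_s2_per_km : dispersion in s^2/km;  length_km : link length
    """
    w = 2 * np.pi * np.fft.fftfreq(x.size, d=1 / fs)   # angular frequency (rad/s)
    h_inv = np.exp(-1j * (beta2_s2_per_km / 2) * w**2 * length_km)
    return np.fft.ifft(h_inv * np.fft.fft(x))
```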
We use the 3 neighboring single-polarization channels as input to our neural network in order to apply joint equalization. The input \(x\) is an \(N\times L\times 6\) array of QPSK symbols, where \(N\) denotes the total number of input symbols and \(L=2k+1\) stands for the length of the input, mirroring the numerical analysis with the appropriate modification from dual-polarization 16-QAM to single-polarization QPSK. The output of the bi-VRNN layer is then sent in parallel to 3 Fully Connected Layers (FCLs), corresponding to each of the 3 channels. Each FCL performs regression and consists of two neurons, representing the I and Q components of QPSK. We considered 50,000 symbols for training, 5,000 symbols for validation, and 10,000 symbols for testing with unknown data.
The experimental process provided data for a wide range of launched and received power values per channel. These data enable the direct derivation of BER performance for each channel when only linear equalization is performed, or after single-channel bi-RNN equalization. Multi-channel equalization is not a trivial task to carry out experimentally, since the data to be processed offline by the bi-RNN algorithm were collected asynchronously from the oscilloscope. In order to temporally align the three independently captured channels, we had to use a time-consuming brute-force technique to identify the exact timing by performing joint equalization, initially for two out of three channels, for all possible combinations of their relative temporal positions.
Fig. 8: BER as a function of received optical power for single- and multi-channel processing when the transmitted power per channel is 16 dBm (a), (c) and 20 dBm (b), (d), respectively. In (a) and (b) the performance of all transmitted and detected channels is depicted, whilst (c) and (d) depict the performance of the central channel for RNN, DBP, and without any nonlinear equalization (w/o NLE).
Fig. 9: Number of multiplications vs hidden units for single and multi-channel equalization.
For instance, as depicted in the example of Fig. 7, by focusing on the inter-channel effects between ch2 and ch3 and applying joint bi-VRNN equalization for different time shifts, we can identify the point where the BER is evidently improved as a result of a better temporal alignment of the two channels in terms of the inter-channel effects recognized by the RNN model. This process had to be repeated several times for fine-tuning (in Fig. 7 the time-shift step is 100 symbols, while at a second stage shifts of up to 10 symbols were carried out for precise synchronization) and for all the operating conditions of the optical power at the input and output. Through this brute-force method it was proven that there is a relative temporal position among all channels which results in a BER improvement for multi-channel processing (Fig. 7), thus experimentally confirming the capability of the bi-RNN model to track inter-channel dependencies. Indicatively, at a launch power of 20 dBm and a received power of -10 dBm, we observe a noticeable decrease in the BER, from an average value of 3.6x10\({}^{-2}\) for temporally unrelated channels to 2.5x10\({}^{-2}\). It must be noted that in a practical system the detection of co-propagating channels will be synchronous, and therefore the data will be sent directly to the bi-RNN for training and inference without the need for temporal alignment.
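A sketch of this coarse alignment search is given below; the `equalize_and_ber` callback (train the joint bi-VRNN on the candidate pair and return the test BER) is assumed to be defined elsewhere, and the names and shift ranges are illustrative.

```python
import numpy as np

def coarse_alignment_search(ref_syms, other_syms, equalize_and_ber,
                            max_shift=2000, step=100):
    """Slide one captured channel against the reference in coarse steps,
    jointly equalize each candidate pair, and keep the shift giving the
    lowest BER (the sharp dip in Fig. 7). Refine around the winner with
    a smaller step afterwards."""
    ber = {s: equalize_and_ber(ref_syms, np.roll(other_syms, s, axis=0))
           for s in range(-max_shift, max_shift + 1, step)}
    best_shift = min(ber, key=ber.get)
    return best_shift, ber
```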
### _Multi-channel versus single-channel bi-VRNN and DBP_
In Fig. 8, one can study the performance of the bi-VRNN in single-channel and multi-channel equalization based on experimental data. The joint equalization exhibits a BER improvement of up to an order of magnitude compared to single-channel processing. For instance, considering ch2 (central) at a launched power of 16 dBm (Fig. 8a), the BER improves from 1x10\({}^{-3}\) to 4x10\({}^{-4}\). Even at the operating point of stronger nonlinear interaction (20 dBm launched power, Fig. 8b), the joint equalizer can still provide a fair improvement in BER performance at all received power values. In Figs. 8a and 8b we notice that all 3 channels benefit almost equally from the joint processing. In the scenario of a launched power of 20 dBm and a received power of -10 dBm, we notice a reduction in the BER from 2.8x10\({}^{-2}\) to 1.8x10\({}^{-2}\) for Ch1 and from 2.7x10\({}^{-2}\) to 8x10\({}^{-3}\) for Ch3, upon applying single-channel and multi-channel equalization, respectively.
The fact that the single-channel RNN performs very close to a DBP with many steps per span (Figs. 8c-d) indicates that it constitutes a near-optimum nonlinear equalizer of intra-channel effects. The further improvement that the 3-channel RNN offers can therefore be attributed to efficient XPM mitigation rather than to residual SPM compensation. In Fig. 8d, for the 20 dBm launched power, the OSNR gain compared to the single-channel bi-RNN is almost 7.5 dB @ BER=2x10\({}^{-2}\), whilst for the 16 dBm launched power (Fig. 8c) the OSNR gain is close to 2.5 dB @ BER=10\({}^{-3}\), indicating that the proposed equalizer performs better in highly nonlinear environments.
## V Complexity Analysis
In order to evaluate the computational complexity of multi-channel equalization, we calculate the number of multiplications. The inference complexity, expressed as the number of multiplications in the many-to-many approach, is given by [28]:
\[\mathrm{bi\text{-}VRNN}_{\mathrm{mult}}=2(FH+H^{2})L+2HLy \tag{3}\]
where \(F\) and \(y\) are the numbers of input features and outputs, respectively. The parameters \(F\) and \(y\) are equal to \(2m\), since we have 2 inputs and outputs (I/Q components) for each channel, where \(m\) is the number of jointly equalized channels. We use \(H=16\), 18 and 20 h.u. for single-channel detection and the joint detection of 3 and 5 neighbouring channels, respectively. Based on the fact that at least 80% of neighbouring symbols exhibit a similar BER in the many-to-many approach [29], and since we have used a training word length of 51 symbols, we utilize the 41 central ones to calculate the number of multiplications per symbol. In Fig. 9 we plot the total number of inference multiplications for the single- and multi-channel cases versus the number of hidden units of the bi-VRNN. One can see that, although the total number of multiplications increases with the addition of more input/output channels (features), the number of multiplications per symbol decreases drastically. We calculate 478 multiplications per detected symbol (mps) for the single channel with 16 h.u., 313 mps for 3 extracted channels with 18 h.u. (34.38% complexity reduction compared to single channel), and 299 mps for 5 extracted channels with 20 h.u. (37.5% complexity reduction compared to single channel). In the scenarios we have considered, the complexity reduction of multi-channel equalization over its single-channel counterpart can reach up to 58.7%, in the case of 22 h.u. with 821 mps for 1-channel relative to 339 mps for 5-channel detection (Fig. 9). The interpretation of this reduction is found in (3), which reveals that most multiplications relate to the number of h.u. In fact, by taking advantage of the structure of the network and adding a reasonable overhead (additional input and output channels), we manage to reduce the DSP processing per symbol whilst significantly improving the BER. Figures 4 and 5 demonstrate that the use of more than 3 channels contributes less to the improvement of the BER. Although it is out of the scope of this study, the hardware implementation of RNN equalizers in optical communication systems is a challenge that already concerns the research community [31]. We believe that the DSP implementation of a three-channel bi-RNN equalizer, requiring a rather reasonable number of h.u., is feasible. This will be the subject of future work.
Concerning the computational complexity of the RNN in the experimental setup, we calculate 796 mps for the single extracted channel with 16 h.u. and 448 mps for the 3 extracted channels with 18 h.u., equivalent to a 43.75% complexity reduction compared to the single-channel RNN.
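The quoted counts follow directly from (3). The sketch below reproduces them under one reading of the accounting, namely that the 41 central symbols of each 51-symbol word are counted and that every extracted I/Q stream (one per polarization per channel) is a detected symbol; with these assumptions the function returns the values quoted above.

```python
def bivrnn_mults_per_symbol(H, m, L=51, useful=41, dual_pol=True):
    """Multiplications per detected symbol from Eq. (3)."""
    F = y = (4 if dual_pol else 2) * m     # I/Q (x2 polarizations) per channel
    total = 2 * (F * H + H**2) * L + 2 * H * L * y
    streams = y // 2                       # I/Q pairs recovered per time step
    return total / (useful * streams)

# simulation (dual-pol 16-QAM): ~478, ~313, ~299 mps
print(bivrnn_mults_per_symbol(16, 1), bivrnn_mults_per_symbol(18, 3),
      bivrnn_mults_per_symbol(20, 5))
# experiment (single-pol QPSK): ~796, ~448 mps
print(bivrnn_mults_per_symbol(16, 1, dual_pol=False),
      bivrnn_mults_per_symbol(18, 3, dual_pol=False))
```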
For the DBP algorithm the complexity per bit can be evaluated according to [32]:
\[C_{\mathrm{DBP}}=4N_{\mathrm{steps}}N_{\mathrm{spans}}\left[\frac{N(\log_{2}N+1)n_{s}}{(N-N_{d}+1)\log_{2}M}+n_{s}\right] \tag{4}\]
where \(N_{\mathrm{spans}}\) is the number of spans, \(N_{\mathrm{steps}}\) is the number of steps per span, \(N\) is the FFT size (which depends on the accumulated dispersion per span), \(n_{s}\) is the number of sps, \(M\) is the constellation order, and \(N_{d}=n_{s}\upsilon/T\), where \(\upsilon\) corresponds to the duration of the dispersive channel impulse response and \(T\) is the symbol duration. We multiply the complexity by 4 since DBP operates on complex numbers and one complex multiplication is equivalent to four real multiplications.
Considering \(N=256\) and \(N_{d}=30\), with 20 steps per span for the 2 spans, we calculate 4528 mps. Even if we reduce the steps per span to 4 and the FFT size to 128, significantly sacrificing performance, the DBP would still require at least 918 mps for a single channel.
In order to achieve a fair comparison between the bi-RNN and the DBP approach, it is imperative to add to the former the 81 mps required by the FDE, according to:
\[C_{\mathrm{FDE}}=4\left[\frac{N(\log_{2}N+1)n_{s}}{N-N_{d}+1}\right] \tag{5}\]
Taking the FDE into account, the 3-channel bi-RNN approach with 18 h.u. (requiring 529 mps) offers over 42.37% complexity reduction per channel with respect to a simplified DBP implementation (918 mps), and an 88.32% reduction compared to the greedy 20-steps-per-span DBP (4528 mps) considered in this paper. It is important to highlight that the computational complexity of the bi-RNN is not affected by the transmission length, whereas the complexity of DBP escalates significantly as the number of spans increases. Additionally, both the DBP and the FDE require a minimum of 2 sps, in contrast to the bi-RNN, which demonstrates satisfactory performance with only 1 sps.
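These figures can be checked with a few lines of arithmetic; the sketch below evaluates (5) with the stated parameters (\(N=256\), \(N_{d}=30\), \(n_{s}=2\)) and reproduces the 81 mps FDE cost and the quoted reductions.

```python
import math

def fde_mults_per_symbol(N=256, Nd=30, ns=2):
    """Eq. (5): real multiplications per symbol of the frequency-domain equalizer."""
    return 4 * N * (math.log2(N) + 1) * ns / (N - Nd + 1)

fde = fde_mults_per_symbol()          # ~81 mps
rnn = 448 + round(fde)                # 3-channel bi-RNN + FDE = 529 mps
print(round(fde), rnn)
print(1 - rnn / 918, 1 - rnn / 4528)  # ~42.4% and ~88.3% reductions
```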
## VI Conclusion
This paper investigates a bi-RNN based equalizer for the joint training and equalization of adjacent channels of a coherent WDM system. We provided simulation results considering a polarization-multiplexed 16-QAM transmitter operating at 64 GBd, and experimental results with a WDM QPSK coherent system operating at 22.5 Gb/s. The simulation results revealed that multi-channel nonlinear equalization by means of a bi-RNN is meaningful and can provide a clear performance advantage, especially in high-dispersion regimes. The theoretical findings were confirmed by the experimental process incorporating 3-channel equalization. The proposed multi-channel equalization scheme exhibits a BER improvement of up to an order of magnitude compared to a single-channel equalizer, while reducing the computational complexity per symbol by more than 43%. We believe this multi-channel equalizer can offer a practical solution for enabling reach extension in point-to-point digital coherent transmission systems, while requiring only a moderate increase in receiver complexity. The next steps of our research will be directed towards the implementation of a 3-channel bi-RNN equalizer on a field-programmable gate array platform.
|
2303.02518 | Attention-based convolutional neural network for perfusion T2-weighted
MR images preprocessing | Accurate skull-stripping is crucial preprocessing in dynamic susceptibility
contrast-enhanced perfusion magnetic resonance data analysis. The presence of
non-brain tissues impacts the perfusion parameters assessment. In this study,
we propose different integration strategies for the spatial and channel squeeze
and excitation attention mechanism into the baseline U-Net+ResNet neural
network architecture to provide automatic skull-striping i.e., Standard scSE,
scSE-PRE, scSE-POST, and scSE Identity strategies of plugging of scSE block
into the ResNet backbone. We comprehensively investigate the performance of
skull-stripping in T2-star weighted MR images with abnormal brain anatomy. The
comparison that utilizing any of the proposed strategies provides the
robustness of skull-stripping. However, the scSE-POST integration strategy
provides the best result with an average Dice Coefficient of 0.9810. | Svitlana Alkhimova, Oleksii Diumin | 2023-03-04T22:40:59Z | http://arxiv.org/abs/2303.02518v1 | # Attention-based convolutional neural network for perfusion T2-weighted MR images preprocessing
###### Abstract
Accurate skull-stripping is a crucial preprocessing step in dynamic susceptibility contrast-enhanced perfusion magnetic resonance data analysis. The presence of non-brain tissues impacts the assessment of perfusion parameters. In this study, we propose different strategies for integrating the spatial and channel squeeze and excitation attention mechanism into the baseline U-Net+ResNet neural network architecture to provide automatic skull-stripping, i.e., the Standard scSE, scSE-PRE, scSE-POST, and scSE-Identity strategies of plugging the scSE block into the ResNet backbone. We comprehensively investigate the skull-stripping performance in T2*-weighted MR images with abnormal brain anatomy. The comparison shows that utilizing any of the proposed strategies provides robust skull-stripping. However, the scSE-POST integration strategy provides the best result, with an average Dice Coefficient of 0.9810 \(\pm\) 0.006.
**Keywords:** skull-stripping, brain, segmentation, region of interest, deep neural network, dynamic susceptibility contrast perfusion, magnetic resonance imaging.
**Introduction**
Nowadays, one of the most commonly used perfusion techniques is dynamic susceptibility contrast (DSC) MR imaging [1]. It produces ultrafast T2- or T2*-weighted sequences of images that allow perfusion analysis in cases of oncological diseases (i.e., examining in detail vascular permeability, vessel caliber, tumor cell size, and cytoarchitecture [2, 3]) as well as ischemic stroke, neurovascular disease, and neurodegenerative disorders [3, 4].
The procedure of skull-stripping, also known as the segmentation of brain from non-brain tissues or brain extraction, is one of the preprocessing steps in DSC MR data analysis [5]. It is crucial for the accurate assessment of perfusion parameters, since the presence of non-brain tissue pixels in the analyzed images can lead to visual artifacts in perfusion maps and falsely high or falsely low perfusion parameter values [6, 7].
**Manual, semi-automatic, and automatic procedures of skull-stripping**
There are three main approaches to the skull-stripping procedure: manual delineation of the brain region, and semi-automatic or automatic segmentation of brain from non-brain tissues.
In the case of manual delineation of the brain region, the operator must be a highly trained specialist able to identify different anatomical structures and lesions of the brain in MR images. As manual delineation is performed by determining the brain boundaries layer by layer over the whole volume of MR data of the human head, it is a complex and laborious task. It is complicated by several factors, such as low image spatial resolution, the absence of intensity standardization, and obscure boundaries of the brain, especially at border pixels lying near areas with abnormal brain anatomy [8].
Taking into account all of the above, it is desirable to automate the procedure of skull-striping procedure.
In the case of a semi-automatic segmentation of brain from non-brain tissues, the detection of a brain region is assisted by a variety of tools that provides pixel thresholding. An initial threshold value can be provided automatically by the histogram analysis. In most cases, the thresholding results require further correction. The common user interface of such tools is sliders or turning the wheel or holding down the mouse buttons in a predefined mode.
The current automatic skull-striping procedure can be broadly grouped into two groups: intensity-based methods and template-based methods [10, 11].
Intensity-based methods [12-15] provide inaccurate results of segmentation because of overlapping pixel intensities in regions with abnormal brain anatomy and regions which are targeted to be excluded.
Template-based methods suffer from a lack of pre-segmented templates for different age-sex-race-specific patients and different shapes, densities, and locations of the brain lesions [16]. Thus, the proposed methods currently are applicable for the segmentation of healthy subject images [17] or with specific brain lesions [18].
Recently deep learning-based methods have attracted enormous attention in medical image processing and have become the state-of-the-art for segmentation tasks. **The objective of this study** is to give a comprehensive performance comparison of different integration strategies for the attention mechanisms into the baseline U-Net+ResNet neural network architecture for automatic skull-striping in T2*-weighted MR images with abnormal brain anatomy.
**Materials and experiment**
The concurrent spatial and channel squeeze and excitation (scSE) attention mechanism was proposed to improve performance [19]. However, there are different strategies for integrating the scSE block. In the Standard scSE strategy, the scSE block is applied right after the final convolutional layer, just before the merging of the skip connection. In the scSE-PRE strategy, the scSE block is plugged in before the first convolutional layer.
In the scSE-POST strategy, the scSE block is plugged in after the merging of the skip connection. Finally, the scSE-Identity strategy applies the scSE block in the skip connection branch itself, i.e., in parallel with the main block.
We implemented a deep neural network architecture that combines U-Net [20] and ResNet [21] to provide automatic skull-stripping in T2*-weighted MR images with abnormal brain anatomy (Fig. 1).
To compare the different strategies for integrating the attention mechanism into the baseline U-Net+ResNet neural network architecture, four strategies of plugging the scSE block into the ResNet backbone were implemented, i.e., Standard scSE, scSE-PRE, scSE-POST, and scSE-Identity (Fig. 2).
We used only T2*-weighted MR data generated by the TCGA Research Network ([http://cancergenome.nih.gov/](http://cancergenome.nih.gov/)) within a study of Glioblastoma Multiforme.
Figure 1. The U-Net+ResNet neural network architecture used in this study for skull-stripping in T2*-weighted MR images.
Figure 2. scSE block integration designs explored in this study.
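To make the four placements concrete, the sketch below implements a scSE block and a simplified residual block with each integration point, following our reading of Fig. 2. A Keras implementation is our assumption (the paper does not state its framework), and the convolutional stack is reduced to two 3x3 convolutions for brevity.

```python
from tensorflow.keras import layers

def scse(x, r=8):
    """Concurrent spatial and channel squeeze-and-excitation block [19].
    Assumes the channel count is at least r."""
    c = x.shape[-1]
    # channel SE: squeeze spatially, excite channel-wise
    cse = layers.GlobalAveragePooling2D()(x)
    cse = layers.Dense(c // r, activation="relu")(cse)
    cse = layers.Dense(c, activation="sigmoid")(cse)
    cse = layers.Multiply()([x, layers.Reshape((1, 1, c))(cse)])
    # spatial SE: a 1x1 convolution produces a per-pixel gate
    sse = layers.Conv2D(1, 1, activation="sigmoid")(x)
    sse = layers.Multiply()([x, sse])
    return layers.Add()([cse, sse])

def res_block(x, strategy="scse-post"):
    """Residual block with the four scSE placements (our sketch; the exact
    skip tap point for scSE-PRE follows our reading of Fig. 2)."""
    skip = x
    if strategy == "scse-pre":
        x = scse(x)                       # before the first convolution
    elif strategy == "scse-identity":
        skip = scse(skip)                 # in the skip branch itself
    f = x.shape[-1]
    x = layers.Conv2D(f, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(f, 3, padding="same")(x)
    if strategy == "standard-scse":
        x = scse(x)                       # after the final conv, before merging
    out = layers.Add()([x, skip])
    if strategy == "scse-post":
        out = scse(out)                   # after merging the skip connection
    return layers.Activation("relu")(out)
```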
We used 32 three-dimensional volumes of different subjects. The ground-truth masks were manually created by a group of two expert radiologists. The train-validation-test split ratio was 68-12-20. All processing steps were performed using the 4th time-point image from each analyzed spatial position.
Each model was trained using stochastic gradient descent with the Adam optimizer at a learning rate of 0.00005. In case of no improvement in the sparse categorical cross-entropy loss for 10 epochs, the learning rate was divided by 10. Training was run for 100 epochs using mini-batches of 16 images, with the training data shuffled at the beginning of every epoch.
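A minimal Keras-style training configuration matching this description is sketched below; the framework choice and the array names are our assumptions, with `model`, `train_x`, `train_y`, `val_x`, and `val_y` as placeholders.

```python
import tensorflow as tf

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# divide the learning rate by 10 after 10 epochs without loss improvement
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor="val_loss",
                                                 factor=0.1, patience=10)

model.fit(train_x, train_y, validation_data=(val_x, val_y),
          epochs=100, batch_size=16, shuffle=True, callbacks=[reduce_lr])
```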
To quantitatively evaluate the performance of the different strategies for integrating the scSE attention mechanism into the baseline U-Net+ResNet architecture, we employ several evaluation metrics: Dice Coefficient, Sensitivity, Specificity, and Accuracy.
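All four metrics follow from the pixel-wise confusion matrix of the predicted and ground-truth brain masks; a minimal sketch is given below, assuming boolean masks with brain = True.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice Coefficient, Sensitivity, Specificity and Accuracy from two
    boolean masks of identical shape."""
    tp = np.sum(pred & truth)      # brain pixels correctly labelled
    tn = np.sum(~pred & ~truth)    # non-brain pixels correctly labelled
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return {"dice": 2 * tp / (2 * tp + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + tn + fp + fn)}
```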
**Results**
We evaluated the best model of each analyzed case on the test dataset.
The scSE-POST integration strategy offered the best result, with an average Dice Coefficient of \(0.9810\pm 0.006\). The complete evaluation metrics are shown in Table 1.
The change of loss and accuracy during the validation phase is plotted in Figure 3. From these results it can be concluded that the scSE-POST integration strategy yields the lowest loss (0.0363 at the 69th epoch) and the highest accuracy (0.9855 at the 69th epoch).
\begin{table}
\begin{tabular}{c|c|c|c|c} & **Dice Coefficient** & **Sensitivity** & **Specificity** & **Accuracy** \\ \hline
**Without-scSE** & 0.9777\(\pm\)0.004 & 0.9630\(\pm\)0.009 & 0.9977\(\pm\)0.001 & 0.9888\(\pm\)0.003 \\ \hline
**Standard-scSE** & 0.9726\(\pm\)0.004 & 0.9514\(\pm\)0.007 & 0.9983\(\pm\)0.001 & 0.9864\(\pm\)0.003 \\ \hline
**scSE-PRE** & 0.9572\(\pm\)0.005 & 0.9243\(\pm\)0.017 & 0.9976\(\pm\)0.002 & 0.9789\(\pm\)0.005 \\ \hline
**scSE-POST** & 0.9810\(\pm\)0.006 & 0.9752\(\pm\)0.009 & 0.9955\(\pm\)0.002 & 0.9904\(\pm\)0.002 \\ \hline
**scSE-Identity** & 0.9621\(\pm\)0.006 & 0.9302\(\pm\)0.010 & 0.9988\(\pm\)0.001 & 0.9813\(\pm\)0.004 \\ \hline \end{tabular}
\end{table}
Table 1: Performance comparison of different integration strategies for the scSE attention mechanism within the baseline U-Net+ResNet architecture, mean \(\pm\) SD.
Figure 3: The change of loss (left) and accuracy (right) in the validation phase.
The visualization of skull-stripping comparison is shown in Figure 4.
The overall performance comparison demonstrates that the scSE-POST integration strategy of the attention mechanisms offers the best results in terms of all evaluation metrics except specificity (i.e., this strategy misses slightly more true pixels of the non-brain region). The small values of the standard deviation for all integration strategies indicate the robustness of deep learning-based skull-stripping using the U-Net+ResNet neural network architecture with scSE attention mechanisms in T2*-weighted MR images with abnormal brain anatomy.
**Conclusions**
In this study, we conduct a performance comparison of different integration strategies for the attention mechanisms into the baseline U-Net+ResNet neural network architecture for automatic skull-stripping.
All integration strategies provide robust skull-stripping in T2*-weighted MR images with abnormal brain anatomy. However, the scSE-POST integration strategy of the attention mechanisms provides the best results.
|
2301.02361 | A Bayesian Neural Network Approach for Tropospheric Temperature
Retrievals from a Lidar Instrument | We have constructed a Bayesian neural network able of retrieving tropospheric
temperature profiles from rotational Raman-scatter measurements of nitrogen and
oxygen and applied it to measurements taken by the RAman Lidar for
Meteorological Observations (RALMO) in Payerne, Switzerland. We give a detailed
description of using a Bayesian method to retrieve temperature profiles
including estimates of the uncertainty due to the network weights and the
statistical uncertainty of the measurements. We trained our model using lidar
measurements under different atmospheric conditions, and we tested our model
using measurements not used for training the network. The computed temperature
profiles extend over the altitude range of 0.7 km to 6 km. The mean bias
estimate of our temperatures relative to the MeteoSwiss standard processing
algorithm does not exceed 0.05 K at altitudes below 4.5 km, and does not exceed
0.08 K in an altitude range of 4.5 km to 6 km. This agreement shows that the
neural network estimated temperature profiles are in excellent agreement with
the standard algorithm. The method is robust and is able to estimate the
temperature profiles with high accuracy for both clear and cloudy conditions.
Moreover, the trained model can provide the statistical and model uncertainties
of the estimated temperature profiles. Thus, the present study is a proof of
concept that the trained NNs are able to generate temperature profiles along
with a full-budget uncertainty. We present case studies showcasing the Bayesian
neural network estimations for day and night measurements, as well as in clear
and cloudy conditions. We have concluded that the proposed Bayesian neural
network is an appropriate method for the statistical retrieval of temperature
profiles. | Ghazal Farhani, Giovanni Martucci, Tyler Roberts, Alexander Haefele, Robert J. Sica | 2023-01-06T02:55:38Z | http://arxiv.org/abs/2301.02361v1 | # A Bayesian Neural Network Approach for Tropospheric Temperature Retrievals from a Lidar Instrument
###### Abstract
We have constructed a Bayesian neural network capable of retrieving tropospheric temperature profiles from rotational Raman-scatter measurements of nitrogen and oxygen and applied it to measurements taken by the Raman Lidar for Meteorological Observations (RALMO) in Payerne, Switzerland. We give a detailed description of using a Bayesian method to retrieve temperature profiles including estimates of the uncertainty due to the network weights and the statistical uncertainty of the measurements. We trained our model using lidar measurements under different atmospheric conditions, and we tested our model using measurements not used for training the network. The computed temperature profiles extend over the altitude range of 0.7 km to 6 km. The mean bias estimate of our temperatures relative to the MeteoSwiss standard processing algorithm does not exceed 0.05 K at altitudes below 4.5 km, and does not exceed 0.08 K in an altitude range of 4.5 km to 6 km. This agreement shows that the neural network estimated temperature profiles are in excellent agreement with the standard algorithm. The method is robust and is able to estimate the temperature profiles with high accuracy for both clear and cloudy conditions. Moreover, the trained model can provide the statistical and model uncertainties of the estimated temperature profiles. Thus, the present study is a proof of concept that the trained NNs are able to generate temperature profiles along with a full-budget uncertainty. We present case studies showcasing the Bayesian neural network estimations for day and night measurements, as well as in clear and cloudy conditions. We have concluded that the proposed Bayesian neural network is an appropriate method for the statistical retrieval of temperature profiles.
Atmospheric Temperature; Neural Networks; Bayesian Deep Learning; Raman Lidar; Atmospheric Retrievals
## 1 Introduction
Enhancing our knowledge of trends and variability in atmospheric temperature is essential to better understanding the impacts and causes of weather and climate change [1, 2, 3, 4]. Tropospheric warming has been measured by different observational platforms since the mid-twentieth century. However, the confidence level in the rate of its change and its vertical structure remains relatively low, which can limit the ability to produce reliable inferences about the true long-term trends. Continuous measurements, along with the development of reliable methods of retrieving temperature profiles, are vital for climate change studies. Pure Rotational Raman (PRR) lidars are ground-based remote sensing instruments with excellent vertical and temporal resolutions, well suited to providing accurate tropospheric temperature profiles [5, 6, 7]. Typically, to retrieve temperature profiles, the ratio of two pre-processed PRR signals is calculated. The PRR spectrum contains two symmetrically positioned Stokes and anti-Stokes branches on either side of the excitation line with approximately the same intensity [5]. The pre-processing of PRR signals involves removing the background counts and implementing saturation corrections. Depending on the system, it might be essential to consider some additional height-dependent corrections. Moreover, an external measurement of temperature (typically from a coincident radiosonde) is required to find the coefficients of the calibration function, which relates the lidar rotational-Raman-intensity measurements to temperature profiles [6]. Here we demonstrate that neural network (NN) algorithms are able to accurately retrieve Raman temperature profiles. Moreover, we present a simple yet effective method to calculate the statistical and model uncertainties of the estimated temperature profiles.
Machine learning methods, specifically NNs, have recently become popular among researchers in different fields from network security to medicine. Some interesting implementations of this technique can be found in [8, 9, 10, 11]. NNs have also been implemented for atmospheric constituent retrievals using different instruments [12, 13, 14, 15, 16]. These algorithms are capable of learning complex relations between input and output data without a need to know the mapping function. Moreover, to train a NN algorithm there is no need to perform any of the data pre-processing steps needed for the traditional analysis. Although NNs are powerful predictive algorithms, they often do not quantify the uncertainty of the output. For atmospheric temperature profiles, knowing the level of uncertainty of the output is essential. The uncertainty of the model parameters (weights) needs to be calculated. Quantifying uncertainty in NNs is an area of ongoing research, and many studies have been conducted to address the issue [17, 18, 19, 20, 21].
Two major contributions of this study are to explore the possibility of implementing NNs to estimate the temperature profiles, and to quantify the uncertainties of the estimated temperature profiles. By implementing NNs, we can avoid the data pre-processing tasks which are typically needed in conventional temperature retrievals. Although none of the pre-processing tasks are difficult, they can bear large uncertainties, thus all the independent uncertainty components should be calculated and correctly propagated through the data processing chain [22]. For example, in many lidar systems, the raw profiles from different measurement channels are glued; the merging process is empirical and can be a source of uncertainty that is hard to quantify [23]. Also, it is possible to encounter signal-induced noise (SIN) caused by high photon counts. SIN should be modeled correctly to remove background counts. The modeling of SIN is still based on empirical methods and bears uncertainties [24, 25]. Furthermore, estimating some characteristics of the lidar instruments, such as the lidar overlap function, can be challenging and is mostly based on empirical methods [26, 27]. Thus, NNs have the potential of providing temperature profiles without the need for data pre-processing steps. Moreover, although NNs have been widely used for retrieving atmospheric constituents, to our knowledge, this is the first attempt to include model uncertainties for each retrieved profile, and a complete profile-based uncertainty budget is provided. Here we use RALMO measurements as input and the traditional retrievals from RALMO as the ground truth to train a Bayesian NN algorithm to retrieve tropospheric temperature profiles and to provide its uncertainty. In Sect. 2, we present a brief description of the instrument. We also discuss the traditional method of calculating temperature profiles. Sect. 3 is a brief description of NN models and methods of quantifying their uncertainties. In Sect. 4, we describe the general architecture of the NN which was built to train the temperature profiles. Sect. 5 includes full descriptions of three case studies and discusses the overall performance of the Bayesian NN algorithm for the entire test data-set. Sect. 6 is a summary of the results, with a short discussion evaluating the implementation of the Bayesian NN and its advantages and shortcomings for retrieving temperature profiles. It also provides a future road map toward using NN algorithms in atmospheric studies.
## 2 RALMO

### RALMO
The RAman Lidar for Meteorological Observations (RALMO) is located in Payerne, Switzerland (46.81\({}^{\circ}\)N, 6.94\({}^{\circ}\)E, 491 m ASL). The lidar was built at the Ecole Polytechnique Federale de Lausanne and is operated by the Federal Office of Meteorology and Climatology, hereafter referred to as MeteoSwiss [28]. RALMO has been fully operational since 2008 and has provided nearly continuous measurements of the tropospheric temperature since then. In clear atmospheric conditions, with a 30 minute integration time, it is capable of reaching 6-7 km during daytime and 12 km during nighttime measurements. The lidar is equipped with a frequency-tripled Nd:YAG laser emitting at 355 nm with an emission energy of about 400 mJ per pulse and a repetition rate of 30 Hz. Using a beam expander, the beam's diameter is expanded to 14 cm, which reduces the beam divergence to 0.09 \(\pm\) 0.02 mrad. At the receiving end of RALMO, four high-efficiency reflecting parabolic mirrors with 30 cm diameters are used to collect the backscattered signals. The Raman-shifted backscattered signals are received after passing through a two-stage polychromator diffraction process, and are then recombined into two groups of \(J_{high}\) and \(J_{low}\) signals. The \(J_{high}\) and \(J_{low}\) signals represent the high and low quantum number lines in the Stokes and anti-Stokes branches, respectively. Details on RALMO instrumentation, updates and its characteristics can be found in [29].
### Traditional Temperature Retrievals
The interaction between the light emitted by the laser and the \(O_{2}\) and \(N_{2}\) molecules in the atmosphere results in a frequency-shifted Raman signal which is backscattered to the lidar's receiver. The measured backscattered signal at altitude \(z\) is given by the Raman lidar equation:
\[S(z)=\frac{C}{z^{2}}O(z)n(z)\mathcal{L}_{atm}^{2}(z)\left[\sum_{i=O_{2},N_{2}}\sum_{J_{i}}\tau(J_{i})\,\eta_{i}\left(\frac{d\sigma}{d\Omega}\right)^{i}(J_{i})\right]+B \tag{1}\]
where \(C\) is the lidar constant, \(O(z)\) is the geometrical overlap between the transmitted laser beam and the telescope, \(n(z)\) is the air number density, \(\mathcal{L}_{atm}^{2}(z)\) is the atmospheric round-trip transmission, \(\tau(J_{i})\) is the transmission at the wavelength of each line \(J_{i}\), \(\eta_{i}\) is the volume mixing ratio for each molecule, \(\left(\frac{d\sigma}{d\Omega}\right)^{i}(J_{i})\) is the Raman cross section for each \(J_{i}\), and \(B\) is the background counts. The high frequency-shifted and low frequency-shifted signals are both described by the lidar equation, and their ratio \(Q(z)=\frac{J_{low}(z)}{J_{high}(z)}\) is equivalent to:
\[Q(z)=\frac{\sum_{i=O_{2},N_{2}}\sum_{J_{i}}\tau_{low}(J_{i})\,\eta_{i}\left(\frac{d\sigma}{d\Omega}\right)^{i}(J_{i})}{\sum_{i=O_{2},N_{2}}\sum_{J_{i}}\tau_{high}(J_{i})\,\eta_{i}\left(\frac{d\sigma}{d\Omega}\right)^{i}(J_{i})}. \tag{2}\]
In theory, by calculating the differential backscatter cross sections for the two wavelengths and inserting the transmission values at each \(J_{i}\), the temperature profile can be calculated. However, in practice, a temperature profile retrieved from a nearby radiosonde is used, and \(Q\) is related to the temperature profile as:
\[T=\frac{A}{B+\ln Q} \tag{3}\]
where \(T\) is the temperature profile and \(A\) and \(B\) are two coefficients which are determined by calibrating \(T\) with respect to coincident radiosonde temperature measurements. It is important to note that, in order to retrieve \(T\) correctly, the signals should be corrected for the dead time of the acquisition system and for the background counts before calculating \(Q\). A detailed description of the steps for calculating the temperature profiles is available in [6].
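As an illustration of this calibration step, the coefficients \(A\) and \(B\) can be obtained by a least-squares fit of the linearized relation \(1/T=(B+\ln Q)/A\) against coincident radiosonde temperatures (a sketch, not the MeteoSwiss processing code):

```python
import numpy as np

def calibrate(Q, T_sonde):
    """Fit T = A / (B + ln Q) against radiosonde temperatures.

    Linearized form: 1/T = (1/A) * ln(Q) + B/A.
    """
    slope, intercept = np.polyfit(np.log(Q), 1.0 / T_sonde, 1)
    A = 1.0 / slope
    B = intercept * A
    return A, B

def temperature(Q, A, B):
    """Eq. 3: temperature profile from the signal ratio Q(z)."""
    return A / (B + np.log(Q))
```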
Recently, [29] showed that the temperature profiles retrieved from RALMO data during the period of July 2017 to December 2018 were in excellent agreement with the two daily co-located reference radiosonde flights. The mean difference between the traditional and the radiosonde profiles was found to be \(0.62\pm 0.1\) K in daytime and \(0.66\pm 0.34\) K in nighttime. Their result indicated that the RALMO temperature profiles retrieved using this procedure had high stability and small statistical uncertainty. Thus, we have used the traditional temperature profiles and their corresponding statistical uncertainty profiles as the ground truth profiles in this study. We have used the same period of lidar measurements and their corresponding retrieved temperatures to train our NN algorithm.
## 3 Neural Networks for Regression
### Neural Networks
NNs are ensembles of nonlinear functions that are trained to infer a statistical relationship between the input and output vectors from a set of training examples. NNs have a layer-wise structure containing simple computational elements known as nodes. At each layer, nodes are connected to the previous and to the next layer. The connections are weighted: a weight matrix connects one layer to the next. Formally, the output of layer \(j\) can be written as:
\[\psi_{j}\left(\sum_{i=1}^{N}w_{j,i}x_{i}+b_{j}\right)=\psi_{j}(\mathbf{W}^{\top}\mathbf{x}+\mathbf{b}) \tag{4}\]
where \(N\) is the number of neurons in the layer and \(\psi\) is a nonlinear function applied to the weighted sum of the input vector (**x**) with the weight matrix (**W**). The nonlinear function, known as the activation function, is chosen from sigmoidal functions. Sigmoidal functions are real-valued functions such that:
\[\begin{split}\lim_{x\rightarrow-\infty}\psi(x)=& 0\\ \lim_{x\rightarrow\infty}\psi(x)=& 1.\end{split} \tag{5}\]
The architecture of a NN with an input layer, 2 hidden layers, and an output layer is shown in Fig. 1. In the context of lidar measurements, the input data is a vector containing photon counts at each altitude and for each channel. For temperature retrievals, the output layer is a vector containing the temperature at each altitude. Layers between the input and output layers are called hidden layers, and their number represents the depth of the NN.
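In code, the layer-wise computation of Eq. 4 amounts to repeated affine maps followed by the activation; the following NumPy sketch (using tanh, as in the final model of Sect. 4, and a linear output layer for regression) illustrates the forward pass:

```python
import numpy as np

def forward(x, layers):
    """Forward pass through a fully connected network (Eq. 4).

    `layers` is a list of (W, b) pairs, with W of shape (n_in, n_out).
    """
    for W, b in layers[:-1]:
        x = np.tanh(W.T @ x + b)   # hidden layers: psi(W^T x + b)
    W, b = layers[-1]
    return W.T @ x + b             # linear output for regression
```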
During the training process, the weights and biases are tuned to minimize a cost function. For regression, the mean squared error between the target vector **y** and the prediction \(\hat{\textbf{y}}\) is normally chosen:
\[L_{\textbf{w}}(\textbf{w},\textbf{x},\textbf{y})=\frac{1}{N}\sum_{i}(y_{i}- \hat{y}_{i})^{2}. \tag{6}\]
The cost function is numerically minimized with respect to the parameters. Most optimization algorithms are similar to the gradient descent with the following update rule:
\[w_{i,t+1}=w_{i,t}-\tau\frac{\partial L}{\partial w_{i,t}} \tag{7}\]
where \(t\) is the current time step and \(\tau\) is a learning rate that defines the size of the update and is set by the user [30, 31]. In practice, at each time step, instead of computing the gradient over the whole data-set, the gradient of a subset of the data is computed; the latter method is known as stochastic gradient descent. Adam is another variant of gradient descent in which a moving average of the gradient mean and variance is stored. This approach leads to more effective weight updates [32].
### Quantifying Uncertainty in Neural Networks
As mentioned in Section 3.1, the optimized target function is the result of the optimized weights. Thus, to calculate the effect of model uncertainty on the target function, the uncertainty of the weights should be calculated.

Figure 1: The architecture of an NN with an input layer, 2 hidden layers, and an output layer. The layers are connected via elements of the weight matrix **W**. The input and the output do not need to be on the same grid.

Quantifying the uncertainty of NNs' predictions for physical systems is essential. One major approach for adopting uncertainties in NN models is to use the Bayesian formalism, in which the parameters (weights) of a NN are set to follow an _a priori_ Normal distribution; the _posterior_ distribution over the parameters is then computed, which yields an estimate of the model uncertainty. Using Bayes' theorem, given a training data-set, the _posterior_ distribution for the space of parameters is written as [33, 34]:
\[P(\textbf{W}|\textbf{x},\textbf{y})=\frac{P(\textbf{W})P(\textbf{y}|\textbf{W}, \textbf{x})}{P(\textbf{y}|\textbf{x})} \tag{8}\]
where \(P(\textbf{y}|\textbf{x})\) is calculated as:
\[P(\textbf{y}|\textbf{x})=\int P(\textbf{y}|\textbf{x},\textbf{w})P(\textbf{w} )d\textbf{w} \tag{9}\]
Performing the integration in Eq. 9 is called marginalising the likelihood over the parameters. Using Eq. 8 we can predict an output for an unseen input data point \(\mathbf{x}^{\star}\):
\[P(\mathbf{y}^{\star}|\mathbf{x}^{\star},\mathbf{x},\mathbf{y})=\int P(\mathbf{w}|\mathbf{x},\mathbf{y})P(\mathbf{y}^{\star}|\mathbf{x}^{\star},\mathbf{w})\,d\mathbf{w} \tag{10}\]
The integral in Eq. 10 is called inference; for most models, the marginalisation cannot be done analytically, and an approximation is needed. A variety of methods have been developed to approximate Bayesian inference, among which Markov chain Monte Carlo (MCMC) methods [35] and their variants, such as Langevin diffusion methods and Hamiltonian methods, are well-known [36, 37, 38]. Recently, variational Bayesian methods have gained interest as well [18]. Although these approximations are appealing, they are computationally very expensive.
An alternative to the Bayesian approach is ensemble methods, in which the estimates of multiple individual models are aggregated. Randomization-based and boosting-based approaches are the main branches of ensemble methods. In randomization-based models, each ensemble member can be trained individually and independently of the other members. The variance of the ensemble's predictions is interpreted as the uncertainty of the prediction. Recently, [20] used the idea of ensembles for NNs. In their approach, they trained multiple individual NNs such that for each NN the parameters are initialized randomly and the data points are shuffled and randomly sampled. The model uncertainty was obtained by averaging predictions over multiple NN models.
Recently, [21] used the randomized maximum a _posteriori_ sampling (RMS) method, which combines the Bayesian and ensemble approaches. In this method, by adding a regularization term to the cost function, a maximum a _posteriori_ (MAP) estimate of the parameters is obtained, which is a point estimate of the Bayesian posterior. Injecting noise into either term of the cost function and sampling repeatedly produces a distribution of solutions, which can be shown to be a good estimate of the true _posterior_ distribution [39]. In practice, the mean of the distribution is used as the optimum parameter state and its standard deviation indicates the uncertainty of the estimation.
More formally, given a prior Normal distribution for the parameters, \(P(\textbf{W})=N(\mu_{prior},\Sigma_{prior})\), the MAP estimate is:
\[\mathbf{W}_{MAP}=\operatorname*{arg\,max}_{\mathbf{W}}\,\log P(\mathbf{x},\mathbf{y}|\mathbf{W})-\frac{1}{2}\left\|\Sigma_{prior}^{-\frac{1}{2}}(\mathbf{W}-\mu_{prior})\right\|^{2}. \tag{11}\]
When \(\mu_{prior}=0\), the equation becomes the standard L2 regularization. In the RMS method, \(\mu_{prior}\) is replaced by a random draw from \(N(\mu_{prior},\Sigma_{prior})\). Thus the cost function for regression for each network can be written as:
\[Cost_{j}=\frac{1}{N}\left\|\frac{(\mathbf{y}-\hat{\mathbf{y}})^{2}}{\sigma_{j}^{2}}\right\|+\frac{1}{N}\left\|\mathcal{H}^{\frac{1}{2}}(\mathbf{w}_{j}-\mathbf{w}_{RMS,j})\right\|^{2} \tag{12}\]
where \(\sigma_{j}^{2}\) is the uncertainty of the ground truth, the diagonal of \(\mathcal{H}\) is defined as \(\frac{1}{\sigma_{prior}^{2}}\), and \(j\) indicates the number assigned to each network.
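For each ensemble member \(j\), the cost of Eq. 12 can be sketched as follows (a TensorFlow-style sketch, not the actual training code; `anchors` holds the member's fixed random draws \(\mathbf{w}_{RMS,j}\) from the prior):

```python
import tensorflow as tf

def anchored_cost(y_true, y_pred, sigma2, weights, anchors, prior_var):
    """Eq. 12: data misfit plus anchoring of the weights to a prior draw."""
    n = tf.cast(tf.size(y_true), tf.float32)
    data_term = tf.reduce_sum(tf.square(y_true - y_pred) / sigma2) / n
    # diag(H) = 1/prior_var, so the anchor term is sum((w - w0)^2)/prior_var.
    anchor_term = tf.add_n(
        [tf.reduce_sum(tf.square(w - w0)) for w, w0 in zip(weights, anchors)]
    ) / (prior_var * n)
    return data_term + anchor_term
```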
We modified the algorithm to accommodate the effect of the input (raw photon counts) uncertainty on the weight optimization procedure. At relatively modest count rates (e.g., \(\sim\)20 or less), the distribution of uncertainties for lidar measurements tends to be Normal. Thus, it is possible to add small Gaussian perturbations to the measurements in each iteration. During training, we sample repeatedly from both the input data (photon counts) and the weight spaces. In this approach, we can capture the effect of noise on the estimation of the weight parameters. We also repeated the process under the more accurate assumption that photon counts follow a Poisson distribution [40]; the difference between the results was not significant. Adding Gaussian noise to the input data also has the benefit of acting as a regularization term (Tikhonov regularizer), helping to avoid overfitting [33]. Moreover, similar to the approach of [19], at test time a small noise was added to the measurements multiple times for each ensemble member, and the temperature estimate was calculated each time. For each measurement, the averaged profile was used as the final estimate and its standard deviation was used as the uncertainty of the estimation.
We will show that the precision of our retrievals is limited by the precision of our ground truth. In the case of an optimal model with minimal model uncertainty, the estimate of the temperature profile will be similar to the ground truth; however, the uncertainty of the ground truth should be added to the total uncertainty of the estimates. In the traditional method, the statistical uncertainty of each profile is calculated [29]. Thus, we are able to simultaneously estimate both the temperature profiles and their corresponding uncertainty, producing two outputs: the estimated profile and the estimated uncertainty of the ground truth. The final variance estimate can be written as follows [41]:
\[\hat{\sigma}^{2}=\frac{1}{m}\sum_{j=1}^{m}\hat{\sigma}_{j}^{2}+\frac{1}{m}\sum_{j=1}^{m}\hat{\mathbf{y}}_{j}^{2}-\left(\frac{1}{m}\sum_{j=1}^{m}\hat{\mathbf{y}}_{j}\right)^{2}, \tag{13}\]
where \(m\) is the number of ensembles and \(\hat{\sigma}^{2}_{j}\) is the estimated variance of the ground truth uncertainty for each profile.
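Putting the pieces together, the prediction and the total variance of Eq. 13 can be computed as in the following sketch; each network is assumed to return both the temperature profile and the estimated ground-truth variance, and the measurement noise is emulated by small Gaussian perturbations of the input:

```python
import numpy as np

def predict_with_uncertainty(ensemble, x, noise_std, n_draws=10):
    """Ensemble mean profile and total variance (Eq. 13)."""
    temps, sigmas2 = [], []
    for net in ensemble:                       # m anchored networks
        for _ in range(n_draws):               # test-time input noise
            x_noisy = x + np.random.normal(0.0, noise_std, size=x.shape)
            t_hat, s2_hat = net(x_noisy)
            temps.append(t_hat)
            sigmas2.append(s2_hat)
    temps, sigmas2 = np.array(temps), np.array(sigmas2)
    mean = temps.mean(axis=0)
    total_var = sigmas2.mean(axis=0) + (temps**2).mean(axis=0) - mean**2
    return mean, total_var
```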
Of critical importance is that the estimated uncertainty of the ground truth is inherited from the data and cannot be minimized. However, the model uncertainty can be minimized as more data is added to the training set.
## 4 Description of the data and model
A total of 4510 temperature profiles from RALMO measurements between July 2017 and August 2018 are used in this study. We divided our data into training (70% of the total), validation (10%) and test (20%) sets. Of note, during this period only 8167 measurements were recorded, because for many hours (or even days) measurements were halted due to weather conditions. Among the available measurements, only raw data corresponding to temperature profiles with valid values in the height range from 0.7 km to 6 km were selected, which corresponds to 4510 measurement and temperature profiles. Thus, many temperature profiles that did not reach the pre-defined height were discarded. In this period (July 2017 to August 2018), if the lidar had been fully operational, we would have had more than 15000 raw profiles. Considering the weather conditions, as well as our criteria for selecting profiles, we had 4510 profiles (30% of the possible measurements). Thus, the dataset was already extremely sparse in time. However, to ensure that the training and test data were not correlated in time, we made sure that the test data were not selected from the same days (and nights) as the training set.
The dimension of the input layer, corresponding to photon counts from the \(J_{high}\) and \(J_{low}\) channels, was 4688, and the output layer, corresponding to the temperature profiles, had 175 neurons. To improve the optimization process in NN models, it is often necessary to ensure that the input data have similar ranges of values, thus the data should be scaled. Hence, each input measurement profile is standardized using the following equation:
\[x_{standard}=\frac{x-\mu}{\sigma}, \tag{14}\]
where \(\mu\) is the mean and \(\sigma\) is the standard deviation of a profile. As mentioned earlier, we used temperature profiles from the traditional retrievals as the ground truth. During the period considered, the lidar was calibrated 7 times, with only small differences between the calibration values. For more extended periods of measurements, or in conditions where the calibration constants vary drastically, it is possible to use the \(Q\) values as the ground truth and train the model to estimate them. Then, using Eq. 3, the temperature profile can be calculated.
To build the RMS algorithm, we modified the source code developed by [21]. Our RMS model is trained using the Keras API in Python. The NN models have several hyperparameters which are not determined from the data during training; hyperparameter tuning is performed separately. The number of hidden layers, the number of neurons per layer and the choice of activation function are important hyperparameters for a network. We trained our model multiple times using random selections from different choices of activation functions, as well as different numbers of hidden layers and neurons. Considering both performance and the time needed to train the model, we constructed our final network using 4 hidden layers, each containing 250 neurons. A "tanh" activation function is used to connect the hidden layers. Using denser NNs was computationally expensive and did not improve the estimates. Estimating the error over all training samples leads to slow convergence, thus we used a mini-batch approach at each epoch. The batch size was another hyperparameter, and a batch size of 128 was selected. As the mini-batch approach can lead to noisier estimates, we used the Adam algorithm, which is well suited for noisy data [32]. We also tried a stochastic gradient algorithm, which had lower performance compared to Adam.
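For reference, a single ensemble member with the chosen hyperparameters can be sketched in Keras as follows (the anchoring term of Eq. 12 and the second output for the ground-truth uncertainty are omitted for brevity):

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_member(n_in=4688, n_out=175):
    """4 hidden tanh layers of 250 neurons; Adam with mini-batches of 128."""
    model = tf.keras.Sequential()
    model.add(layers.Dense(250, activation="tanh", input_shape=(n_in,)))
    for _ in range(3):
        model.add(layers.Dense(250, activation="tanh"))
    model.add(layers.Dense(n_out))   # linear output: temperature profile
    model.compile(optimizer="adam", loss="mse")
    return model
```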
## 5 Results
The trained NN model, based on an ensemble of 100 networks, was used to estimate the temperature profiles of all 910 samples in the test data-set. In this study, the ground truth temperature profiles represent a 30 minute integration time and have a vertical resolution of 30 m in a height range from 0.7 km to 6 km. The retrieved profiles were compared against the ground truth temperature profiles. Fig. 2 (left panel) shows the corresponding mean bias error (MBE), calculated as \(\frac{1}{n}\sum_{i=1}^{n}(T_{NN_{i}}-T_{truth_{i}})\). The MBE is always smaller than 0.05 K for the altitude range of 0.7 km to 4.5 km and smaller than 0.08 K for the altitude range of 4.5 km to 6 km. The bias is much smaller than the average of the summed statistical uncertainty of the target profiles (\(\frac{1}{N}\sum_{j}\sigma_{j}\), where \(N\) is the number of test profiles), which covers the shaded area. We also calculated the root mean squared error (RMSE) for the test period. The result is shown in Fig. 2 (right panel).
An individual NN algorithm is used to retrieve the temperature profiles for both daytime and nighttime measurements, in both clear and cloudy conditions, where the clouds are thin enough for lidar returns to be collected. The NN's estimation as well as the ground truth (traditional) profile corresponding to 30 min coadded count measurements from 22:30 UT on 23 August 2017, typical for a clear nighttime sky, are shown in the left panel of Fig. 3. The cyan shaded area shows the summation of the uncertainties of the two profiles. The two profiles lie within the shaded area. To see the difference between the two profiles more clearly, (\(T_{NN}-T_{traditional}\)) is plotted in the right panel of Fig. 3. The difference between the two profiles is within the summation of the uncertainties of the two profiles (shown as the shaded gray area).
The total uncertainty of the NN is also plotted against the uncertainty of the target profile. The total uncertainty of the NN (black dotted curve) is the sum of the estimated model uncertainty (cyan curve) and the estimated uncertainty of the ground truth (red curve). The model uncertainty does not exceed 0.4 K at lower altitudes. The estimated uncertainty of the ground truth (red curve) also matches well with the true uncertainty of the ground truth (blue curve).
We then considered a typical daytime measurement in clear sky. The traditional and NN temperature profiles for measurements from 16:30 UT on 6 July 2017 are shown in the left panel of Fig. 5. The difference between the two profiles is within the summation of their uncertainties. The total uncertainty of the NN is also plotted (right panel of Fig. 5). The estimated statistical uncertainty of the NN matches well with the true uncertainty of the traditional profile. The model uncertainty has a low value at most altitudes; however, at about 1.8 km it shows a higher uncertainty of 0.6 K.

Figure 3: The temperature estimation on 23 August 2017 at 22:30 UT. Left panel: The estimated (NN) temperature profile is shown in red, and the target profile is shown in blue. Right panel: The difference between the estimation and the ground truth profiles is plotted (black curve). The gray shaded area represents the summation of the uncertainties of the NN and target profiles.

Figure 2: Left panel: The average MBE (red line) is shown. The shaded gray area represents the average of the summed statistical uncertainty of the target profiles. Right panel: RMSE calculation for the temperature profiles on the test data set.
In general, in clear conditions, the agreement between the NN estimates and the traditional profiles is within their uncertainties. However, for cloudy cases, the difference between the two profiles can grow larger. Measurements from 19:00 UT on 12 Sep 2017 indicate that at lower altitudes (below 3 km) a layer of cloud was present (Fig. 6). The NN estimate and the traditional temperature profile are in good agreement at lower altitudes. At about 3 km their difference grows larger, and at about 4.5 km the two profiles once again become similar. The left panel in Fig. 7 shows the retrieved NN temperature in red and the traditional profile in blue, and the middle panel plots the difference between the two profiles. The model and statistical uncertainties of the NN profile are much larger compared to the uncertainties of the NN in the previous cases (clear night and day). However, below 3 km (in the altitude range where the agreement between the NN and the traditional profile is good) the uncertainty of the NN is smaller than 0.9 K for most altitudes (Fig. 7, right panel).
To further illustrate our method, we consider 3 interesting cases with moderate to large temperature inversions. Fig. 8 shows more examples of estimated NN temperatures plotted against their corresponding traditional temperature profiles. The left panel shows the result for measurements from 2:30 UT on 5 December 2017. The traditional temperature profile shows significant variability, and the NN estimate successfully captures these temperature changes. The middle panel represents temperature profiles from 11:30 UT on 26 October 2017. Similar to the previous example, the NN estimate is in good agreement with the traditional profile. The right panel shows results for measurements from 19:00 UT on 3 November 2017. On this night, a thin layer of clouds was present at higher altitudes (above 4.5 km).
Figure 4: The statistical uncertainty of the NN's retrieval (red curve) is plotted against the statistical uncertainty of the target profile (blue curve). The model uncertainty is shown in cyan, and the total uncertainty is shown with the black dotted curve.
Figure 5: The temperature estimation on 6 July 2017 at 16:30 UT. Left panel: The NN estimation (red curve) is plotted against the traditional profile (blue curve). Middle panel: The difference between the estimation and the ground truth profiles is plotted (black curve). Right panel: The statistical uncertainty of the NN's retrieval (red curve) is plotted against the statistical uncertainty of the target profile (blue curve). The model uncertainty is shown in cyan, and the total uncertainty is shown with the black dotted curve.
## 6 Summary and Conclusions
Our study provides a proof of concept that NNs are capable of estimating the ground truth temperature profiles. We have shown that a single model can be trained to estimate both daytime and nighttime temperature profiles in various atmospheric conditions. We trained our model using 3600 measurements and their corresponding temperature profiles. We evaluated our model using 910 unseen measurements. The corresponding MBE exhibited a bias of less than 0.05 K in the range of estimation. We showed that the trained model is capable of estimating temperature profiles for different atmospheric conditions (clear and cloudy). However, for cloudy conditions, at some altitudes, the difference between the NN model and the traditional retrieval could grow larger than in the clear-condition estimates. Details of the retrieved profiles for a few case studies were also provided.
Figure 8: Three examples of the NN method successfully retrieving temperature when inversions are present. Left panel: NN and traditional temperature profiles for measurements from 2:30 UT on 5 December 2017. Middle panel: NN and traditional temperature profiles for measurements from 11:30 UT on 26 October 2017. Right panel: NN and traditional temperature profiles for measurements from 19:00 UT on 3 November 2017.
Figure 6: Measurements from 19:00 UT on 12 Sep 2017; a cloudy night.
Figure 7: The temperature estimation on 12 Sep 2017 at 19:00 UT. Left panel: The NN estimation (red curve) is plotted against the traditional profile (blue curve). Middle panel: The difference between the estimation and the ground truth profiles is plotted (black curve). Right panel: The statistical uncertainty of the NN's retrieval (red curve) is plotted against the statistical uncertainty of the target profile (blue curve). The model uncertainty is shown in cyan, and the total uncertainty is shown with the black dotted curve.
Another major focus of the current study was to provide a reliable uncertainty budget for the retrieved profiles. One of the ongoing issues with deep learning models is that they are overconfident, meaning that little or no uncertainty is attached to the models' estimates. In reality, however, the optimized weights are uncertain and their uncertainty can strongly affect the estimates of the models. Here, we approached the problem by training an ensemble of networks, each of which is trained to produce a MAP estimate. The distribution of the solutions is a good estimate of the true distribution. Thus, we could estimate the temperature profiles and produce reliable model uncertainties. In general, for clear atmospheric conditions, the NN retrieval is within the uncertainty of the traditional method. The estimated statistical uncertainty of the NN matches well with the ground truth uncertainty, and the model uncertainty is small. It is important to mention that, unlike the uncertainty inherited from the measurements, the model uncertainty can be reduced by training the model on larger data-sets. For example, as cloudy conditions are more difficult cases for the algorithm to learn, the model uncertainty is larger for cloudy measurements. Thus training the model with more data can result in better estimates of the model uncertainty. Hence, one future step is to train the model with a larger data-set to investigate whether the temperature profiles from cloudy measurements can become closer to the ground truth. Moreover, in this study, we trained a general model for daytime, nighttime, and cloudy conditions. Given a larger data-set, it would be interesting to build three condition-dependent models and compare the outputs with the output of the general model. Another emerging field of research is multitask learning. In this method, the input of the network contains measurements corresponding to different conditions; the first few layers of the network are shared between all types of measurements, while the last few layers are trained specifically for each of the mentioned conditions [42]. The comparison of the accuracy of the estimated profiles from training the NNs based on the general, condition-dependent and multitask models will help us to determine the best model for our application.
Typically, NNs are data-driven and require minimal data processing. For example, there is no need for any additional data preprocessing such as correcting the deadtime effect, calculating the overlap function, etc. However, in this study, we have used the retrieved RALMO temperature profiles as the ground truth. Although the NN algorithm itself is free of data processing steps, extensive data processing is required to provide the ground truth. Thus, using RALMO temperature profiles as the ground truth is one of the limitations of the present study. Hence, one of the major future steps is to use radiosonde temperature profiles as the ground truth, such that the estimated profiles become independent of RALMO's historical temperature profiles.
We are also planning to explore the possibility of training a Bayesian NN on data from other rotational Raman lidars. This way we can have a single trained model capable of estimating temperature profiles from different lidars. One advantage of developing such an algorithm is the ease of comparing temperature profiles from different locations. After the initial training process, applying the model to new unseen data is quite fast; estimating temperature profiles, even on an old laptop, takes only a few seconds. Considering the reliable results we achieved for the temperature profiles, we are interested in using the NN algorithm to estimate water vapor profiles from RALMO.
|
2310.15148 | Physics informed neural networks learning a two-qubit Hamiltonian | Machine learning techniques are employed to perform the full characterization
of a quantum system. The particular artificial intelligence technique used to
learn the Hamiltonian is called physics informed neural network (PINN). The
idea behind PINN is the universal approximation theorem, which claims that any
function can be approximated by a neural network if it contains enough
complexity. Consequently, a neural network can be a solution of a physical
model. Moreover, by means of extra data provided by the user, intrinsic
physical parameters can be extracted from the approach called inverse-PINN.
Here, we apply inverse-PINN with the goal of extracting all the physical
parameters that constitutes a two qubit Hamiltonian. We find that this approach
is very efficient. To probe the robustness of the inverse-PINN to learn the
Hamiltonian of a two-qubit system, we use the IBM quantum computers as
experimental platforms to obtain the data that is plugged into the PINN. We found
that our method is able to predict the two-qubit parameters with 5% accuracy
on average. | Leonardo K. Castelano, Iann Cunha, Fabricio S. Luiz, Marcelo V. de Souza Prado, Felipe F. Fanchini | 2023-10-23T17:52:58Z | http://arxiv.org/abs/2310.15148v1 | # Physics informed neural networks learning a two-qubit Hamiltonian
###### Abstract
Machine learning techniques are employed to perform the full characterization of a quantum system. The particular artificial intelligence technique used to learn the Hamiltonian is called physics informed neural network (PINN). The idea behind PINN is the universal approximation theorem, which claims that any function can be approximated by a neural network if it contains enough complexity. Consequently, a neural network can be a solution of a physical model. Moreover, by means of extra data provided by the user, intrinsic physical parameters can be extracted through the approach called inverse-PINN. Here, we apply inverse-PINN with the goal of extracting all the physical parameters that constitute a two-qubit Hamiltonian. We find that this approach is very efficient. To probe the robustness of the inverse-PINN in learning the Hamiltonian of a two-qubit system, we use the IBM quantum computers as experimental platforms to obtain the data that is plugged into the PINN. We found that our method is able to predict the two-qubit parameters with 5% accuracy on average.
## I Introduction
A significant amount of effort has been invested in the development of new technologies aimed at advancing the field of quantum computing due to its potential to revolutionize digital computing. Precise knowledge of the quantum system, described by a Hamiltonian containing both local and non-local terms, is essential for performing quantum operations and subsequently implementing quantum algorithms. Therefore, the extraction of this information from experimental data becomes a crucial task. Interest in the application of artificial intelligence (AI) to address scientific challenges has been rapidly growing [1; 2; 3; 4]. The use of AI in physics began with the analysis of particle physics experiments [5; 6; 7]. Other applications include the use of machine learning (ML) in condensed matter systems [8; 9], exploring the AdS/CFT correspondence [10], and phase transition determination [11].
More recently, the concept of physics-informed neural networks (PINN) has been introduced, where differential equations describing the physics of the problem are incorporated into the training of the neural network [12; 13; 14]. This approach offers the advantage of reducing the amount of training data required, as the neural network is constrained to satisfy differential equations that adhere to physical laws [12; 13; 14]. Furthermore, the idea of learning and extracting information from a collection of data can also be implemented through the inverse-PINN [12; 13; 14]. In this case, data is provided along with the equations, and physical parameters can be extracted from the model.
Several approaches to learn a Hamiltonian have been proposed in the past. Some rely on the tomography of the density matrix [15; 16; 17], while other protocols focus on utilizing states that inherently encode information about the Hamiltonian, such as steady states and thermal states [18; 19]. Various learning proposals have been put forward that eliminate the need for preparing specialized initial states [20]. For instance, the parameters of the Hamiltonian can be derived from the Ehrenfest theorem [21] or from measuring state properties through their evolution to identify the nearest-neighbor coupled Hamiltonian in superconducting systems [22]. Yu et al. [23] proposed a method that utilizes short-time Hamiltonian evolution and exploits ideas from randomized benchmarking [24].
In this paper, we apply the technique of inverse-PINN to perform the Hamiltonian tomography for a two-qubit system. Experimental data at specific points is also required to extract these parameters. As is well-known, measurements in quantum mechanics are probabilistic and necessitate repeated preparation of the initial configuration for statistical measurement. If measurements are performed as a function of time, statistical measurements must be conducted for each time step. Therefore, probing all observables as a function of time requires a significant number of repetitions of the experiment. To address this issue, we investigate the accuracy of inverse-PINN as a function of the number of collocation points for density matrix tomography measurements. Here, collocation points refer to the points in the time domain where density matrix tomography is performed. We found that inverse-PINN can accurately provide the coupling between the qubits with a reduced number of collocation points. We also incorporate errors into our theoretical model for density matrix tomography to estimate the robustness of the predicted parameters. Finally, we implemented an experiment on IBM quantum computers to address a real-world problem, and we successfully learned the two-qubit Hamiltonian with an error of less than 5% on average, using only 20 collocation data points.
## II Theoretical model
The general Hamiltonian for two qubits can be written as
\[H=-\frac{\hbar}{2}\sum_{l=0}^{3}\sum_{k=0}^{3}J_{k,l}\sigma_{k}\otimes\sigma_{l}, \tag{1}\]
where \(\sigma_{k}\) denotes the corresponding Pauli matrix for \(k=1,2,3\), and \(\sigma_{0}\) is the identity matrix. There are fifteen \(J_{k,l}\) terms that describe local and non-local interactions between the two qubits. The term \(J_{0,0}\) only provides a reference for the energy and we set it equal to zero. The expected value of the corresponding observable can be calculated from
\[\langle\sigma_{k}\sigma_{l}\rangle(t)=\text{Tr}[\rho(t)\sigma_{k}\otimes\sigma _{l}], \tag{2}\]
where \(k,l\in\{0,1,2,3\}\) and \(\rho(t)\) is the density matrix of the two qubits at time \(t\). Moreover, the dynamics of the observables must be included in the inverse-PINN, thus we use the Heisenberg equation
\[\frac{d\langle\hat{O}_{m}\rangle(t)}{dt}=\frac{i}{\hbar}\langle[H,\hat{O}_{m}]\rangle. \tag{3}\]
The above equation for \(\langle\hat{O}_{m}\rangle\) must be implemented for all 15 physical terms corresponding to the \(\sigma_{k}\sigma_{l}\) products different from the identity. In this case, we are led to 15 coupled ordinary differential equations that must be solved for a given initial condition to obtain the desired parameters.
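For concreteness, the Hamiltonian of Eq. 1 can be assembled numerically from Kronecker products of the Pauli matrices; the short NumPy sketch below is purely illustrative:

```python
import numpy as np

# sigma[0] is the identity; sigma[1..3] are the Pauli matrices.
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def hamiltonian(J, hbar=1.0):
    """Two-qubit Hamiltonian of Eq. 1 from the 4x4 coupling array J."""
    H = np.zeros((4, 4), dtype=complex)
    for k in range(4):
        for l in range(4):
            H += J[k, l] * np.kron(sigma[k], sigma[l])
    return -0.5 * hbar * H
```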
The idea behind PINN is to model the solutions of the differential equations by a neural network and to minimize the loss function. Particularly, the loss function can be written as
\[L=L_{model}+L_{data}, \tag{4}\]
where
\[L_{model}=\sum_{m=1}^{15}\sum_{j=1}^{N}\left|\left.\left(\frac{d\langle\hat{O}_{m}\rangle}{dt}-\frac{i}{\hbar}\langle[H,\hat{O}_{m}]\rangle\right)\right|_{t_{j}}\right|^{2}. \tag{5}\]
The loss function associated with the model, \(L_{model}\), enforces the differential equations at the collocation points \(t_{j}\); the values of \(\langle\hat{O}_{m}\rangle\) are mapped onto the NN, so the solutions of the differential equations are represented by the NN. The loss function \(L_{data}\) is related to the data extracted from the experiment. This loss function imposes the constraint that the solutions of the differential equations fit the experimental data at the collocation points. In this sense, the inverse-PINN forces the NN both to satisfy the differential equations and to represent the experimental data, thereby performing the Hamiltonian tomography and extracting the physical parameters \(J_{k,l}\).
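A minimal sketch of this combined loss, assuming a PyTorch implementation, is given below; `net` maps the time \(t\) to the 15 expectation values, and `rhs` is an assumed helper returning \((i/\hbar)\langle[H,\hat{O}_{m}]\rangle\) written in terms of the expectation values and the trainable couplings `J` (Eq. 3):

```python
import torch

J = torch.nn.Parameter(torch.zeros(4, 4))   # trainable couplings J_{k,l}

def pinn_loss(net, rhs, t_colloc, obs_data):
    """L = L_model + L_data (Eq. 4) evaluated at the collocation points."""
    t = t_colloc.clone().requires_grad_(True)    # shape (N, 1)
    o = net(t)                                   # shape (N, 15)
    do_dt = torch.stack(
        [torch.autograd.grad(o[:, m].sum(), t, create_graph=True)[0][:, 0]
         for m in range(15)], dim=1)
    l_model = torch.sum((do_dt - rhs(o, J))**2)  # Eq. 5: physics residual
    l_data = torch.sum((o - obs_data)**2)        # fit to tomography data
    return l_model + l_data
```

During optimization, the network parameters and the couplings `J` are updated jointly, so minimizing the total loss yields the Hamiltonian parameters directly.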
## III Results
We start by analyzing two simpler models, the \(H_{Z}\) and the \(H_{XYZ}\) Hamiltonians. The first Hamiltonian is
\[H_{Z}=-\frac{\hbar}{2}\left(J_{0,3}\sigma_{0}\otimes\sigma_{3}+J_{3,0}\sigma_{ 3}\otimes\sigma_{0}+J_{3,3}\sigma_{3}\otimes\sigma_{3}\right), \tag{6}\]
which has been used to fit experimental data related to quantum dots [25]. The second Hamiltonian is the XYZ model without local terms, thus
\[H_{XYZ}=-\frac{\hbar}{2}\left(J_{1,1}\sigma_{1}\otimes\sigma_{1}+J_{2,2} \sigma_{2}\otimes\sigma_{2}+J_{3,3}\sigma_{3}\otimes\sigma_{3}\right). \tag{7}\]
Both cases have the advantage of decoupling some of the differential equations, so there are only four coupled ODEs for \(J_{0,3}\) and \(J_{3,3}\) (\(J_{1,1}\) and \(J_{3,3}\)). Symmetric equations for \(J_{3,0}\) and \(J_{3,3}\) (\(J_{2,2}\) and \(J_{3,3}\)) complete the set of ODEs. Because these equations are symmetric, we only need to solve one set of four ODEs to study these cases. First, we analyze the number of experimental data points needed. To perform this task, we generate error-free data by numerically solving Eq. 3, with the values of the parameters \(J_{k,l}\) randomly drawn from \([-\omega_{0},\omega_{0}]\), where \(\omega_{0}=2\pi/T\) and \(T\) is the final evolution time. The first analysis concerns the accuracy of the extracted parameters versus the number of collocation points. The accuracy is measured by the mean absolute error, defined as follows:
\[MAE=\frac{1}{D}\sum_{i=1}^{D}\frac{|P_{i}^{exact}-P_{i}^{pred}|}{|P_{i}^{ exact}|}, \tag{8}\]
where \(D\) is the number of physical parameters and \(P_{i}^{exact}\) (\(P_{i}^{pred}\)) denotes the i-th exact (predicted) physical parameter.

Figure 1: MAE for the \(H_{Z}\) Hamiltonian (top panel) and for the \(H_{XYZ}\) Hamiltonian (bottom panel) as a function of the number of collocation points.

In Figure 1, we plot the MAE as a function of the number of collocation points in time for both Hamiltonians \(H_{Z}\) (top panel) and \(H_{XYZ}\) (bottom panel). We use 50 random parameter realizations in both cases and plot the results using the boxplot method to show the spread of the data. Thus, the lowest (highest) marker shows the lowest (highest) data point in the data set excluding any outliers, which are data points that differ significantly from the other observations. The lower (upper) edge of the blue box corresponds to the lower (upper) quartile, and the marker inside the blue box indicates the median. Figure 1 demonstrates that the MAE is lower than 0.4% for only 5 collocation points, which is evidence that the Hamiltonian tomography can be performed with very good accuracy.
To further analyze the performance of the Hamiltonian tomography, we add a random Gaussian error to the parameters, characterized by the standard deviation \(\sigma\). In this case, the parameters \(J_{k,l}\) in Eq. (1) are modified according to \(J_{k,l}\to J_{k,l}^{0}+E_{k,l}\), where \(E_{k,l}\) is a random variable with a Gaussian distribution characterized by \(\sigma\). The idea is to generate the input data for the inverse-PINN considering the error \(E_{k,l}\) and to try to predict the error-free value \(J_{k,l}^{0}\).
In Figure 2, we plot the MAE as a function of the standard deviation \(\sigma\) related to the Gaussian distribution of the errors for both Hamiltonians \(H_{Z}\) (top panel) and \(H_{XYZ}\) (bottom panel) considering only \(N=5\) collocation points. In this case, we notice that the Hamiltonian tomography has at least 5% accuracy if the standard deviation is less than 1%. On the contrary, if the error has a 10% standard deviation, the accuracy measured by MAE is very poor when using only \(N=5\) collocation points. We repeated the same analysis by increasing the number of collocation numbers (results not shown here) and we found that it is possible to improve the Hamiltonian accuracy by increasing the number of collocations number, _e.g._, we can get MAE \(<7\%\) for \(\sigma=10\%\) when \(N=100\). After checking that the inverse-PINN provides a good accuracy for the tomography of \(H_{Z}\) and \(H_{XYZ}\), we repeat the same analysis for the general two qubit Hamiltonian (Eq. 1), whose results are shown in Figure 3. In the top panel of Figure 3, we plot MAE versus collocation points for the input data provided without errors. We can conclude that at least N=10 collocation points are necessary for an accurate prediction of the physical parameters of the general Hamiltonian (Eq. 1). In the bottom panel of Figure 3, we consider N=20 collocation points and we include the random Gaussian error in the input data. In this case, we can see that the MAE is less than 3% for a standard deviation of 1%, which is very similar to the results found for \(H_{Z}\) and \(H_{XYZ}\) Hamiltonians. This result demonstrates that the inverse-PINN can indeed learn the general Hamiltonian with good accuracy considering at least N=20 collocation points, even though errors in the measurements are present. As a final test, we simulate the \(H_{Z}\) the \(H_{XYZ}\) Hamiltonians in the IBM quantum computers. We use these two Hamiltonians because of their easy representation in terms of quantum gates, which do not require any Trotter approximation. In Figure 3, we plot the MAE for both \(H_{Z}\) and \(H_{XYZ}\) Hamiltonians as a function of the number of collocation points. The difference of these results from the previous ones is related to the input data, which was experimentally obtained from the IBM quantum computers. For the \(H_{Z}\) Hamiltonian (top panel of Figure 3), we find that the MAE only achieves values less than 5% for 20 collocation points. On the other hand, the results in the bottom panel of Figure 3 for the \(H_{XYZ}\) Hamiltonian, show that the MAE is already less than 5% on average for only 5 collocation points. We believe that this differ
Figure 3: MAE as a function of the number of collocation points (top panel). MAE as a function of the standard deviation for N=20 collocation points (bottom panel). Both panels are related to the general two qubit Hamiltonian and are plotted in logarithm scale.
Figure 2: MAE for the \(H_{Z}\) Hamiltonian (top panel) and for the \(H_{XYZ}\) Hamiltonian (bottom panel) as a function of the standard deviation for N=5 collocation points.
ence might be due to errors introduced in the realization of the quantum gates in the IBM quantum computer.
## IV Conclusion
We applied a technique called inverse-PINN, which is based on neural networks, to extract the parameters of a two-qubit Hamiltonian. First, we tested two Hamiltonians of interest, one containing only terms in the z-direction and the so-called XYZ model without local terms. We found that the inverse-PINN is able to find the parameters of both Hamiltonians with great accuracy when the input data is provided without errors. When we include errors, by considering a Gaussian dispersion in the parameters that generate the input data, we are still able to estimate the real parameters with 5% accuracy if we use at least 20 collocation points. We also simulated the full two-qubit Hamiltonian, where there are 15 parameters to be learned, and the results for the general case are similar to those found for the particular Hamiltonians. Furthermore, we used the input data obtained from the IBM quantum computers for the \(H_{Z}\) and \(H_{XYZ}\) Hamiltonians as a real-world scenario for our approach. In this case, we also found results with very good accuracy, similar to those obtained with the Gaussian dispersion. We believe that this approach is very powerful and can be used in other important tasks related to the improvement of quantum computation platforms.
###### Acknowledgements.
The authors are grateful for financial support from the Brazilian agencies FAPESP, CNPq and CAPES. LKC thanks the Brazilian agency FAPESP (grant 2019/09624-3) for supporting this research.
|
2310.09561 | Graph Neural Network approaches for single-cell data: A recent overview | Graph Neural Networks (GNN) are reshaping our understanding of biomedicine
and diseases by revealing the deep connections among genes and cells. As both
algorithmic and biomedical technologies have advanced significantly, we're
entering a transformative phase of personalized medicine. While pioneering
tools like Graph Attention Networks (GAT) and Graph Convolutional Neural
Networks (Graph CNN) are advancing graph-based learning, the rise of
single-cell sequencing techniques is reshaping our insights on cellular
diversity and function. Numerous studies have combined GNNs with single-cell
data, showing promising results. In this work, we highlight the GNN
methodologies tailored for single-cell data over the recent years. We outline
the diverse range of graph deep learning architectures that center on GAT
methodologies. Furthermore, we underscore the several objectives of GNN
strategies in single-cell data contexts, ranging from cell-type annotation,
data integration and imputation, gene regulatory network reconstruction,
clustering and many others. This review anticipates a future where GNNs become
central to single-cell analysis efforts, particularly as vast omics datasets
are continuously generated and the interconnectedness of cells and genes
enhances our depth of knowledge in biomedicine. | Konstantinos Lazaros, Dimitris E. Koumadorakis, Panagiotis Vlamos, Aristidis G. Vrahatis | 2023-10-14T11:09:17Z | http://arxiv.org/abs/2310.09561v1 | # Graph Neural Network approaches for single-cell data: A recent overview.
###### Abstract
Graph Neural Networks (GNN) are reshaping our understanding of biomedicine and diseases by revealing the deep connections among genes and cells. As both algorithmic and biomedical technologies have advanced significantly, we're entering a transformative phase of personalized medicine. While pioneering tools like Graph Attention Networks (GAT) and Graph Convolutional Neural Networks (Graph CNN) are advancing graph-based learning, the rise of single-cell sequencing techniques is reshaping our insights on cellular diversity and function. Numerous studies have combined GNNs with single-cell data, showing promising results. In this work, we highlight the GNN methodologies tailored for single-cell data over the recent years. We outline the diverse range of graph deep learning architectures that center on GAT methodologies. Furthermore, we underscore the several objectives of GNN strategies in single-cell data contexts, ranging from cell-type annotation, data integration and imputation, gene regulatory network reconstruction, clustering and many others. This review anticipates a future where GNNs become central to single-cell analysis efforts, particularly as vast omics datasets are continuously generated and the interconnectedness of cells and genes enhances our depth of knowledge in biomedicine.
**Keywords: scRNA-seq, spatial transcriptomics, graph neural networks, graph attention**
## 1 Introduction
Single-cell sequencing is a cutting-edge next-generation sequencing technique that enables intricate analysis of individual cell genomes or transcriptomes, illuminating heterogeneity in cellular populations. Unlike traditional sequencing methods, which present an averaged view of numerous cells and thereby obscure finer distinctions and nuances, single-cell sequencing zeroes in on the unique genomic and transcriptomic expression of each cell. This unparalleled precision exposes the extent and variability of gene expression within specific cell populations, spotlighting the vast behavioral, functional, and structural diversities (collectively termed heterogeneity) originating from varying gene expression patterns [1].
Complementing this, recent innovations in next-generation sequencing and imaging have given rise to the technique of spatial transcriptomics. This powerful method systematically maps gene expression across tissue space, a pivotal advance that has yielded significant insights in fields such as neuroscience, developmental biology, plant biology, and oncology. Notably, clinicians recognize the immense diagnostic value of the spatial organization within tissue, often discerned through histopathology, as many diseases manifest as aberrations in this spatial matrix [2].
Network medicine serves as an advanced, unbiased platform that assimilates multi-omics data from diverse biological levels to elevate the precision of diagnosis and therapy for diseases such as cardiovascular disease [3]. The inherent strength of such multi-omics network medicine strategies is manifested in their capability to pinpoint and dissect the heterogeneity inherent in many compounded biological phenomena. This nuanced understanding not only highlights the intricacies of complex conditions but also steers the direction of customized drug therapies, setting the stage for a new era of precision medicine. In this context, biological systems can be conceptualized as networks of nodes and edges. The nodes represent diverse entities, ranging from genes and proteins to metabolites, while the edges represent their interactions [4]. Given the rapid expansion and heterogeneity of multi-omics data, there has been a corresponding surge in the development of accurate biological networks, as well as reliable tools and methodologies for effective and information-rich analyses.
Graph theory, the mathematical discipline devoted to the study of graphs, has ascended to prominence as an indispensable tool for analyzing intricate systems and their relations [5]. Within this context, a graph is depicted as an ensemble of nodes (or vertices) interconnected by edges, representing the relationships between entities. Such a representational system facilitates the interpretation of complex networks, enabling profound insights into their inherent structures and recurring patterns. The efficacy of machine learning approaches is contingent not only upon the meticulous design of the encompassing algorithms but also on the quality and fidelity of the data representation. Suboptimal representations, either devoid of pivotal details or plagued by superfluous or erroneous information, can undermine an algorithm's efficiency across various tasks. Representation learning endeavors to filter data so as to capture information that is sufficient yet minimal. Recently, network representation learning (NRL) has piqued considerable research interest. The goal of NRL is to derive latent, dimensionally reduced representations of network vertices whilst preserving the network's topological integrity, vertex attributes, and other information. Once these vertex representations are procured, ensuing network analytical tasks can be easily executed using traditional vector-based machine learning frameworks within the latent space.
Deep learning has become a staple within the fields of artificial intelligence and machine learning, demonstrating unparalleled efficacy in domains such as image processing and natural language processing. While graphs can undeniably be applied to myriad real-world scenarios, harnessing deep learning for graph data remains a complex endeavor. This complexity stems from several factors. First, graph structures are inherently non-uniform, in contrast to the regularity of grid-structured data such as images or audio; for instance, establishing operations like convolution and pooling, quintessential to convolutional neural networks (CNNs), for graphs is not a simple task [6]. Second, the multifaceted nature of graphs, encompassing diverse attributes and characteristics, necessitates different model architectures. Third, the sheer magnitude of modern graphs, often encompassing millions or even billions of nodes and edges, prompts a dire need for scalable models. Finally, graphs are increasingly integrated with domains like biology, chemistry, and the social sciences; while this combination offers opportunities by leveraging domain-specific knowledge, it simultaneously increases the complexity of model conception.
In contemporary research, graph neural networks (GNNs) have garnered significant attention. The architectures and strategies employed exhibit vast heterogeneity, spanning from supervised to unsupervised ones, and from convolutional frameworks to recursive ones, inclusive of architectures like graph recurrent neural networks (Graph RNNs) [7], graph convolutional networks (GCNs) [8], graph autoencoders (GAEs) [9], graph reinforcement learning (Graph RL) [10], graph adversarial methodologies [11], as well as graph attention networks (GATs) [12]. To elaborate further, Graph RNNs discern recursive and sequential graph patterns by modeling states at the level of nodes or the graph. GCNs establish convolutional and readout operations on non-uniform graph structures, capturing common local and global patterns. GAEs, on the other hand, presuppose low-rank graph structures and employ unsupervised techniques for node representation. Graph RL contextualizes graph-based actions and feedback, adhering to preset constraints. Graph adversarial methodologies employ adversarial training strategies to augment the generalization potential of graph-based models and to evaluate their resilience against adversarial attacks. Last but not least, graph attention networks (GATs) are in essence an improvement over graph convolutional networks. They are built upon the concept of self-attention, a mechanism popularized by transformer-based models such as BERT and GPT-3 [13]. Through this mechanism, the importance of each node in the graph is learned dynamically instead of being explicitly assigned, as in GCNs. Such attention mechanisms are utilized for a wide variety of tasks, such as natural language understanding and computer vision.
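To make the attention mechanism concrete, the following didactic snippet sketches how a single GAT head scores and normalizes neighbor importance, following the original formulation [12]; it is a generic re-implementation for illustration, not code from any of the tools reviewed below.

```python
# Didactic single-head GAT layer over a dense adjacency matrix, following
# the original GAT formulation; not taken from any reviewed tool.
import torch
import torch.nn.functional as F

class GATHead(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = torch.nn.Linear(in_dim, out_dim, bias=False)
        self.a = torch.nn.Parameter(torch.empty(2 * out_dim))
        torch.nn.init.xavier_uniform_(self.a.view(1, -1))

    def forward(self, x, adj):
        h = self.W(x)                                   # (n, out_dim)
        # e_ij = LeakyReLU(a^T [h_i || h_j]) for every node pair
        e = F.leaky_relu(
            h @ self.a[: h.size(1)].unsqueeze(1)        # (n, 1): a_1^T h_i
            + (h @ self.a[h.size(1):].unsqueeze(1)).T,  # (1, n): a_2^T h_j
            negative_slope=0.2)
        e = e.masked_fill(adj == 0, float("-inf"))      # keep only real edges
        alpha = torch.softmax(e, dim=1)                 # learned neighbor weights
        return alpha @ h                                # attention-weighted sum

x = torch.randn(5, 16)                                  # 5 cells, 16 features
adj = torch.eye(5) + torch.bernoulli(torch.full((5, 5), 0.4))
adj = (adj > 0).float()                                 # toy graph with self-loops
print(GATHead(16, 8)(x, adj).shape)                     # torch.Size([5, 8])
```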
Recent breakthroughs in multimodal single-cell technologies have ushered in an era where simultaneous acquisition of diverse omics data from individual cells is feasible, offering profound insights into cellular states and dynamics. Nonetheless, crafting joint representations from such multimodal datasets, identifying the intricacies between modalities, and most crucially, assimilating the plethora of unimodal datasets for
subsequent analysis, remain formidable challenges. Graph neural network-based frameworks present a promising avenue to tackle these obstacles, thereby streamlining the analysis of multimodal single-cell data. Over the past three years, numerous GNN-based tools have been proposed for various types of single-cell analyses, the latest of which are examined in the ensuing sections. **Fig. 1** offers an overview of the functioning of GNN-based tools in the context of single-cell data analysis.
## 2 Recent GNN-based single-cell-sequencing analysis tools
Over the past four years, an impressive number of GNN-based tools has been proposed for tasks that are staples of single-cell sequencing downstream analyses, such as imputation, clustering and cell annotation/identification, among others. In this section and the following subsections we explore the forefront of GNN-based single-cell data analysis, delving into 39 cutting-edge computational methods that help highlight and discern the intricacies inherent to cellular heterogeneity. These tools have been implemented for various parts of single-cell omics data analysis, including but not limited to imputation, clustering, dimensionality reduction, cell annotation, GRN inference and cell-cell interaction inference. By analyzing these state-of-the-art methods, we can obtain an extensive understanding of the recent progress in graph-based methods for single-cell analysis. Details about the tools discussed in this work are summarized in **Table 1**.

Figure 1: The figure delineates a representative pipeline for single-cell sequencing data analysis via Graph Neural Networks (GNNs). On the left, one observes the typical inputs for the GNN architecture, primarily encompassing a single-cell omics expression matrix and a graph, depicted here as an adjacency matrix. This graph could encapsulate interactions of various kinds, including but not limited to cell-cell or gene-gene interactions. The central upper segment illustrates a generalized GNN framework, purposed for the generation of refined vector representations of the input data. Several GNN architectures, highlighted in the central lower portion, such as graph variational autoencoders, graph attention networks, and graph convolutional networks, can be employed to this end. As articulated on the right, these vector representations, once crafted by the GNN, can be further leveraged for numerous subsequent analytical tasks in single-cell analysis. These tasks include, but are not limited to, cell-cell interaction inference, cell-type annotation, imputation of absent values, Gene Regulatory Network (GRN) inference, cell clustering, and dimensionality reduction.
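As a concrete (and deliberately simplified) illustration of the inputs in **Fig. 1**, the sketch below turns a cell-by-gene expression matrix into the two objects most GNN-based tools consume: a normalized feature matrix and a symmetric KNN cell-cell adjacency. The preprocessing choices and parameters are illustrative; individual tools differ in how they build and denoise this graph.

```python
# Simplified illustration of the Fig. 1 inputs: a cell x gene feature
# matrix plus a symmetric KNN cell-cell graph. Parameters are illustrative;
# individual tools differ in how they construct and denoise this graph.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
expr = rng.poisson(1.0, size=(300, 2000)).astype(float)  # 300 cells x 2000 genes

# Library-size normalization followed by a log transform
expr = np.log1p(expr / expr.sum(axis=1, keepdims=True) * 1e4)

# Measure cell-cell distances in PCA space rather than raw gene space
pcs = PCA(n_components=30).fit_transform(expr)

# Symmetric KNN adjacency: the "graph" input consumed by the GNN
adj = kneighbors_graph(pcs, n_neighbors=10, mode="connectivity")
adj = ((adj + adj.T) > 0).astype(float)

print(expr.shape, adj.shape, adj.nnz)                    # features and graph
```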
### 2.1 Imputation
A major characteristic of scRNA-seq data is their pronounced sparsity, manifested as a heightened fraction of observed "zeros." These zeros, representing instances where no unique molecular identifiers (UMIs) or reads map to a specific gene in a cell, pose significant challenges for accurate downstream analyses. Addressing this sparsity is of paramount importance in order to harness the true potential of scRNA-seq data. In this context, "imputation" has surfaced as a salient approach. Imputation techniques, drawing parallels from the genomics realm where missing or unobserved genotype data is estimated, endeavor to infer the most plausible values in place of these zeros, thereby providing a more comprehensive representation of the cellular transcriptome [14]. However, conventional machine learning imputation strategies have been observed to grapple with the intricacies of scRNA-seq data, particularly its non-linear associations and unique counting architectures. This inadequacy underscores the necessity for more advanced methodologies [15]. Among the myriad of solutions, graph deep learning-based imputation has emerged as particularly promising. Such models, leveraging graph neural networks, are adept at discerning the intricate relationships between cells, enabling them to impute missing data with heightened precision.
In this vein, scGAEGAT [16] is a multi-modal model integrating graph autoencoders and graph attention networks, specifically tailored for scRNA-seq analysis through graph neural networks. Its gene imputation performance is measured using metrics such as cosine similarity, median L1 distance, and root-mean-squared error, whereas its cell clustering capability is evaluated using adjusted mutual information, normalized mutual information, completeness score, and Silhouette coefficient score. Designed on a foundation of multi-modal graph autoencoders (GAEs) and graph attention (GAT) networks, scGAEGAT adeptly models the heterogeneous cell-cell interactions and intricate gene expression patterns inherent in scRNA-seq data. Operating through an encoder-decoder deep learning framework, the model not only facilitates gene imputation and cell clustering but also offers a comprehensive vantage point for probing cellular interactions by discerning associations across the entire cell population.
The architecture of scGAEGAT takes a preprocessed gene expression matrix as input to a feature autoencoder. This component constructs and subsequently refines the cell graph using the acquired embeddings. By infusing a graph attention mechanism into the graph autoencoder, the model takes the constructed cell graph as input and assigns variable weights to individual nodes, enhancing its capacity to delineate cellular relationships and achieve precise cell-type clustering. In this sophisticated setup, each cell type benefits from a dedicated cluster autoencoder to reconstruct gene expression values. These reconstructed values are subsequently looped back as fresh inputs in iterative rounds until convergence is achieved.
Central to scGAEGAT's efficacy is the GAT mechanism, granting the model the ability to allocate differential weights to cells via the GAT coefficient. Throughout its iterative operations, the model is composed of four robust autoencoders: a feature autoencoder addressing gene expression regulation, a graph autoencoder establishing cell-cell interactions, a cluster autoencoder pinpointing cell type, and an imputation autoencoder dedicated to the recovery of the gene expression matrix. By harnessing topological insights between genes, scGAEGAT conveys cell-cell interactions via a low-dimensional embedding derived from multiple autoencoder structure training vectors. This mechanism proves pivotal for cell type aggregation, trajectory-based cell arrangement inference, and the imputation of any absent data to enhance gene correlations.
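For reference, the gene-imputation metrics used to evaluate scGAEGAT (cosine similarity, median L1 distance and root-mean-squared error) reduce to a few lines of NumPy; this is a generic sketch of the standard definitions, not the tool's own evaluation code.

```python
# Generic implementations of the imputation metrics cited above;
# not scGAEGAT's own evaluation code.
import numpy as np

def cosine_similarity(x_true, x_imputed):
    a, b = x_true.ravel(), x_imputed.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def median_l1(x_true, x_imputed):
    return float(np.median(np.abs(x_true - x_imputed)))

def rmse(x_true, x_imputed):
    return float(np.sqrt(np.mean((x_true - x_imputed) ** 2)))
```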
Similarly, envisioned as a dropout imputation method, GNNImpute [17] is an advanced autoencoder network that leverages graph attention convolution to aggregate multi-level cell similarity information, utilizing convolution operations on the non-Euclidean space of scRNA-seq data. Unlike existing imputation methods, this model proficiently imputes dropouts while simultaneously mitigating dropout noise. Its performance is evaluated using metrics like mean square error (MSE), mean absolute error (MAE), Pearson correlation coefficient (PCC), and cosine similarity (CS). After constructing the scRNA-seq data graph, GNNImpute employs a graph attention convolutional layer to discern and select similar adjacent nodes, and then aggregates these analogous neighbors. Nodes within this graph can continuously relay messages along the edges until a stable point emerges. Through this mechanism, GNNImpute embeds cellular expressions from identical tissue regions into low-dimensional vectors via its autoencoder architecture. Not only does it discern co-expression trends among similar cells, it also filters technical sequencing noise during dropout imputation, enhancing subsequent scRNA-seq data analysis.
A salient feature of GNNImpute is its incorporation of an attention mechanism, which enables weight assignment to different cells based on attention coefficients. This approach fosters the establishment of nonlinear relationships between genes by learning low-dimensional embeddings of expression through its autoencoder network. In contrast to tools like DCA, GNNImpute possesses the capability to discern gene co-expression patterns among similar cells by aggregating information from multi-level neighbors. The structure of GNNImpute comprises both an encoder and a decoder. The encoder encompasses two graph attention convolutional layers, focusing on the transmission of neighboring node information, whereas the decoder is composed of two linear layers. GNNImpute utilizes a masked expression matrix as its model input, and its output serves as the basis for computing the loss value against which the model parameters are optimized.
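A minimal sketch of the encoder-decoder shape described above, written with PyTorch Geometric's GATConv, is given below; the layer sizes, head counts and masking scheme are illustrative choices rather than GNNImpute's exact configuration.

```python
# Minimal sketch of a GNNImpute-style architecture (two graph attention
# layers as encoder, two linear layers as decoder) in PyTorch Geometric;
# layer sizes and the masking scheme are illustrative choices.
import torch
from torch_geometric.nn import GATConv

class GATImputer(torch.nn.Module):
    def __init__(self, n_genes, hidden=256, latent=64, heads=4):
        super().__init__()
        self.enc1 = GATConv(n_genes, hidden, heads=heads, concat=False)
        self.enc2 = GATConv(hidden, latent, heads=heads, concat=False)
        self.dec = torch.nn.Sequential(
            torch.nn.Linear(latent, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, n_genes))

    def forward(self, x, edge_index):
        h = torch.relu(self.enc1(x, edge_index))
        z = torch.relu(self.enc2(h, edge_index))
        return self.dec(z)

# Training idea: mask some observed entries, reconstruct the matrix, and
# score the reconstruction loss only on the masked positions.
def masked_mse(pred, target, mask):
    return ((pred - target)[mask] ** 2).mean()
```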
Moreover, ScDGAE [18] is a sophisticated directed graph neural network designed for scRNA-seq analysis, integrating both graph autoencoders and a graph attention network. Distinguished from traditional models, directed graph neural networks not only uphold the intrinsic connection attributes of the directed graph but also enhance the receptive field of convolution operations. To gauge its proficiency in gene imputation, scDGAE is evaluated using metrics such as cosine similarity, median L1
distance, and root-mean-squared error. Meanwhile, its cell clustering capabilities are assessed with tools like adjusted mutual information, normalized mutual information, completeness score, and Silhouette coefficient score.
scDGAE adeptly models heterogeneous cell-cell associations and the gene expression patterns inherent in scRNA-seq data. Beyond merely facilitating gene imputation and cell clustering, scDGAE presents an encoder-decoder deep learning architecture optimized for scRNA-Seq analysis. This innovative framework offers a comprehensive view, capturing the intricate relationships spanning the entire cell population.
The scGNN 2.0 [19] model is composed of a triad of stacked autoencoders, preceded by a program initialization. Within this phase, the data undergoes preprocessing, and a left-truncated mixture Gaussian (LTMG) model is applied. This statistical model adeptly discerns regulatory signals for individual genes, which subsequently serve as regularization criteria within the feature autoencoders. Contrary to the approach in scGNN 1.0, where imputation is contingent on a distinct imputation autoencoder activated post-iteratively, this supplementary autoencoder has been avoided in the 2.0 version, retaining the three autoencoders. Within the graph autoencoder phase, a multi-head graph attention mechanism is harnessed to calculate a graph embedding.
Upon conclusion of the final iteration, scGNN 2.0 creates: (i) an imputed gene expression matrix in csv format, structured with cells as rows and genes as columns; (ii) a graph embedding matrix in csv format, arranged with cells as rows and the graph embedding dimensions as columns; (iii) a cell graph in csv format as an edge list, with three columns constituting the starting node, the terminal node, and the edge weight; as well as (iv) a list in csv format, containing cell cluster labels, with the first column detailing cell names and the subsequent column indicating cell labels.
Another GNN-based tool for single-cell data imputation is GE-Impute [20], which has been designed to address the dropout zeros in scRNA-seq data through a graph embedding-based neural network model. By acquiring a neural graph representation for every cell, GE-Impute effectively reconstructs the cell-cell similarity network. This reconstruction subsequently offers enhanced imputation of dropout zeros, drawing from more precisely designated neighbors within the similarity network. When analyzing the correlation in gene expression between baseline expression data and its simulated dropout counterpart, GE-Impute demonstrates notably superior efficacy in retrieving dropout zeros for both droplet- and plate-based scRNA-seq datasets. GE-Impute commences with the construction of a cell-cell similarity network predicated on Euclidean distance metrics. For each cell, a random walk of a specified length is simulated utilizing both BFS (breadth-first search) and DFS (depth-first search) strategies. Subsequent to this, a graph embedding-based neural network model refines the embedding matrix corresponding to each individual cell, predicated on the sampled walks. Leveraging the newly trained embedding matrix, the similarity metrics among cells are recalculated, facilitating the prediction of novel link-neighbors and the reconstruction of the cell-cell similarity network. Finally, GE-Impute carries out the imputation of dropout zeros for each cell by computing the mean expression value of its associated neighbors in the reconstructed similarity network.
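GE-Impute's final step can be illustrated with the simplified sketch below, in which each dropout zero is replaced by the mean expression of the cell's neighbors in the similarity network; the embedding-learning stage (random walks plus a neural embedding model) that refines this network is omitted here for brevity.

```python
# Simplified sketch of GE-Impute's last step: impute each dropout zero with
# the mean expression of the cell's neighbors in the similarity network.
# The graph-embedding stage that refines this network is omitted here.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def impute_by_neighbors(expr, n_neighbors=10):
    """expr: cells x genes matrix; zeros are treated as dropouts."""
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(expr)
    _, idx = nn.kneighbors(expr)           # idx[:, 0] is the cell itself
    imputed = expr.copy()
    for cell in range(expr.shape[0]):
        neigh_mean = expr[idx[cell, 1:]].mean(axis=0)
        zeros = expr[cell] == 0
        imputed[cell, zeros] = neigh_mean[zeros]
    return imputed
```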
**Table 1: Recent GNN approaches for single-cell data**

| Name | Graph | Input | Architecture | Aim | Ref |
| --- | --- | --- | --- | --- | --- |
| GENELink | TF-gene | scRNA-seq + adjacency matrix | GAT | GRN | [21] |
| CellVGAE | Cell-cell | scRNA-seq | VGAE + GAT | Dimensionality reduction | [22] |
| scGAEGAT | Cell-cell | scRNA-seq | GAE + GAT | Imputation | [16] |
| GNNImpute | Cell-cell | scRNA-seq | GATC | Imputation | [17] |
| omicsGAT | Cell-cell | scRNA-seq | GAT | Disease prediction | [23] |
| scGAC | Cell-cell | scRNA-seq | GAT | Clustering | [24] |
| SpaDAC | Cell-cell | Spatial | GAT | Clustering | [25] |
| STAGATE | Cell-cell | Spatial | GATE | Spatial domain detection | [26] |
| scDGAE | Cell-cell | scRNA-seq | GAT | Imputation | [18] |
| SCEA | Cell-cell | scRNA-seq | GATE | Cell annotation | [27] |
| scGNN | Cell-cell | scRNA-seq | GNN | Imputation | [19] |
| scGAE | Cell-cell | scRNA-seq | GAE | Dimensionality reduction | [28] |
| scASGC | Cell-cell | scRNA-seq | GCN + GAT | Clustering | [29] |
| SCDRHA | Cell-cell | scRNA-seq | DCA + GAT | Dimensionality reduction | [30] |
| GraphComm | Cell-cell | scRNA-seq + LR data | GAT | Cell-cell interactions | [31] |
| HNNVAT | Cell-cell | scRNA-seq | GCN | Cell annotation | [32] |
| cellograph | Cell-cell | scRNA-seq | GCN | DEGs + visualization | [33] |
| DeepMAPS | Cell-gene | scMulti-omics | HGT | GRN | [34] |
| Stitch3D | Spot-spot | scRNA-seq + spatial | Encoder + GAT | 3D structure from 2D slices | [35] |
| SpaCI | Cell-cell | Spatial | GAT | Cell-cell interactions | [36] |
| GE-Impute | Cell-cell | scRNA-seq | GNN | Imputation | [20] |
| PIKE-R2P | PPI | scRNA-seq | GNN | PPI inference | [37] |
| GLAE | Cell-cell | scRNA-seq | GNN | Clustering | [38] |
| scTAG | Cell-cell | scRNA-seq | GCN | Clustering | [39] |
| scDeepSort | Cell-gene | scRNA-seq | GNN | Cell annotation | [40] |
| scGPCL | Cell-gene | scRNA-seq | GNN | Clustering | [41] |
| scFEA | Metabolomics | scRNA-seq | GNN | Flux estimation | [42] |
| scGMM-VGAE | Cell-cell | scRNA-seq | GMM + VGAE | Clustering | [43] |
| scAGN | Cell-cell | scRNA-seq | AGN | Cell annotation | [44] |
| scMRA | Cell type | scRNA-seq | GCN | Cell annotation | [45] |
| scGraph | Gene-gene | scRNA-seq + GIN | GNN | Cell annotation | [46] |
| GLUE | Omics | scMulti-omics | VGAE | Integration | [47] |
| DeepTFni | TF-gene | scATAC-seq | GNN | GRN | [48] |
| scGCN | Cell-cell | scRNA-seq | GCN | Label transfer | [49] |
| scHCA | Cell-cell | scRNA-seq + epigenomics | GNN | Cell-cell interactions | [50] |
| GrID-Net | Cell-cell | scMulti-omics | GNN | Locus-gene associations | [51] |
| GCNG | Cell-cell | Spatial | GCN | LR pair identification | [52] |
| StdGCN | Spot-spot | scRNA-seq + spatial | GCN | Cell type deconvolution | [53] |
| SCAN-IT | Cell-cell | Spatial | GNN | Segmentation | [54] |
### 2.2 Clustering
Single-cell RNA-sequencing (scRNA-seq) has become instrumental in unraveling cellular heterogeneity, necessitating robust analytical approaches. Central to this analytical arsenal is unsupervised clustering, a pivotal method deployed to discern and classify putative cell types from the plethora of information embedded in scRNA-seq datasets. Despite the notable advancements in clustering algorithms witnessed in recent times, the domain remains riddled with ambiguities. Specifically, the scientific community grapples with the conundrum of delineating the most efficacious approach and establishing definitive criteria for cell type definition based on scRNA-seq readings [55]. Compounding these challenges is the staggering evolution of sequencing technologies, leading to an exponential growth in the volume of single-cell sequencing data. Classical clustering methodologies are increasingly proving inept at managing large-scale data, struggling to maintain efficiency and grappling with the complexities inherent to single-cell datasets [18]. In light of these challenges, graph neural network-based tools emerge as a beacon of promise. By exploiting the inherent network structures within cellular data, these models are adept at navigating the vast and intricate scRNA-seq landscapes.
Based on the above, scGAC (Single-cell Graph Attentional Clustering) [24] emerges as an unsupervised clustering model devised specifically for scRNA-seq data analysis. The methodology's initial steps involve creating a cell graph that is subsequently refined and rendered more accurate via a network denoising process. Building on this refined graph, scGAC employs a graph attentional autoencoder to capture nuanced representations of cells that are well suited for clustering. This phase facilitates information flow across cells, each given variable weightings, effectively discerning latent intercellular relationships. To bring the process full circle, scGAC utilizes a self-optimizing strategy to acquire cell clusters. Diving deeper into the model, scGAC's first step encompasses the creation of a cell graph wherein nodes represent individual cells and edges carry weightings determined through the Pearson correlation coefficient. To enhance the integrity of this initial graph, any superfluous or inconsistent edges are meticulously pruned through network denoising. With this streamlined graph in place, scGAC harnesses a graph attentional autoencoder to learn latent cellular representations, seamlessly integrating insights from gene expression profiles and the intricate intercellular connections. The attention mechanism enables scGAC to distribute distinct weights to cellular neighbors as information cascades through the cellular network. In the culmination of this process, scGAC seamlessly blends representation learning with clustering optimization. Through the self-optimizing mechanism, the system evaluates similarities between cells and their respective clustering centroids using a membership matrix \(Q_{c\times n}\). Memberships are subsequently adjusted via an optimized target matrix \(P_{c\times n}\), and \(Q_{c\times n}\) is then employed to recalibrate the clustering centroids and deliver the final, refined clustering outcome.
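The self-optimizing strategy described above is commonly implemented in the style of deep embedded clustering: soft assignments \(Q\) come from a Student-t kernel between embeddings and centroids, a sharpened target distribution \(P\) is derived from \(Q\), and a KL-divergence loss refines both. The sketch below is a generic version of this mechanism, not scGAC's exact code.

```python
# Generic DEC-style self-optimizing clustering step (soft assignment Q,
# target distribution P, KL loss), as used by several tools including
# scGAC; a didactic version, not scGAC's exact implementation.
import torch

def soft_assign(z, centroids, alpha=1.0):
    """Student-t kernel similarity between embeddings and centroids -> Q."""
    dist2 = torch.cdist(z, centroids) ** 2
    q = (1.0 + dist2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def target_distribution(q):
    """Sharpen Q into P, emphasizing high-confidence assignments."""
    weight = q ** 2 / q.sum(dim=0)
    return weight / weight.sum(dim=1, keepdim=True)

def clustering_loss(q):
    p = target_distribution(q).detach()
    return torch.nn.functional.kl_div(q.log(), p, reduction="batchmean")
```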
Another GNN-based model proposed for single-cell data clustering is SpaDAC (SPAtially embedded Deep Attentional graph Clustering) [25]. It has been introduced as a revolutionary technique to harness multi-modal data, pinpointing spatial domains and concurrently reconstructing denoised gene expression profiles. At its core, SpaDAC is adept at learning low-dimensional embeddings from spatial transcriptomics data. It achieves this by creating multi-view graph modules, meticulously designed to encapsulate both spatial location interconnections and morphological ties. Rather than relying on singular data sources, SpaDAC
synthesizes joint representations, melding complementary multimodal insights. Central to its functionality, the method incorporates an attention mechanism, expertly capturing the similarities among neighboring spatial spots. Moreover, to further refine and bolster the compact nature of these latent representations, SpaDAC incorporates an iterative self-optimizing clustering module, ensuring heightened precision in the clustering outcomes.
In the same vein, scASGC [29] has been proposed as an unsupervised clustering technique based on an adaptive, simplified graph convolution model. This approach constructs credible cell graphs, aggregates information from neighboring nodes using the simplified graph convolution model, and adaptively ascertains the optimal number of convolution layers for different graphs. Initially, the similarity matrix of cells is computed and, leveraging Network Enhancement (NE), extraneous noise within the similarity matrix is filtered out to establish a trustworthy undirected cell graph. To bolster both the efficiency and precision of the model, a simplified graph convolution model is employed, integrating the principles of the multi-head attention mechanism. This fosters a heightened discernment of intricate cell-cell interactions and a richer comprehension of latent cell attributes. Subsequently, a dynamic approach is harnessed to determine the ideal number of convolution layers for varying graphs, followed by the application of spectral clustering.
In addition, GLAE [38] is an advanced graph-based autoencoder tailored for scRNA-seq analysis, distinguished by its ability to extract cell relation graphs from a multitude of perspectives and leverage them for cell clustering. In contrast to traditional GNN-based approaches that rely on static graph inputs, GLAE emphasizes dynamic learning, allowing cell relation graphs to evolve and be refined with every iteration; this offers a more robust framework, as noise is filtered during the graph learning process. A hallmark of GLAE is its unique modular structure, built to understand cell relation graphs from varied angles in parallel. To address specific challenges, cell relation graphs are reconstituted at each stage based on the gene expression data, facilitating an adaptive update of the graphs throughout training. Furthermore, the system employs several parallel sub-modules to examine discrete gene subsets and produce cell relation graphs from diverse viewpoints, while the intricate relationship network between gene subsets is discerned through cell embeddings originating from these sub-modules. Essentially, GLAE's design focuses on the generation and application of cell relation graphs for optimal cell clustering and subsequent analyses.
GLAE facilitates the extraction of M cell relationship graphs through M individual sub-modules. Within the confines of the gene subset relationship network learning module, a specific network corresponding to gene subsets is derived, subsequently guiding the aggregation of cell relationship graphs and cell embeddings. Each sub-module encapsulates three pivotal elements: (1) gene subset learning, (2) cell relationship graph construction, and (3) feature encoding and decoding phases. Each sub-module crafts a relationship graph of cells, which integrates chosen features and acquires data from its immediate predecessor. Cell samples, denoted as X, undergo processing in M sub-modules, leading to the construction of M cell relationship graphs, each providing a unique viewpoint. These M graphs, along with the cell embeddings, are channeled to the gene subset relationship network learning module. Herein, a particular gene subset network emerges and subsequently participates in the aggregation module, creating cell relationship graphs and cell embeddings. In essence, GLAE is partitioned into four distinct sectors: (1) cell relationship graph extraction from multiple
aspects, (2) crafting the gene subset relationship network, (3) aggregating cell relationship graphs and cell embeddings, and (4) clustering of cells.
scTAG [39] is another pioneering method that blends the functionalities of a deep graph convolutional network to learn cell-cell topological representations and pinpoint cell clusters in a cohesive process. It seamlessly integrates the zero-inflated negative binomial (ZINB) model within a topology adaptive graph convolutional autoencoder, achieving a compact latent representation, and leverages the Kullback-Leibler (KL) divergence to optimize clustering. An innovative aspect of scTAG lies in its simultaneous optimization of the clustering loss, the ZINB loss, and the cell graph reconstruction loss, enabling the joint optimization of cluster labeling and feature learning while preserving the intricate topological structures. In essence, through the construction of a cell graph, scTAG is adept at maintaining both the intrinsic topological information and the intricate cell-cell relations inherent in scRNA-seq datasets.
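The ZINB component that scTAG optimizes can be written down compactly. Below is a generic negative log-likelihood for a ZINB distribution with mean \(\mu\), dispersion \(\theta\) and dropout probability \(\pi\), in the style popularized by ZINB autoencoders; it illustrates the loss term rather than reproducing scTAG's implementation.

```python
# Generic ZINB negative log-likelihood (mean mu, dispersion theta, dropout
# probability pi), as optimized by scTAG-style autoencoders; illustrative.
import torch

def zinb_nll(x, mu, theta, pi, eps=1e-8):
    # Negative binomial log-probability of the observed counts
    log_nb = (torch.lgamma(x + theta) - torch.lgamma(theta)
              - torch.lgamma(x + 1.0)
              + theta * torch.log(theta / (theta + mu) + eps)
              + x * torch.log(mu / (theta + mu) + eps))
    # Zero-inflated mixture: zeros may come from dropout or from the NB
    nb_zero = (theta / (theta + mu)) ** theta
    log_zero = torch.log(pi + (1.0 - pi) * nb_zero + eps)
    log_nonzero = torch.log(1.0 - pi + eps) + log_nb
    return -torch.where(x < 0.5, log_zero, log_nonzero).mean()
```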
Moreover, scGPCL [41] is introduced as a prototypical contrastive learning method underpinned by graph techniques for scRNA-seq data clustering. This approach primarily harnesses graph neural networks to encode cell representations in a cell-gene graph, preserving the relational information intrinsic to scRNA-seq data. To further refine cell representations, scGPCL employs prototypical contrastive learning, a strategy designed to semantically separate dissimilar pairs while drawing similar ones closer.
This technique diverges from traditional practices in the domain, where cells are interconnected in a cell-cell graph based on pre-established cell similarities; instead, scGPCL creates a bipartite cell-gene graph, marking connections between cells and genes when specific genes are expressed in designated cells, as evidenced by the provided input gene expression matrix.
scGMM-VGAE [43] makes use of a Gaussian mixture model-based variational graph autoencoder specifically for scRNA-seq data, achieving enhanced cell clustering. By integrating a statistical clustering model with a deep learning algorithm, the system employs a variational graph autoencoder (VGAE) to process both a cell-cell graph adjacency matrix and a gene feature matrix. This produces latent data which is subsequently clustered by the Gaussian mixture model (GMM) module. A unique loss function, incorporating parameter estimates from the GMM and the VGAE, is instrumental in the optimization stage. In the field of high-dimensional scRNA-seq data, the scGMM-VGAE framework specializes in cell clustering. The GMM module takes on the task of clustering cell types from the VGAE-encoded latent data, and this interaction ensures that the GMM parameters significantly influence the VGAE's learning process. Operating on the VGAE's low-dimensional representation, the probabilistic GMM models its distribution. As a result, latent features that vividly capture the intrinsic characteristics of scRNA-seq data become the cornerstone of the cell clustering process.
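In simplified form, the interplay described above can be emulated by fitting a Gaussian mixture to precomputed VGAE embeddings, as sketched below with scikit-learn; the actual scGMM-VGAE couples the GMM parameters and the VGAE through a joint loss, which this stand-in deliberately omits.

```python
# Simplified stand-in for scGMM-VGAE's clustering stage: fit a Gaussian
# mixture to precomputed VGAE latent embeddings. The real model couples the
# GMM parameters and the VGAE through a joint loss, omitted here.
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_latent(z, n_clusters):
    """z: cells x latent_dim embedding matrix from a (V)GAE encoder."""
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full",
                          random_state=0)
    labels = gmm.fit_predict(z)
    return labels, gmm.predict_proba(z)  # hard labels + soft responsibilities

z = np.random.randn(500, 16)             # placeholder embeddings
labels, resp = cluster_latent(z, n_clusters=8)
```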
### 2.3 Dimensionality reduction
Given the intrinsic nature of single-cell sequencing as a high-throughput technology, it inherently generates datasets characterized by a vast number of dimensions pertaining to both cells and genes. This multifaceted complexity leads to a phenomenon known as the 'curse of dimensionality,' which is pervasive in single-cell RNA sequencing (scRNA-seq) data. The manifestation of this challenge is further complicated by the fact that not all genes are equally informative or relevant in the context of cell clustering based on expression profiles.
While efforts have already been directed towards the mitigation of this issue through feature selection, it represents merely a preliminary step. The next phase involves the utilization of specialized dimensionality reduction algorithms. Serving as a crucial component in the pre-processing stage, these algorithms function to reduce data complexity, thereby facilitating subsequent visualization and analysis [56].
Traditional, non-machine-learning-based tools, such as t-SNE [57] and UMAP [58], are commonly employed in the field, possibly owing to their relatively straightforward implementation and interpretability. However, these conventional algorithms often encounter limitations when tasked with deciphering interpretable dimensionality reduction from a high-dimensional gene space that is dense and complex. In an effort to circumvent these constraints, deep-learning-based dimensionality reduction techniques have emerged in recent years [59], [60]. Despite this advancement, these deep learning methodologies have typically focused solely on embedding unique cellular features, neglecting to consider the intricate relationships between individual cells. Such an oversight restricts their capacity to unravel the complex topological structure inherent among cellular entities, consequently hindering the analysis of developmental trajectories. GNN-based models offer a promising alternative, demonstrating substantial potential by preserving long-distance relationships within the latent space. This sophisticated approach leverages interconnected network structures, thereby offering a more nuanced understanding of data, and possibly providing insights into previously obscure cellular relationships [28].
In this vein, CellVGAE [22] is a model developed to harness the capabilities of graph neural networks for the unsupervised exploration of scRNA-seq data. Constructed upon the foundation of a variational graph autoencoder (VGAE) architecture enhanced with graph attention layers (GAT), this model is distinctively designed to function on the direct connectivity among cells, placing a significant emphasis on dimensionality reduction and clustering.
Unlike traditional neural networks that primarily source information from gene expression values, CellVGAE capitalizes on cell connectivity, depicted as a graph, as its inductive bias. This facilitates the execution of convolutions on non-Euclidean structures, aligning seamlessly with the geometric deep learning paradigm. For operational efficiency, CellVGAE incorporates k-nearest neighbor (KNN) alongside Pearson correlation graphs, commonly referred to as PKNN, recognized for their proficient performance and extensive application within the field.
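The PKNN idea can be sketched as follows: each cell is connected to its k most Pearson-correlated peers, and the resulting edge list feeds the graph layers; k and the preprocessing are illustrative choices in this generic reconstruction.

```python
# Generic reconstruction of a PKNN graph: connect each cell to its k most
# Pearson-correlated cells. k and any preprocessing are illustrative.
import numpy as np

def pknn_edges(expr, k=10):
    """expr: cells x genes matrix -> list of directed (i, j) edges."""
    corr = np.corrcoef(expr)            # cell-by-cell Pearson correlation
    np.fill_diagonal(corr, -np.inf)     # exclude self-edges
    neighbors = np.argsort(-corr, axis=1)[:, :k]
    return [(i, int(j)) for i in range(expr.shape[0]) for j in neighbors[i]]
```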
Similarly, scGAE (Single-Cell Graph Autoencoder) [28] has been introduced as a dimensionality reduction technique adept at retaining the topological structure inherent in scRNA-seq data. scGAE creates a cell graph and employs a multitask-centric graph autoencoder, ensuring simultaneous preservation of both topological structure details and feature information of scRNA-seq data. This methodology has been further adapted to cater to visualization, clustering, and trajectory inference in scRNA-seq data.
The model combines the strengths of both deep autoencoders and graph-based models, facilitating the embedding of the intricate topological framework of high-dimensional scRNA-seq data into a more low-dimensional latent space. Subsequent to procuring the normalized count matrix, scGAE formulates an adjacency matrix amid cells utilizing the K-nearest-neighbor algorithm. Through graph attentional layers, the encoder transitions this count matrix into a low-dimensional latent space. scGAE then interprets the embedded data via
both a feature decoder and a graph decoder. While the feature decoder's objective is the reconstruction of the count matrix to maintain feature information, the graph decoder aims to recover the adjacency matrix, ensuring retention of topological structure details. The embedded data is subsequently decoded to the same dimensions as the original dataset, targeting a minimized distance between the input and the reconstructed data. For the dual objectives of learning the data embedding and cluster designation, deep clustering is employed.
SCDRHA [30] is structured around two key components: a deep count autoencoder (DCA) dedicated to data denoising, and a graph autoencoder designated for projecting data into a low-dimensional latent space. The graph autoencoder, constructed on the graph attention network (GAT) framework, aims to project the data into a latent representation while preserving cellular topological structures. Given that the graph autoencoder receives single-cell graphs as input, consisting of node matrices and an adjacency matrix, the adjacency matrix, crafted via the K-nearest-neighbor (KNN) algorithm, holds significant implications.

However, due to the pronounced sparsity of scRNA-seq data, the KNN algorithm can distort the adjacency matrix. To address the challenges that dropout events pose for the KNN algorithm's output, SCDRHA leverages the robust denoising capabilities of DCA to counteract the zero inflation resulting from such dropouts. As both the original and the DCA-reconstructed data share the same dimensionality, an initial dimensionality reduction of the latter is executed using principal component analysis (PCA). Employing the latent space delineated by PCA, the graph autoencoder operates to further reduce dimensions, creating a low-dimensional embedding conducive to both visualization and clustering. The primary model, DCA, is tailored to counteract dropout events and is trained via a ZINB model-based autoencoder. The subsequent model, a GAT-based graph autoencoder, serves to map the DCA-denoised data into a low-dimensional latent space. The process initiates by normalizing raw data, followed by data denoising using DCA. It culminates in the integration of the PCA-compressed matrix and the adjacency matrix, fed into the GAT-based graph autoencoder, yielding a low-dimensional embedding.
### 2.4 Cell annotation
Cell annotation refers to advanced computational methodologies that facilitate the efficient labeling of individual cells or cell clusters through the integration of algorithmic processes and existing biological knowledge. By identifying specific gene expression signals or signatures within a cell or cluster that correspond to recognized patterns of known cell types or states, the targeted cell or cluster can be labeled accordingly. This technique is part of the broader framework of cell type annotation, a process that assigns identities to cells or clusters based on their gene expression profiles. The intricate resolution of cellular heterogeneity across diverse tissues, developmental stages, and organisms achieved through this approach enhances our understanding of cellular dynamics and gene functionality. This comprehensive perspective not only elucidates fundamental biological processes but also offers critical insights into the underlying mechanisms governing both healthy physiological states and pathological conditions [61]. Despite continuous advancements, the effectiveness of existing cell identification/annotation methods remains constrained, a limitation that can be attributed in part to the incomplete utilization of higher-order cellular characteristics. In
response to these inherent challenges, research is progressively turning to deep learning models, including Graph Neural Networks (GNNs). These advanced computational frameworks are uniquely capable of evaluating both low-order and high-order data features, synthesizing them through specialized network topologies.
In this vein, SCEA [27] represents a novel clustering methodology tailored for scRNA-seq data. It leverages two distinct components for dimensionality reduction, coupled with a self-optimizing mechanism for cell annotation: an initial multi-layer perceptron (MLP)-based encoder followed by a graph attention autoencoder (GAT-based). By integrating these two sequential units, SCEA is able to obtain accurate cell and gene embeddings.
The SCEA algorithm encompasses several stages: a) preprocessing of input data, b) graph creation and denoising, c) dimensionality reduction, and d) data clustering using the K-means algorithm. A graph is initially constructed utilizing the Pearson's correlation coefficient method. Subsequent to this, graph pruning is facilitated using the Network Enhancement (NE) method. This method deploys a doubly stochastic matrix to identify and eliminate noisy edges. Notably, a matrix is deemed doubly stochastic only when all its entries are non-negative, and the summation of elements in each row and column equals one. Among non-negative matrices, stochastic and doubly stochastic matrices are distinguished by several unique properties. Dimension reduction is then achieved in two stages: the initial phase employs an encoder based on the MLP architecture, and subsequently, a graph attention autoencoder utilizes a cell graph to further reduce the dimensionality of the encoder's output. By capitalizing on the denoised cell graph that captures cell connectivity information, the graph attention autoencoder discerns cellular relationships, enhancing the overall clustering outcome.
In addition, HNNVAT [32] presents an advanced adversarial dense graph convolutional network architecture tailored for single-cell data classification. Central to its design, the architecture takes advantage of the integration of a dense connectivity mechanism and attention-based feature aggregation to augment the representation of sophisticated higher-order features, while fostering a seamless combination amongst them for feature learning in convolutional neural networks. A distinctive feature reconstruction module is incorporated to ensure fidelity to the original data's features, fortifying the primary classification objective. Additionally, HNNVAT harnesses the potential of virtual adversarial training, amplifying its generalization capabilities and robustness. The processed single-cell data is channeled into a composite structure merging a fully connected network and a graph convolutional neural network. The incorporation of dense connectivity and attention-based aggregation is pivotal for enhancing higher-order feature representation, and an encoder-decoder based feature reconstruction module is seamlessly integrated to uphold the intrinsic features of the original dataset.
HNNVAT commences with data preprocessing, composed of two parts: filtering cells and atypical genes, and feature selection. Leveraging the processed data, an adjacency matrix is crafted, containing the top 1,000 genes for each respective dataset. Non-diagonal elements embody interactions among distinct genes, sourced from the STRING database. This process culminates in a weighted adjacency matrix, underpinned by the probabilities of gene interactions. When geared towards single-cell classification, the convolutional neural network's input is a matrix with gene expression values. Architecturally, HNNVAT is composed of four
key modules: a hybrid neural network, an attention-based convolutional hierarchical aggregation, local feature reconstruction, and virtual adversarial training. The model's initiation involves inputting both the gene expression matrix and the gene adjacency matrix into the four-layer graph convolutional neural network, so that node feature extraction can be performed. Recognizing the inherent risk of model overfitting from the extracted high-order features, direct connections are strategically positioned between hidden layer neurons and each subsequent convolution layer. This ensures that insights gleaned from nodes in ensuing layers are harnessed to further refine the model.
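The virtual adversarial training module can be summarized by the standard VAT recipe below: find a small input perturbation that maximally changes the model's predictive distribution (via one power-iteration step) and penalize the divergence it causes. This is the generic procedure; HNNVAT's exact variant may differ in details.

```python
# Generic virtual adversarial training (VAT) loss with one power-iteration
# step; HNNVAT's exact variant may differ in details.
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=1.0):
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)              # current predictions
    d = torch.randn_like(x)
    d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x + d), dim=1), p,
                  reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]            # direction of largest change
    r_adv = eps * F.normalize(grad.flatten(1), dim=1).view_as(x)
    kl_adv = F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p,
                      reduction="batchmean")
    return kl_adv                                   # add to the supervised loss
```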
Another GNN-based cell annotation tool is scDeepSort [40], a cutting-edge cell-type annotation tool tailored for single-cell transcriptomics and powered by a deep learning model based on a weighted graph neural network (GNN). Its efficacy has been highlighted through rigorous testing on both human and mouse scRNA-seq datasets, confidently annotating a massive collection of 764,741 cells spanning 56 human and 32 mouse tissues. Driven by the innate graph structure inherent to scRNA-seq data, where cells express genes, scDeepSort leverages this dynamic, basing its training on comprehensive single-cell transcriptomic atlases: the human cell landscape (HCL) and the mouse cell atlas (MCA). Each cell's unique graph network, which encompasses its genes and neighboring cells, plays a pivotal role in the supervised learning of scDeepSort, which uses known cell labels extracted from these atlases for every tissue.
scDeepSort comprises three pivotal components: an embedding layer that retains the graph node representations (remaining frozen during training), a weighted graph aggregator layer adept at learning graph structural details and creating a linear feature space for cells (with a modified GraphSAGE framework acting as the backbone GNN), and, last but not least, a linear classifier layer tasked with categorizing the final cell state representation into one of the established cell type classes.
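Structurally, these three components translate into something like the following PyTorch Geometric skeleton (frozen embeddings, a GraphSAGE aggregator, a linear classifier head); scDeepSort's actual weighted-graph variant adds edge weighting and other details omitted here.

```python
# Skeleton of a scDeepSort-like classifier in PyTorch Geometric: frozen node
# embeddings, a GraphSAGE aggregator, and a linear classifier head. The real
# tool uses a weighted GraphSAGE variant with further details omitted here.
import torch
from torch_geometric.nn import SAGEConv

class CellTypeClassifier(torch.nn.Module):
    def __init__(self, n_nodes, emb_dim, hidden, n_cell_types):
        super().__init__()
        self.emb = torch.nn.Embedding(n_nodes, emb_dim)
        self.emb.weight.requires_grad = False      # frozen, as in scDeepSort
        self.sage = SAGEConv(emb_dim, hidden)
        self.cls = torch.nn.Linear(hidden, n_cell_types)

    def forward(self, node_ids, edge_index):
        h = self.emb(node_ids)                     # look up node representations
        h = torch.relu(self.sage(h, edge_index))   # aggregate neighborhood info
        return self.cls(h)                         # cell-type logits per node
```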
scAGN [44] employs an attention-based graph neural network, specifically designed for detecting cell types within scRNA-seq datasets through label-propagation. This approach leverages transductive learning, a unique technique where both training and testing datasets are involved during the learning process. By examining patterns across the combined datasets, the model identifies patterns that are then utilized to predict labels for the unlabeled test data points.
The attention-based graph neural network stands out by eliminating intermediate fully-connected layers; instead, propagation layers are replaced by attention mechanisms, ensuring the integrity of the graph structure is maintained. This sophisticated attention system enables an adaptive, dynamic summarization of local neighborhoods, paving the way for more precise predictions. When applying the scAGN technique, single-cell omics data are inputted post batch-correction, utilizing canonical correlation analysis and mutual nearest neighbors (CCA-MNN). Through the power of transductive learning, scAGN can derive cell labels for query datasets using reference datasets with pre-established or expert-annotated labels.
In addition, scMRA [45] introduces a knowledge graph that embodies the characteristic traits of cell types across multiple datasets. This graph is then paired with a graph convolutional network, which acts as a distinguishing mechanism. The aim of scMRA is twofold: to preserve the inherent closeness within cell types and to maintain the spatial positioning of these cell types throughout various datasets. By combining multiple datasets, a meta-dataset emerges, forming the basis for annotation, and, as an added bonus, scMRA has the prowess to account for and eliminate batch effects.
scMRA is capable of harnessing insights from several well-documented datasets and applying this information to unlabelled target data. The training procedure for this model is broken down into four pivotal stages. Initially, sequencing data, plagued by dropout events during the sequencing process, are aligned. This alignment is achieved by leveraging a ZINB model-based denoising autoencoder. Following this, the next step involves computing a prototype representation for each cell type across all datasets. This global prototype is then updated through a moving-average strategy, which aids in tempering the inherent randomness of these calculations. It also highlights the variability of identical cell types when viewed across different datasets. Building on this, the third stage sees the creation of a knowledge graph based on the global prototypes. This graph becomes a comprehensive source of information, with the weight of connections between two prototypes being contingent on their mutual resemblance. The knowledge graph is further broadened by integrating samples from a query batch. Finally, a graph convolutional network (GCN) is deployed. Its primary role is to efficiently relay feature representations across the expanded graph, leading to the derivation of a classification probability for every individual node.
scGraph [46] is another cutting-edge cell annotation algorithm that harnesses gene interaction relationships to augment cell-type identification efficacy. It employs a graph neural network to aggregate information from interacting genes. By utilizing the gene interaction network, it mitigates technical disruptions and autonomously discerns cell types. The integration of gene expression and gene interaction data enables scGraph not only to determine the cell type of specific cells but also to highlight vital gene interaction relationships from experimental data. Training on the Human Cell Landscape (HCL) dataset allowed scGraph to identify cell types in a distinct human scRNA-seq dataset using the acquired model, showcasing its aptitude for precise cell-type identification using a reference dataset. scGraph capitalizes on gene interaction relationships to accumulate neighboring information for each gene, subsequently enhancing cell embedding and identification. To gauge scGraph's performance with diverse foundational networks, seven human gene interaction networks and one mouse gene interaction network were examined.
In essence, ScGraph is a graph neural network that receives scRNA-seq data and a gene interaction network as its inputs to automatically infer cell labels. It is structured into three distinct modules: (i) a graph representation module, (ii) a feature extraction module, and (iii) a classification module. The gene interaction network can be naturally visualized in a graph where each node represents a gene and edges denote the relationships between them. Within the graph representation module, constructed as a single graph convolutional layer, every node's data is refined by aggregating information from its adjacent nodes. This module incorporates a modified GraphSAGE convolutional layer for enhanced representation.
### 2.5 GRN inference
Gene expression, a cornerstone of cellular functions, is regulated through intricate gene relations, the understanding of which can reveal potential drug targets. These interactions can be visualized through a gene regulatory network (GRN), where nodes represent genes and directed edges signify causal gene-to-gene effects. Traditionally, GRN inference relied on analyzing steady-state data from gene knockout experiments, observing alterations in gene expression upon silencing specific genes [62]. However, the emergence of single-cell multi-omics technologies has ushered in advanced computational techniques that utilize genomic,
transcriptomic, and chromatin accessibility data, offering substantial precision in GRN inference [63]. Existing tools have faced challenges in inferring the dynamic biological networks present across varied cell types and their reactions to external stimuli. Viewing supervised GRN inference through the lens of a graph-based link prediction problem offers a promising approach, emphasizing the generation of vector gene representations to predict potential regulatory interactions. Graph neural networks, tailored for network representation generation, demonstrate exceptional efficacy in GRN inference even with limited cell samples.
For instance, GENELink [21] has been introduced to deduce latent interactions between transcription factors (TFs) and their respective target genes within gene regulatory networks (GRNs), using a graph attention network (GAT) framework. GENELink adeptly maps single-cell gene expressions associated with observed TF-gene pairs into a latent space. Subsequently, distinct gene representations are learned for downstream similarity evaluation or causal inference between gene pairs by refining the embedding space. Utilizing a GAT-based supervised framework, the GRN is inferred by deriving low-dimensional vector representations from single-cell gene expressions, given an incomplete prior network. Notably, GAT integrates self-attention mechanisms within graph neural networks and adapts them to directed graphs, making them viable for both transductive and inductive applications. The supervised GRN inference challenge is framed as a link prediction task: given a predetermined set of genes and some of their known interactions, the goal is to ascertain the remaining concealed connections within the network. GENELink harnesses GAT to cohesively assimilate gene interactions from prior knowledge with gene expression data sourced from scRNA-seq experiments.
The model is architecturally designed with a dual GAT-layer structure followed by multi-layer perceptrons (MLPs) that learn low-dimensional gene representations, facilitating subsequent similarity measurements or causal inference between genes. Multi-head attention mechanisms are employed to model regulatory intensity and refine node representations. GENELink requires concurrent input of single-cell gene expression data and a prior adjacency matrix. Throughout the message-passing phase, attention mechanisms assign implicit weights to neighboring nodes, and two GAT layers are incorporated to capture higher-order neighbor information. After the GAT layers, the two genes of a candidate pair, designated i and j, are independently fed into two channels, both comprising multi-layered neural networks that generate dense gene representations. The dot product is employed as the scoring mechanism to measure the similarity between genes i and j.
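The dual-channel scoring head can be sketched as follows, assuming gene embeddings have already been produced by the GAT encoder; the layer widths and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    """Two MLP channels refine TF and target-gene embeddings; a dot
    product scores the candidate regulatory link (a sketch of a
    GENELink-style head, with assumed sizes)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.tf_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden))
        self.tg_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, hidden))

    def forward(self, z_tf, z_tg):
        score = (self.tf_mlp(z_tf) * self.tg_mlp(z_tg)).sum(dim=-1)
        return torch.sigmoid(score)  # probability of a regulatory edge

# toy usage on GAT-produced embeddings for 8 candidate TF-gene pairs
z_tf, z_tg = torch.randn(8, 128), torch.randn(8, 128)
probs = PairScorer(128)(z_tf, z_tg)
```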
Similarly, DeepMAPS [34] presents a sophisticated approach to biological network inference from scMulti-omics data, interpreting the data through a heterogeneous graph. By leveraging both local and global contexts, it discerns intricate relationships among cells and genes with the aid of a multi-head graph transformer. The underlying strength of DeepMAPS lies in its foundation: the heterogeneous graph transformer (HGT). This model builds a heterogeneous graph in which cells and genes are represented as nodes and their relationships serve as edges. Notably, it captures both immediate and global topological attributes to deduce cell-cell and gene-gene relationships alike. Additionally, its built-in attention mechanism evaluates the relevance of genes to specific cells, enhancing the depth of gene-contribution information. Unlike many traditional models, HGT operates without binding to pre-existing assumptions, giving it the flexibility to highlight often-overlooked gene regulatory relationships.
The methodology of DeepMAPS is both intricate and comprehensive. Initial steps involve preprocessing the data, with emphasis on discarding low-quality cells and genes with low expression values. Once filtered, data are normalized, leading to the creation of an integrated cell-gene matrix that captures the collective activity of genes within individual cells. From this matrix, a heterogeneous graph is created in which cells and genes are represented as nodes and edges represent the presence of genes in individual cells. This sets the stage for the HGT model to determine low-dimensional embeddings for cells and genes, while also identifying the importance of genes in relation to cells through attention scores. Based on these insights, DeepMAPS then predicts cell clusters and functional gene groupings. Ultimately, the system infers a diverse array of biological networks specific to each cell type.
Moreover, DeepTFni [48] has been designed to infer transcriptional regulatory networks (TRNs) from single-cell assays for transposase-accessible chromatin using sequencing (scATAC-seq) data. With the integration of a graph neural network optimized for network representation, DeepTFni displays superior efficacy in TRN deduction, even when confronted with a sparse cell count. It should also be noted that DeepTFni's application unveiled important transcription factors (TFs) that play a central role in tissue development and tumorigenesis. It has been highlighted that numerous genes associated with mixed-phenotype acute leukemia underwent a significant transformation within the TRN, even when the changes in messenger RNA levels remained relatively stable.
DeepTFni harnesses the capabilities of a variational graph autoencoder (VGAE) to calculate latent embeddings and the holistic topology of a graph. Throughout its operation, the TRN is represented as an undirected graph \(G=(V,E)\), with nodes \(V\) representing TFs and edges \(E\) indicating their interactions. DeepTFni's input is a scATAC-seq count matrix, which is then processed to yield the imputed TRN. Viewing TRN inference as a link-prediction problem, DeepTFni operates in three stages. Initially, a TRN skeleton is formulated, acting as an incomplete prior. This structure encompasses the set of TF pairs that have the highest likelihood of exhibiting regulatory interactions. This preliminary blueprint not only offers a genuine benchmark for discerning new TF interactions during model training but also stands as the ground truth during inference. This initial phase is driven by a meticulous examination of accessible TF gene promoters, with interactions determined by TF motif occurrence in another TF's accessible promoter. The TRN blueprint is then represented by an adjacency matrix. In the second phase, the node feature, a regulatory potential (RP) score, is computed from the scATAC-seq data. This RP score, drawn from the MAESTRO methodology, mirrors the aggregate regulation imposed by adjacent scATAC-seq peaks on a specific gene within a cell. In the final stage, the VGAE model is constructed with an encoder, constituted by a two-layer graph convolutional network, and a decoder, which applies an inner product followed by a logistic sigmoid function. The encoder processes the initial adjacency matrix and the node feature matrix, yielding latent representations for each TF node. These latent representations are then transformed into TF interactions via the decoder. To minimize any inadvertent effects of the stochastic model elements, a ten-pass prediction strategy has been implemented, ensuring the robustness of the inferred TF interactions. The final output is the imputed TRN, represented by the reconstructed adjacency matrix.
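A generic sketch of this encoder-decoder structure is given below: a two-layer GCN encoder parameterizes the latent distribution, and an inner-product decoder followed by a sigmoid reconstructs the adjacency matrix. The dimensions and toy input are assumptions; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class VGAE(nn.Module):
    """Two-layer GCN encoder + inner-product decoder for link prediction."""
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w_mu = nn.Linear(hid_dim, lat_dim, bias=False)
        self.w_logvar = nn.Linear(hid_dim, lat_dim, bias=False)

    @staticmethod
    def normalize(adj):
        a = adj + torch.eye(adj.size(0))              # add self-loops
        d = a.sum(1).pow(-0.5)
        return d.unsqueeze(1) * a * d.unsqueeze(0)    # D^-1/2 (A+I) D^-1/2

    def forward(self, x, adj):
        a = self.normalize(adj)
        h = torch.relu(a @ self.w1(x))                        # GCN layer 1
        mu, logvar = a @ self.w_mu(h), a @ self.w_logvar(h)   # GCN layer 2
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return torch.sigmoid(z @ z.t()), mu, logvar           # imputed TRN

# toy usage: 50 TFs, regulatory-potential score as the single node feature
skeleton = (torch.rand(50, 50) > 0.9).float()
skeleton = ((skeleton + skeleton.t()) > 0).float()  # undirected prior skeleton
rec, mu, logvar = VGAE(1, 32, 16)(torch.randn(50, 1), skeleton)
```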
### 2.6 Cell-cell interactions
Cell-cell interactions (CCIs) are pivotal for the seamless functioning and development of multicellular organisms. These interactions, ranging from physical connections to intricate communication pathways, allow cells to transmit vital information to one another [64]. For instance, one cell might release specific molecules, such as growth factors, which then bind to receptors on neighboring cells, resulting in changes in gene expression. With the advent of single-cell RNA sequencing (scRNA-seq) technology, scientists now have a powerful tool to delve deeper into these interactions at a granular level, offering computational insights into how individual cells communicate and function within a complex system [65]. Most cell-cell interaction inference methodologies, however, fall short in capturing the convoluted interplay within cells and the overarching influence of pathways and protein complexes. To navigate this complexity and harness the rich relationships between cells, ligands, receptors, and various annotations, graph-based deep learning has proven effective for cell-cell interaction inference.
For example, GraphComm [31] is introduced as a novel graph-based deep learning approach tailored for the prediction of cell-cell communication (CCC) within single-cell RNA-seq datasets. By leveraging an established model and subsequently refining a network based on single-cell transcriptomic data, GraphComm proficiently forecasts CCC activity spanning various cells. Furthermore, it discerns its impact on downstream pathways, spatially proximate cells, and alterations resulting from drug interventions. Central to its functionality, GraphComm harnesses ligand-receptor annotations, encompassing protein complex data and pathway details, alongside expression values. This combination facilitates the crafting of cell interaction networks. Employing feed-forward methodologies, it endeavors to learn an optimal data representation, subsequently predicting the likelihood of CCC interactions. GraphComm can allocate numerical values that elucidate the interrelationships among cell clusters, ligands, and receptors. This capability empowers the extraction of communication probabilities associated with specific ligand-receptor connections, rendering a hierarchical ranking of CCC activities.
In its methodology, GraphComm makes use of a scRNA-seq dataset paired with a meticulously curated ligand-receptor database, initiating the formulation of a directed graph that mirrors the inherent CCC ground truth. Augmenting this with both protein complex data and pathway annotations, a feature representation technique is employed to extract the positional information of ligands and receptors within this directed graph. Subsequently, GraphComm creates a secondary directed graph that delineates the associations between cell groups and ligands/receptors. This graph is enriched with transcriptomic details, cell group insights, and positional attributes sourced from the prior feature representation step, with updated numerical node attributes acquired through a graph attention network. These node features can be strategically employed through inner-product operations to calculate communication probabilities for all feasible ligand-receptor pairings. When these calculated probabilities, derived from the graph attention network, are integrated with the secondary directed graph, ligand-receptor connections characterized by top-tier CCC activity can be established. This model provides a foundation for visually representing activities at both the ligand-receptor and cell-group levels.
Similarly, spaCI [36] is an adaptive graph-based deep learning model, equipped with attention mechanisms, that seeks to interpret cell-to-cell interactions from single-cell spatial transcriptomics (SCST) profiles. This model integrates both the spatial positioning and gene expression profiles of cells, aiming to highlight the active ligand-receptor (L-R) signaling axes among adjacent cells. Crucially, spaCI is adept at recognizing the upstream transcription factors that facilitate these active L-R interactions.
spaCI's methodology capitalizes on the inherent spatial connections between cells, combined with the expression data of all genes, to identify L-R interactions. Within the spaCI framework, genes are projected into a latent space through a dual-component system: a gene-centric linear encoder and a cell-centric attentive graph encoder. This dual approach ensures that both gene expression patterns and spatial cellular associations are included in the latent space. To guarantee a distinct delineation between interacting and non-interacting pairs, spaCI employs a triplet loss mechanism.
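Conceptually, the triplet objective pushes an anchor pair embedding closer to an interacting (positive) example than to a non-interacting (negative) one. A minimal sketch, assuming pair embeddings have already been computed by spaCI's encoders:

```python
import torch
import torch.nn.functional as F

def lr_triplet_loss(anchor, positive, negative, margin=1.0):
    """Separate interacting from non-interacting ligand-receptor pair
    embeddings by at least `margin` in the latent space."""
    return F.triplet_margin_loss(anchor, positive, negative, margin=margin)

# toy usage: 16 triplets of 32-dimensional pair embeddings
a, p, n = torch.randn(16, 32), torch.randn(16, 32), torch.randn(16, 32)
loss = lr_triplet_loss(a, p, n)
```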
Another GNN-based model for cell-cell interaction inference is scHGA (single-cell heterogeneous graph cross-omics attention model) [50]. scHGA is grounded in a heterogeneous graph neural network that incorporates two attention mechanisms. It facilitates a unified analysis of single-cell multi-omics data sourced from diverse protocols, such as SNARE-seq, scMT-seq, and sci-CAR. To circumvent the inherent heterogeneity in single-omics data, scHGA adeptly learns a cell association graph to extract neighboring cell information. The hierarchical attention mechanism allows for the calculation of a latent representation of aggregated cells, thus integrating knowledge across diverse omics. This, in turn, aids in deconstructing cellular heterogeneity and offers an enhanced methodology to categorize cellular characteristics. Evidently, scHGA signifies a profound application of graph neural networks for the analysis of single-cell multi-omics, shedding fresh light on the comprehension of single-cell sequencing datasets.
The model receives a preprocessed single-cell transcriptome expression matrix and a single-cell epigenomic peak matrix as foundational input. It combines bipartite graphs of multi-omics to create a heterogeneous graph. Subsequent to this, cell neighbors within single-omics data are aggregated via cell similarity derived from a meta-path, referred to as node-level attention. Acknowledging the varied significance of different omics in cellular representation, the model aggregates multi-omics cell neighbors via cross-omics attention, resulting in the final latent representation. Such a structured approach empowers scHGA to adeptly highlight cellular heterogeneity, reinforcing relations between cells.
### 2.7 Other applications
It is imperative to underscore that beyond the aforementioned six primary categories, GNN-based strategies have also been proposed for specialized analyses in other realms of single-cell omics data. For instance, OmicsGAT [23] is a novel graph attention network (GAT) proposed for cancer patient stratification as well as disease phenotype prediction. The model is designed to blend graph-based learning with a dedicated attention mechanism for RNA-seq data analysis. Within its architecture, the multi-head attention mechanism distinguishes itself by proficiently extracting information from specific samples, achieved by distributing varied attention coefficients to the adjacent samples. Detailed experiments on The Cancer Genome Atlas (TCGA) datasets, including breast and bladder cancer bulk RNA-seq data, coupled with two single-cell RNA-seq datasets, yielded two essential conclusions. First, OmicsGAT effectively incorporates the neighborhood data of a particular sample and calculates an embedding vector that considerably enhances disease phenotype prediction, refines stratification among cancer patients, and optimizes cell sample clustering. Second, the attention matrix, derived from the multi-head attention coefficients, presents a rich depth of valuable information. Impressively, this matrix surpasses the capabilities of standard sample-correlation-based adjacency matrices.
When examining a particular sample, each of the h heads in the network allocates distinct attention coefficients to the adjacent samples. Among these, the most prominent coefficient is selected for each neighboring sample, indicating its association with the target sample. Adopting this method, a full attention matrix is constructed. This result highlights the intrinsic value of the attention mechanism, especially when coupled with the traditional insights provided by the adjacency matrix. Within the OmicsGAT framework, an embedding is derived from gene expression datasets. This methodology is grounded in the notion that samples, whether they represent patients or cells, exhibiting similar gene expression are likely to demonstrate analogous disease trajectories or cell types, forming inherent interconnectedness. Some neighboring samples might exert a more significant influence on subsequent predictions or cluster formations, a subtlety that standard similarity metrics may overlook. Addressing this, OmicsGAT dynamically adjusts attention levels across the neighbors of a sample for a single head during embedding generation. To capture diverse insights from neighbors and ensure a robust learning process, this strategy is mirrored across multiple heads, encapsulating many independent attention mechanisms within a multi-head model.
Moreover, STAGATE [26] is a sophisticated graph attention auto-encoder designed to adeptly pinpoint spatial domains. Its proficiency stems from its ability to learn low-dimensional latent embeddings by integrating spatial information with gene expression profiles. Notably, to delineate the spatial similarity at the boundaries of these domains, STAGATE incorporates an attention mechanism. This mechanism is calibrated to adaptively discern the similarity between adjacent spots, and the system can be further enhanced with a cell type-aware module, which is achieved by integrating gene expression profile pre-clustering. A significant feature of STAGATE is its adaptability to span multiple consecutive sections. This capability ensures a reduction in batch effects between sections and enables the extraction of three-dimensional (3D) expression domains from the reconstructed 3D tissue. Substantial tests across diverse ST data generation platforms, including 10x Visium, Slide-seq, and Stereo-seq, underscored STAGATE's superiority, especially for tasks like spatial domain identification, data visualization, spatial trajectory inference, data denoising, and 3D expression domain extraction.
Diving into its operational mechanics, STAGATE begins by creating a spatial neighbor network (SNN) grounded on the relative spatial positioning of spots. If needed, a cell type-aware SNN can be integrated by refining the SNN using pre-established gene expression clusters. This pre-clustering approach adeptly pinpoints areas with distinct cell types, enabling the modified SNN to finely map the spatial similarity at the boundaries of these unique spatial domains, a feature particularly vital for ST data with low spatial resolutions. Sequentially, STAGATE employs a graph attention auto-encoder to derive low-dimensional latent embeddings, combining both spatial metrics and gene expressions. Each spot's normalized expression undergoes a transformation into a d-dimensional latent embedding via an encoder, only to be reverted into a reconstructed expression profile through a decoder. Distinct from traditional auto-encoders, STAGATE introduces an attention mechanism within the middle layer of the encoder-decoder architecture. This innovation empowers STAGATE
to adaptively learn the edge weights of SNNs, refining the similarity metrics between neighboring spots. These weights are subsequently harnessed to refresh the spot representation by collectively aggregating data from its immediate neighborhood. The resultant latent embeddings set the stage for data visualization using tools like UMAP, and the identification of spatial domains is achieved through clustering algorithms, exemplified by mclust and Louvain.
In the field of differentially expressed gene (DEG) identification, Cellograph has been proposed [33]. It is a semi-supervised framework that leverages graph neural networks to assess the effects of perturbations at the single-cell level. Notably, Cellograph quantifies the typicality of individual cells under various conditions and constructs a latent space tailored for interpretive data visualization and clustering. A byproduct of its training process, the gene weight matrix, reveals crucial genes that differentiate between conditions. Grounded in graph convolutional networks (GCNs), the graph counterpart of conventional CNNs, Cellograph emphasizes node classification. It navigates scRNA-seq data sourced from diverse conditions, perceiving individual cells as nodes. Through its two-layer GCN, Cellograph captures the latent embedding of single-cell data, identifying the representativeness of each cell relative to its sample-of-origin label.
This latent space fosters adept clustering, grouping cells with similar treatment responses and transcriptomic signatures. Additionally, the space can be conveniently projected into two dimensions for insightful visualizations. Benchmarking against existing methods has shown that Cellograph excels in delineating perturbation effects and presents a distinctive GNN framework for single-cell data visualization and clustering. The methodological approach involves transforming single-cell data, gathered from varied drug treatments, into a kNN graph. In this structure, cells act as nodes, while edges connect transcriptionally similar cells. This kNN graph is then processed through a two-layer GCN, enabling Cellograph to both quantitatively and visually determine how each cell aligns with its experimental label, using latent embeddings and softmax probabilities. Once created, the latent space becomes compatible with k-means clustering techniques. As a final step, a gene-centric weight matrix is employed to spotlight genes that are instrumental in distinguishing between conditions.
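A minimal version of this workflow, a kNN graph over cells followed by a two-layer GCN ending in a softmax over condition labels, might look as follows; the feature dimensions and neighbor count are assumptions.

```python
import torch
import torch.nn as nn
from sklearn.neighbors import kneighbors_graph

def knn_adjacency(x, k=15):
    """Symmetrized kNN connectivity graph over cells (rows of x)."""
    a = kneighbors_graph(x, k, mode="connectivity").toarray()
    return torch.tensor(((a + a.T) > 0).astype("float32"))

class TwoLayerGCN(nn.Module):
    """Cellograph-like node classifier with a clusterable latent layer."""
    def __init__(self, in_dim, hid_dim, n_conditions):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, n_conditions)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))             # self-loops
        d = a.sum(1).pow(-0.5)
        a = d.unsqueeze(1) * a * d.unsqueeze(0)      # symmetric normalization
        h = torch.relu(a @ self.w1(x))               # latent embedding
        return torch.softmax(a @ self.w2(h), dim=1)  # condition probabilities

# toy usage: 200 cells x 50 principal components, two treatment conditions
x = torch.randn(200, 50)
probs = TwoLayerGCN(50, 32, 2)(x, knn_adjacency(x.numpy()))
```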
STitch3D [35] is a computational framework designed to combine multiple 2D tissue slices, enabling the reconstruction of 3D cellular structures that span individual tissues to entire organisms. By merging models of these 2D tissue slices with cell-specific expression profiles procured from single-cell RNA sequencing, STitch3D effectively highlights 3D spatial regions characterized by consistent gene expression, while also unveiling the spatial distribution of various cell types. Notably, STitch3D can distinguish inherent biological variability across slices from potential batch effects, capitalizing on the shared information between slices to construct robust 3D tissue models. It receives as inputs spatially resolved gene expression matrices from various ST slices, paired with cell-specific gene expression profiles from an associated scRNA-seq dataset. By analyzing these inputs together, STitch3D identifies 3D spatial domains marked by consistent gene expression and discerns the spatial distribution of refined cell types. Key operations of STitch3D include configuring 3D spatial coordinates by aligning several tissue slices and generating a global 3D neighborhood graph. Building upon these inputs, STitch3D employs a deep learning model that combines information across tissue slices, subsequently aiding in the detection of 3D spatial regions and the decomposition of cell types. One of STitch3D's primary mechanisms is its encoder network,
tailored to transform spatial spots from diverse slices into a common latent space. Following this, it deploys a neural network termed the deconvolution network to deduce cell-type proportions from these latent spatial representations. This specific network is optimized to replicate initial raw ST gene expressions by merging the anticipated cell-type ratios with the cellular gene expression profiles from the scRNA-seq reference.
Inputs to the model comprise raw datasets from numerous ST tissue sections and cell-specific gene expression profiles from a reference scRNA-seq dataset. During the preprocessing phase, spots from various tissue slices are aligned to create 3D spatial coordinates, followed by the creation of a global 3D graph. The primary architecture in STitch3D integrates these structures to facilitate representation learning, focusing on the identification of 3D spatial territories and cell-type deconvolution. Outputs from STitch3D include the identified 3D spatial domains and estimates of the spatial cell-type distributions in tissues. Additionally, it offers pathways for advanced analyses such as spatial trajectory prediction, refinement of low-quality gene expression measurements, crafting of virtual tissue slices, and recognition of genes marked by 3D spatial expression patterns. In its approach, STitch3D processes slices collectively and employs a graph attention-based neural network to extract latent representations of spots, integrating them with 3D spatial data. Augmenting its capabilities, STitch3D integrates a graph attention network specifically designed to encompass the established 3D spatial graph. This network consists of a graph attention layer and a dense layer, with the attention layer processing inputs such as a global adjacency matrix and normalized, log-transformed gene expressions. While the input dimensionality corresponds to a select number of highly variable genes, a dataset-dependent quantity, the output dimensionality remains a constant 512.
PIKE-R2P [37] is a distinctive method designed for single-cell RNA to protein prediction. This approach integrates both protein-protein interactions (PPI) and prior knowledge embedding within a graph neural network framework. When provided with a sample of scRNA-seq data, PIKE-R2P predicts the abundance levels of various proteins. The model is chiefly divided into two components: a PPI-driven GNN and an embedding of prior knowledge.
The first component incorporates the PPI-based graph neural network into the dataset. These interactions offer a channel for information transfer among proteins, signifying the collective promotion of specific biological tasks, such as mutual inhibition or promotion. This is achieved by encoding the PPIs within a graph structure, with nodes symbolizing proteins and edges depicting interactions. Consequently, the graph neural network calculates the outcome of the information exchange stemming from interactions among proteins. The second component focuses on embedding established knowledge, encompassing factors such as co-expression and gene co-occurrence. Given the conservation of PPI relationships across various cell types, databases such as STRING, with extensive PPI data, are harnessed for this knowledge-embedding process.
In addition, scFEA [42] is a computational framework that deduces the single-cell fluxome from single-cell RNA-sequencing (scRNA-seq) data. Drawing strength from a meticulously restructured human metabolic map, segmented into focused metabolic modules, it integrates a unique probabilistic model. This model is fine-tuned to apply flux balance constraints on scRNA-seq data, paired with an advanced graph neural network-based optimization solver. The method captures the transition from transcriptome to metabolome via multi-layer neural networks, preserving the nonlinear dependency between enzymatic gene expressions and reaction rates. Key innovations within scFEA address prevailing challenges: a probabilistic model harnessed to apply the flux balance constraint across diverse metabolic fluxes for numerous single cells; a strategic approach to reducing metabolic map size, considering both network topology and gene expression status; a multi-layer neural network model designed to trace the dependency of metabolic flux on enzymatic gene expressions; and a graph neural network framework and solution, maximizing the overall flux balance of intermediary substrates across all cells.
Three pivotal computational components define scFEA: network reorganization, individual-cell metabolic flux estimation, and a series of downstream analyses. These analyses encompass metabolic stress estimation, metabolic gene perturbation, and cell clustering based on varied metabolic states. The method transforms the entire metabolic network into a factor graph, comprising sets of metabolic modules and flux balance constraints. Network reduction occurs by examining network topology, metabolic gene expression status, and optional customized network regions. In the subsequent phase, each module's metabolic flux is modeled as a non-linear function of the enzyme expression levels within the module, learned by a 2-4 layer neural network. For neural network parameter resolution, scFEA introduces a flux balance constraint interlinking the modules of all individual cells in the tissue, all based on a probabilistic model. Finally, scFEA carries out comprehensive downstream analysis.
Moreover, GLUE (graph-linked unified embedding) [47] represents a modular framework designed for the seamless integration of unpaired single-cell multi-omics data while also inferring regulatory interactions. Through modeling of these regulatory interactions across omics layers, it adeptly accounts for the discrepancies among diverse omics-specific feature spaces, doing so in a manner that aligns with biological intuition. When tested on intricate tasks such as triple-omics integration, integrative regulatory inference, and the construction of a multi-omics human cell atlas spanning millions of cells, GLUE exhibited proficiency in rectifying prior annotations. Cell states within this framework are conceptualized as latent cell embeddings obtained via variational autoencoders. Due to the inherent disparities in their biological characteristics and assay methodologies, individual omics layers are each paired with a distinct autoencoder. This specific autoencoder employs a probabilistic generative model that is meticulously adapted to the feature space unique to that layer.
Within its architecture, GLUE employs omics-specific variational autoencoders to distill low-dimensional cell embeddings \(U_{1},U_{2},U_{3}\) from each omics layer. While the data dimensionality and generative distribution may vary between layers, a consistent embedding dimension, denoted as \(m\), is maintained. For the purpose of cohesively connecting the omics-specific data spaces, GLUE harnesses prior insights regarding regulatory interactions, encapsulated in a guidance graph \(G=(V,E)\), where the vertices \(V=V_{1}\cup V_{2}\cup V_{3}\) signify omics features. A graph variational autoencoder is then tasked with deriving feature embeddings \(V'=(V'_{1},V'_{2},V'_{3})\) from the guidance graph. These feature embeddings are subsequently utilized by data decoders, which, through an inner product with the cell embeddings, reconstruct the omics data. This process adeptly bridges omics-specific data spaces, ensuring a harmonized embedding orientation. Concluding this intricate process, an omics discriminator \(D\) is employed to align the cell embeddings across the various omics layers via adversarial learning.
Furthermore, the single-cell graph convolutional network (scGCN) [49], founded on a graph-based artificial intelligence framework, has been introduced to enable dependable and replicable integration as well as proficient knowledge transfer across varied, distinct datasets. When evaluated against multiple label transfer methodologies using an expansive collection of 30 single-cell omics datasets, spanning diverse tissues, species, sequencing platforms, and molecular strata such as RNA-seq and ATAC-seq, scGCN has consistently exhibited enhanced accuracy.
In its operational framework, scGCN initially learns a hybrid, sparse graph containing both inter-dataset and intra-dataset cell mappings. This is achieved by utilizing mutual nearest neighbors of canonical correlation vectors, which coherently project the diverse datasets into a common low-dimensional space. This methodical approach fosters identification and dissemination of mutual information between reference and query datasets. Following the graph's creation, a semi-supervised GCN is employed to align cells from both the reference and subsequent datasets onto a common latent space. This ensures that cells sharing labels are located within a homogenous cluster. As a result, labels within the query dataset are effectively ascertained and assimilated from the reference dataset.
GrID-Net [51] is a neural network based on time-tested Granger causal inference, promoting graph-based evaluations over conventional sequential analyses. This algorithm has been applied to single-cell studies of human corticogenesis, providing predictions for neuronal cis-regulatory elements and facilitating the interpretation of genetic variants related to schizophrenia (SCZ). It draws upon the econometric principle of predictive temporal causality to identify tissue-specific locus-gene associations, whereby the accessibility of a given locus acts as a precursor to the gene's expression. The underlying time disparity between chromatin accessibility and gene expression culminates in the "cell-state parallax". This concept captures the temporal offset between epigenetic and transcriptional events stemming from their intrinsic cause-and-effect dynamics. During evolving biological processes, the chromatin regulatory potential stands as a tangible representation of this parallax, suggesting the potential to discern such phenomena from single-cell snapshots.
Extracting insights from this parallax through single-cell instances aids in the derivation of causal connections between loci and genes. GrID-Net, when applied to bimodal single-cell datasets that co-assay chromatin accessibility (ATAC-seq) and gene expression (RNA-seq), seeks to delineate the associations between noncoding regions (or peaks) and the genes they regulate. Notably, GrID-Net extends traditional Granger causality to DAG-based representations of single-cell trajectories, anchored by its core mechanism: a graph neural network that adopts a lagged message-propagation structure. GrID-Net is thus adept at identifying asynchronous dynamics between regulatory activity at accessible noncoding sites and its subsequent influence on gene expression. Furthermore, the peak-gene associations discerned through GrID-Net assist in the functional dissection of genetic variants linked with pathological conditions.
For ligand-receptor identification, GCNG (Graph Convolutional Neural Networks for Genes) [52] has been proposed. It is a model that adeptly translates spatial attributes into a graph representation and integrates it with expression datasets through supervised learning. Not only does GCNG enhance spatial transcriptomics analysis methodologies, but it also identifies novel extracellular gene interaction pairs. Furthermore, the outcomes generated by GCNG are primed for subsequent analytical tasks, such as functional gene assignment. To predict extracellular interactions from gene expression using a GCN, spatial transcriptomics data are initially transmuted into a graph that represents the interactions between cells. Following this, each gene pair's expression is encoded and processed by GCNG, blending the graph data with the expression datasets through convolution. This methodology empowers the neural network to harness not only first-order relationships but also the more intricate higher-order interactions inherent in the graph's architecture.
GCNG focuses on deducing gene interactions pivotal for intercellular communications from spatial single-cell expression datasets. The model receives two inputs: the cellular locations in spatial images and the gene pair expressions within these cells. The initial process within GCNG encompasses the conversion of single-cell spatial expression data into two distinct matrices. The first matrix captures cellular positions in the form of a neighborhood graph, while the second matrix encodes gene expressions within individual cells. These matrices collectively serve as the foundational input for a five-layered graph convolutional neural network, centered on the prediction of intercellular gene communication relationships.
An instrumental component of the GCNG architecture is the graph convolutional layer, which combines the graph's structure (spanning cell locations and neighboring relations) with node details (the expression of a gene pair in a designated cell). Given that the graph structure associates spatially adjacent cells, GCNs can draw upon convolutional layers without requiring any direct image data. Structurally, GCNG encompasses two graph convolutional layers, a single flatten layer, a dense layer with 512 dimensions, and a classification-centric sigmoid output layer. The inclusion of a dual-convolutional-layer set-up equips the system to comprehend indirect graph relationships. Recognizing that regulatory proteins can exert influence beyond their immediate neighborhood, this methodology enables the inference of interactions potentially overlooked when merely accounting for adjacent entities.
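The described stack can be sketched as follows, with the per-node input being the expression of the candidate gene pair in each cell; the hidden width and toy graph are assumptions, and the published model may differ in detail.

```python
import torch
import torch.nn as nn

class GCNG(nn.Module):
    """Two graph convolutions over the cell graph, flatten, a 512-unit
    dense layer, and a sigmoid scoring the gene-pair interaction."""
    def __init__(self, n_cells, hid=32):
        super().__init__()
        self.g1, self.g2 = nn.Linear(2, hid), nn.Linear(hid, hid)
        self.dense = nn.Linear(n_cells * hid, 512)
        self.out = nn.Linear(512, 1)

    def forward(self, pair_expr, adj_norm):
        h = torch.relu(adj_norm @ self.g1(pair_expr))  # graph conv 1
        h = torch.relu(adj_norm @ self.g2(h))          # graph conv 2
        h = torch.relu(self.dense(h.flatten()))        # flatten + dense(512)
        return torch.sigmoid(self.out(h))              # interaction probability

# toy usage: 300 cells, expression of genes (i, j) in each cell
n = 300
adj = (torch.rand(n, n) > 0.98).float()
adj = ((adj + adj.t()) > 0).float()
adj_norm = adj / adj.sum(1, keepdim=True).clamp(min=1)  # row-normalized
score = GCNG(n)(torch.randn(n, 2), adj_norm)
```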
Another cutting-edge GNN-based tool for single-cell analysis is STdGCN [53], an advanced graph neural network model meticulously designed for the deconvolution of cell types within spatial transcriptomic (ST) data. This model harnesses the computational prowess of graph convolutional networks (GCNs), a deep learning model that predominantly employs graph-based architecture, and adeptly utilizes the rich reservoir of single-cell RNA sequencing (scRNA-seq) data as a reference. Notably, STdGCN pioneers the integration of expression profiles derived from single-cell datasets with the spatial positioning details procured from ST data, enhancing the precision of cell type deconvolution.
The operation of STdGCN begins with the identification of marker genes distinctive to particular cell types, coupled with the generation of pseudo-spots drawn from the scRNA-seq datasets. The model then creates two link graphs for the GCN workflow. The first, termed the expression graph, is a composite entity composed of three sub-graphs: an internal pseudo-spot graph, an internal real-spot graph, and a graph bridging real and pseudo-spots. The genesis of each sub-graph is firmly rooted in mutual nearest neighbors (MNN), harnessing the expression-based similarities between spots. The second link graph, termed the spatial graph, is created from the Euclidean distances between real spots in the ST datasets. During STdGCN's operation, the input feature matrix is propagated through both the expression GCN layers and the spatial GCN layers. The resultant outputs, Exp-feature and Spa-feature, are concatenated column-wise into a unified matrix. This matrix then undergoes processing through a fully-connected layer, yielding predictions of the cell-type proportions for each specific spot. For model training, pseudo-spots are divided into training and validation subsets. It is imperative to note that only the training subset of pseudo-spots is used for backpropagation, while the validation subset is strategically reserved for early stopping. Through this methodology, real-spot cell-type proportions are updated via the GCN framework, further enhancing the learning process from the pseudo-spots.
Last but not least, SCAN-IT [54] has been proposed to effectively translate the spatial domain identification problem into an image segmentation problem. In this process, cells emulate pixels, while the expression values of genes encapsulated within a cell take on the role of color channels. The foundation of SCAN-IT is anchored in geometric modeling, graph neural networks, and an informatics strategy known as DeepGraphInfomax. Notably, SCAN-IT exhibits remarkable versatility, accommodating datasets spanning an array of spatial transcriptomics methodologies. This includes datasets characterized by heightened spatial resolution but low gene coverage and those marked by reduced spatial resolution but augmented gene coverage.
Delving into the intricacies of tissue segmentation, deep graph neural networks are harnessed, recasting the process as a clustering-oriented image segmentation task. Within this paradigm, spatial transcriptomics (ST) data are envisioned as an image distinguished by irregular grid structures and thousands of channels. SCAN-IT, a sophisticated deep learning algorithm, is introduced to impartially discern tissue domains from such images. In its initial phase, SCAN-IT meticulously constructs a geometry-aware spatial proximity graph using the alpha-complex methodology, with nodes representing either the spots (cell groups) or the individual cells present in the ST dataset. Compared to the conventionally employed k-nearest-neighbor graphs, this graph representation offers a more nuanced reflection of the physical closeness between cells.
**Fig. 2**: Line plots depicting the number of papers published from 2018 to 2022. The terms used for the first search were scRNA-seq and graph neural network; the terms used for the second search were scRNA-seq and network medicine. These plots highlight the increasing popularity of network medicine and graph neural network frameworks for single-cell data analysis. The searches were performed using Google Scholar.
Adapting DeepGraphInfomax, low-dimensional node embeddings are produced. Intriguingly, these embeddings encapsulate the gene expression profiles inherent to the cells, as well as the gene expression patterns manifest in their immediate microenvironment. This encapsulation is facilitated by the inherent attributes of DeepGraphInfomax, which prioritize the conservation of neighborhood information. Finally, the transformed low-dimensional data are channelled into established clustering algorithms, facilitating the delineation of tissue domains.
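The DeepGraphInfomax objective can be sketched as follows: node embeddings of the real graph are scored against a pooled graph summary and contrasted with embeddings of a corrupted, feature-shuffled graph. The encoder, discriminator weight, and corruption scheme below are generic assumptions rather than SCAN-IT's exact choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dgi_loss(encoder, x, adj, W):
    """Binary cross-entropy between real and corrupted node embeddings,
    each scored against the graph summary via a bilinear discriminator."""
    h_pos = encoder(x, adj)
    h_neg = encoder(x[torch.randperm(x.size(0))], adj)  # corruption: shuffle features
    s = torch.sigmoid(h_pos.mean(dim=0))                # readout / graph summary
    logits = torch.cat([h_pos @ W @ s, h_neg @ W @ s])
    labels = torch.cat([torch.ones(h_pos.size(0)), torch.zeros(h_neg.size(0))])
    return F.binary_cross_entropy_with_logits(logits, labels)

# toy usage with a one-layer GCN encoder on 40 spots with 8 features
enc = nn.Linear(8, 16)
encoder = lambda x, a: torch.relu(a @ enc(x))
adj = (torch.rand(40, 40) > 0.8).float()
adj = adj / adj.sum(1, keepdim=True).clamp(min=1)       # row normalization
W = torch.randn(16, 16) * 0.1
loss = dgi_loss(encoder, torch.randn(40, 8), adj, W)
```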
## 3 Discussion & Future Perspectives
Remarkably, in the last four years, 39 GNN tools tailored for single-cell data have been recorded, almost all published in high-impact journals. It should be noted that 16 of the 39 tools make use of attention mechanisms, a trend that underlines the increasing potential and adaptability of such models in capturing intricate relationships within single-cell data [16, 17, 18, 21, 22, 23, 24, 25, 26, 27, 29, 30, 31, 35, 36, 44]. Moreover, the majority of these tools rely on single-cell RNA-sequencing expression data as their primary input; the few exceptions, only six to be precise, take other data modalities, highlighting the diverse array of single-cell data forms gaining traction [25, 26, 36, 48, 52, 54].
The surging interest in GNNs for single-cell omics data analysis is unsurprising given their inherent dual strengths. Firstly, their foundation on deep neural networks enables such models to inherently tackle the challenges posed by big data [66]. As single-cell datasets continue to grow both in size and complexity, such capabilities become indispensable. Secondly, GNNs by nature operate on graphs, an unrivaled advantage in capturing complex relationships and interconnectivities between entities. The importance of GNNs becomes even more pronounced with the expanding heterogeneity of omics data, such as the growing popularity of single-cell ATAC and spatial data, which provide crucial insights into the complexities of intercellular communication in both healthy and pathological tissues through the precise localization of gene clusters expressed in specific cell subpopulations identified via scRNA-seq [67]. Thus, GNNs are able not only to integrate multi-omics data but also to preserve the intricate relationships between cells and genes, a task made feasible primarily through the use of graphs.
Diving deeper into the evolution of GNNs, it is evident that innovation remains relentless. Since their inception, GNNs have been continually refined and augmented. Graph Convolutional Networks (GCNs) [8] represented a monumental stride, later complemented and extended by the introduction of Graph Attention Networks [12]. A fairly recent breakthrough in this lineage is GATv2 [68]. GATv2 emerges as a refined iteration of Graph Attention Networks, addressing the constraint of static attention observed in its predecessor. Static attention is typified by a homogenized attention ranking of pivotal nodes across all querying nodes. GATv2 introduces a dynamic attention mechanism, endowing the model with the agility to modulate attention weightings contingent on the query node in question. This dynamism is achieved via a recalibrated attention-coefficient derivation: node embeddings are concatenated and passed through a shared linear transformation followed by a non-linear LeakyReLU activation, and the resulting output engages in a dot product with a learnable weight vector. This strategic shift enables GATv2 to adeptly map the graph's structure and accentuate pivotal node interactions through its dynamic attention weighting mechanism.
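A single-head sketch of this dynamic attention computation is given below; it follows the published formulation \(e_{ij}=\mathbf{a}^{\top}\mathrm{LeakyReLU}(W[h_i\,\|\,h_j])\), with dimensions chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATv2Attention(nn.Module):
    """Single-head GATv2 attention: concatenate node pairs, apply a shared
    linear map and LeakyReLU, then dot with a learnable vector. Placing the
    nonlinearity before the dot product is what makes the attention dynamic."""
    def __init__(self, dim, hid):
        super().__init__()
        self.W = nn.Linear(2 * dim, hid, bias=False)
        self.a = nn.Parameter(torch.randn(hid))

    def forward(self, h, adj):
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),   # h_i
                          h.unsqueeze(0).expand(n, n, -1)],  # h_j
                         dim=-1)
        e = F.leaky_relu(self.W(pair)) @ self.a     # e_ij = a^T LeakyReLU(W[h_i||h_j])
        e = e.masked_fill(adj == 0, float("-inf"))  # restrict to existing edges
        return torch.softmax(e, dim=1)              # dynamic attention weights

# toy usage on a fully connected graph of 6 nodes
alpha = GATv2Attention(8, 16)(torch.randn(6, 8), torch.ones(6, 6))
```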
In summary, the revolutionary potential of GNNs in the field of single-cell omics is undeniable. Their inherent capabilities, combined with ever-evolving architectures, paint a promising picture for the future. Beyond the evident applications lies the fascinating prospect of GNNs paving the way for groundbreaking advancements in drug repurposing as well as precision and network medicine. In fact, as highlighted in **Fig. 2**, there has been a notable surge in publications over the past five years that combine scRNA-seq with either GNNs or the broader field of network medicine. By doing so, GNNs have the potential to spearhead the development of innovative diagnostic tools and therapeutic strategies for a plethora of complex diseases, further solidifying their central role in the field of biomedicine.
## Declarations
### Conflict of Interest
The authors have no competing interests to declare that are relevant to the content of this article.
### Data availability
No datasets were generated or analysed during the current study.
|
2306.08827 | PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks
for Solving PDEs | While significant progress has been made on Physics-Informed Neural Networks
(PINNs), a comprehensive comparison of these methods across a wide range of
Partial Differential Equations (PDEs) is still lacking. This study introduces
PINNacle, a benchmarking tool designed to fill this gap. PINNacle provides a
diverse dataset, comprising over 20 distinct PDEs from various domains,
including heat conduction, fluid dynamics, biology, and electromagnetics. These
PDEs encapsulate key challenges inherent to real-world problems, such as
complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality.
PINNacle also offers a user-friendly toolbox, incorporating about 10
state-of-the-art PINN methods for systematic evaluation and comparison. We have
conducted extensive experiments with these methods, offering insights into
their strengths and weaknesses. In addition to providing a standardized means
of assessing performance, PINNacle also offers an in-depth analysis to guide
future research, particularly in areas such as domain decomposition methods and
loss reweighting for handling multi-scale problems and complex geometry. To the
best of our knowledge, it is the largest benchmark with a diverse and
comprehensive evaluation that will undoubtedly foster further research in
PINNs. | Zhongkai Hao, Jiachen Yao, Chang Su, Hang Su, Ziao Wang, Fanzhi Lu, Zeyu Xia, Yichi Zhang, Songming Liu, Lu Lu, Jun Zhu | 2023-06-15T02:49:05Z | http://arxiv.org/abs/2306.08827v2 | # PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs
###### Abstract
While significant progress has been made on Physics-Informed Neural Networks (PINNs), a comprehensive comparison of these methods across a wide range of Partial Differential Equations (PDEs) is still lacking. This study introduces PINNacle, a benchmarking tool designed to fill this gap. PINNacle provides a diverse dataset, comprising over 20 distinct PDEs from various domains including heat conduction, fluid dynamics, biology, and electromagnetics. These PDEs encapsulate key challenges inherent to real-world problems, such as complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality. PINNacle also offers a user-friendly toolbox, incorporating about 10 state-of-the-art PINN methods for systematic evaluation and comparison. We have conducted extensive experiments with these methods, offering insights into their strengths and weaknesses. In addition to providing a standardized means of assessing performance, PINNacle also offers an in-depth analysis to guide future research, particularly in areas such as domain decomposition methods and loss reweighting for handling multi-scale problems and complex geometry. While PINNacle does not guarantee success in all real-world scenarios, it represents a significant contribution to the field by offering a robust, diverse, and comprehensive benchmark suite that will undoubtedly foster further research and development in PINNs.
## 1 Introduction
Partial Differential Equations (PDEs) are of paramount importance in science and engineering, as they often underpin our understanding of intricate physical systems such as fluid flow, heat transfer, and stress distribution [28]. The computational simulation of PDE systems has been a focal point of research for an extensive period, leading to the development of numerical methods such as finite difference [6], finite element [35], and finite volume methods [11]. Recent advancements have led to the use of deep neural networks to solve forward and inverse problems involving PDEs [34; 47; 10; 42]. Among these, Physics-Informed Neural Networks (PINNs) have emerged as a promising alternative to traditional numerical methods in solving such problems [34; 20]. PINNs leverage the underlying physical laws and available data to effectively handle various scientific and engineering applications. The growing interest in this field has spurred the development of numerous PINN variants, each tailored to overcome specific challenges or to enhance the performance of the original framework.
While PINN methods have achieved remarkable progress, a comprehensive comparison of these methods across diverse types of PDEs is currently lacking. Establishing such a benchmark is
crucial as it could enable researchers to more thoroughly understand existing methods and pinpoint potential challenges. Despite the availability of several studies comparing sampling methods [45] and reweighting methods [2], there has been no concerted effort to develop a comprehensive and rigorous benchmark using realistic and challenging datasets. The sheer variety and inherent complexity of PDEs make it difficult to conduct a comprehensive analysis. Moreover, different mathematical properties and application scenarios further complicate the task, requiring the benchmark to be adaptable and exhaustive.
To resolve these challenges, we propose PINNacle, a comprehensive benchmark for evaluating and understanding the performance of PINNs. As shown in Fig. 1, PINNacle consists of three major components -- a diverse dataset, a toolbox, and evaluation modules. The dataset comprises tasks from over 20 different PDEs from various domains, including heat conduction, fluid dynamics, biology, and electromagnetics. Each task brings its own set of challenges such as complex geometry, multi-scale phenomena, nonlinearity, and high dimensionality, thus providing a rich testing ground for PINNs. The toolbox incorporates more than 10 state-of-the-art (SOTA) PINN methods, enabling a systematic comparison of different strategies including loss reweighting, variational formulation, adaptive activations, and domain decomposition. These methods can be flexibly applied to the tasks in the dataset, offering researchers a convenient way to evaluate the performance of PINNs and gain insights into their suitability for different problems. The evaluation modules provide a standardized means of assessing the performance of different PINN methods across all tasks, ensuring consistency in comparison and facilitating the identification of strengths and weaknesses in various methods.
PINNacle provides a robust, diverse, and comprehensive benchmark suite for PINNs, contributing significantly to the field's understanding and application. Despite its comprehensiveness, we remind users that success within this benchmark does not necessarily guarantee success in the more complex real-world scenarios. Nonetheless, PINNacle represents a major step forward in the evolution of PINNs, fostering more innovative research and development in this exciting field. In a nutshell, our contributions can be summarized as follows:
* We introduce PINNacle, a comprehensive benchmark suite offering a diverse dataset encompassing over 20 challenging PDE problems. These problems encapsulate several critical challenges faced by PINNs, including handling complex geometries, multi-scale phenomena, nonlinearity, and high-dimensional problems.
* We provide a user-friendly toolbox, facilitating the implementation and comparison of diverse PINN methods. We have carefully selected 10 representative variants of PINNs and conducted thorough experiments to evaluate their performance.
* We provide an in-depth analysis to guide future research. We show that loss reweighting and domain decomposition methods can improve performance on multi-scale and complex-geometry problems, and that the variational formulation achieves better performance on inverse problems. However, few methods adequately address nonlinear problems, indicating a future direction for exploration and advancement.

Figure 1: Architecture of PINNacle. It contains a dataset covering more than 20 PDEs, a toolbox that implements about 10 SOTA methods, and an evaluation module. We implement the toolbox based on DeepXDE. These methods have a wide range of application scenarios like fluid mechanics, electromagnetism, heat conduction, geophysics, and so on.
## 2 Related Work
### Benchmarks in scientific machine learning
Scientific benchmarks and datasets substantially differ from those in domains such as computer vision (CV) and natural language processing (NLP) due to their high domain specificity. Their data formats, sizes, and governing principles can dramatically vary across different scientific disciplines. The growing trend of AI applications in scientific research has stimulated the development of various benchmarks and datasets. For instance, [25] presents a benchmark for comparing neural operators, while others have proposed benchmarks to assess methods for learning latent Newtonian mechanics [3; 31]. Furthermore, domain-specific datasets and benchmarks exist in fluid mechanics [15], climate science [32; 5], quantum chemistry [1], and biology [4].
Despite the existence of these domain-specific datasets and benchmarks, physics-informed machine learning has received considerable attention [13; 8] since the advent of Physics-Informed Neural Networks (PINNs) [34]. These methods successfully incorporate physical laws into model training, thereby demonstrating their immense potential across a variety of scientific and engineering domains. Various studies have compared different components within the PINN framework; for instance, [9] and [45] investigate the sampling methods of collocation points in PINNs, and [2] compare reweighting techniques for different loss components. [41] design multiple tasks to compare different methods in scientific machine learning such as PINNs, FNO, and U-Net. Nevertheless, a comprehensive comparison of various PINN approaches remains absent in the literature.
### Softwares and Toolboxes
A plethora of software solutions have been developed for solving PDEs with neural networks. These include SimNet [14], NeuralPDE [33], TorchDiffEq [7], and PyDEns [23]. More recently, DeepXDE [26] has been introduced as a versatile toolbox for implementing PINNs across different backends. However, there remains a void for a toolbox that is both comprehensive and user-friendly for PINNs. Our PINNacle aims to fill this gap, offering a modular structure that facilitates the implementation and comparison of diverse PINN variants. We furnish clear and concise code for researchers to execute benchmarks across all problems and methods.
## 3 PINNacle: A Modularized Library for Benchmarking PINNs
In this section, we first introduce the preliminaries of PINNs. Then we introduce the details of datasets (tasks), PINN methods, the toolbox framework, and the evaluation metrics.
### Preliminaries of Physics-informed Neural Networks
Physics-informed neural networks are neural-network-based methods for solving PDEs, as well as inverse problems of PDEs, and they have received much attention recently. Specifically, let us consider a general Partial Differential Equation (PDE) system defined on \(\Omega\), which can be represented as:
\[\mathcal{F}(u(x);x) = 0,\quad x\in\Omega, \tag{1}\] \[\mathcal{B}(u(x);x) = 0,\quad x\in\partial\Omega. \tag{2}\]
where \(\mathcal{F}\) is a differential operator and \(\mathcal{B}\) is the boundary/initial condition. PINN uses a neural network \(u_{\theta}(x)\) with parameters \(\theta\) to approximate \(u(x)\). The objective of PINN is to minimize the following loss function:
\[\mathcal{L}(\theta)=\frac{w_{c}}{N_{c}}\sum_{i=1}^{N_{c}}||\mathcal{F}(u_{ \theta}(x_{c}^{i});x_{c}^{i})||^{2}+\frac{w_{b}}{N_{b}}\sum_{i=1}^{N_{b}}|| \mathcal{B}(u_{\theta}(x_{b}^{i});x_{b}^{i})||^{2}+\frac{w_{d}}{N_{d}}\sum_{i =1}^{N_{d}}||u_{\theta}(x_{d}^{i})-u(x_{d}^{i})||^{2}. \tag{3}\]
where \(w_{c},w_{b},w_{d}\) are weights. The first two terms enforce the PDE constraints on \(\{x_{c}^{i}\}_{1\ldots N_{c}}\) and the boundary conditions on \(\{x_{b}^{i}\}_{1\ldots N_{b}}\). The last term is the data loss, which is optional and used only when data are available. However, PINNs have several inherent drawbacks. First, PINNs optimize a mixture of imbalanced loss terms, which might hinder convergence. Second, high-order derivatives or nonlinear PDEs might lead to unstable optimization. Third, vanilla MLPs might have difficulty representing multi-scale or high-dimensional functions. To resolve these challenges, numerous variants of PINNs have been proposed. However, a comprehensive comparison of these methods is lacking, and thus it is imperative to develop a benchmark.
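To make the loss in Eq. (3) concrete, the following is a minimal PyTorch sketch for a toy 1D Poisson problem \(-u_{xx}=f\) on \((0,1)\); the network size, the sample counts, and the omission of the optional data term are illustrative assumptions, not part of PINNacle's implementation.

```
import torch

net = torch.nn.Sequential(  # a small MLP u_theta(x); the size is illustrative
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pinn_loss(x_c, x_b, f, g, w_c=1.0, w_b=1.0):
    """Composite loss of Eq. (3), without the optional data term."""
    x_c = x_c.requires_grad_(True)
    u = net(x_c)
    # PDE residual of -u_xx = f via automatic differentiation
    u_x = torch.autograd.grad(u.sum(), x_c, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x_c, create_graph=True)[0]
    loss_pde = w_c * ((-u_xx - f(x_c)) ** 2).mean()
    # boundary condition loss
    loss_bc = w_b * ((net(x_b) - g(x_b)) ** 2).mean()
    return loss_pde + loss_bc

# one optimization step on random collocation points and the two boundary points
x_c = torch.rand(128, 1)
x_b = torch.tensor([[0.0], [1.0]])
f = lambda x: torch.ones_like(x)
g = lambda x: torch.zeros_like(x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
opt.zero_grad()
loss = pinn_loss(x_c, x_b, f, g)
loss.backward()
opt.step()
```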
### Datasets
In order to conduct a thorough comparison of various PINN variants, it is crucial to assemble a collection of PDE problems (datasets) that encapsulate a broad spectrum of challenges. This strategy aims to highlight the strengths and limitations of different methods. Specifically, we select PDEs from multiple domains, taking into account their significance in the sciences and engineering. Notably, the dataset comprises 22 distinct cases and more details can be found in the Appendix B.
* The **Burgers' Equation**, fundamental to fluid mechanics, with both one and two-dimensional problems under consideration.
* The **Poisson's Equation**, widely used in mathematics and physics, for which we consider four different cases.
* The **Heat Equation**, a time-dependent PDE that describes diffusion or heat conduction, demonstrated in four unique cases.
* The **Navier-Stokes Equation**, describing the motion of viscous fluid substances, showcased in three scenarios: a lid-driven flow (NS2d-C), a geometrically complex backward step flow (NS2d-CG), and a time-dependent problem (NS2d-LT).
* The **Wave Equation**, modeling wave behavior, exhibited in three cases.
* **Chaotic PDEs**, featuring two popular examples: the Gray-Scott (GS) and Kuramoto-Sivashinsky (KS) equations.
* **High Dimensional PDEs**, including the high-dimensional Poisson equation (PNd) and the high-dimensional diffusion or heat equation (HNd).
* **Inverse Problems**, focusing on the reconstruction of the coefficient field from noisy data for the Poisson equation (PInv) and the diffusion equation (HInv).
It is important to note that we have chosen PDEs encompassing a wide range of mathematical properties. This ensures that the benchmarks do not favor a specific type of PDE. The selected PDE problems introduce several core challenges, which include:
* **Complex Geometry**: Many PDE problems involve complex or irregular geometry, such as heat conduction or wave propagation around obstacles. These complexities pose significant challenges for PINNs in terms of accurate boundary behavior representation.
* **Multi-Scale Phenomena**: Multi-scale phenomena, where the solution varies significantly over different scales, are prevalent in situations such as turbulent fluid flow. Achieving a balanced representation across all scales is a challenge for PINNs in multi-scale scenarios.
* **Nonlinear Behavior**: Many PDEs exhibit nonlinear or even chaotic behavior, where minor variations in initial conditions can lead to substantial divergence in outcomes. The optimization of PINNs becomes particularly challenging for nonlinear PDEs.
* **High Dimensionality**: High-dimensional PDE problems, frequently encountered in quantum mechanics, present significant challenges for PINNs due to the "curse of dimensionality". This term refers to the increase in computational complexity with the addition of each dimension, accompanied by statistical issues like data sparsity in high-dimensional space.
These challenges have been selected due to their frequent occurrence in numerous real-world applications. As such, a method's performance in addressing these challenges serves as a reliable indicator of its overall practical utility. Table 1 presents a detailed overview of the dataset, the PDEs and the challenges associated with these problems.
### Methods and Toolbox
After conducting an extensive literature review, we present an overview of diverse PINN approaches for comparison. We then present the high-level structure of the PINNacle toolbox.
#### 3.3.1 Methods
These methods are classified into several categories, providing a comprehensive landscape of PINN methods. Roughly speaking, they modify the loss function, the architecture, or the optimizer [13]. Modifications to the loss function are abundant and diverse, and can be divided into reweighting existing losses and developing novel loss terms such as regularization and variational formulations. Architectural variants include domain decomposition and adaptive activations. We select variants from these categories that have demonstrated a significant impact in the field.
These methods are tightly linked to the challenges outlined in our datasets. For instance, domain decomposition methods excel in addressing problems with complex geometries and multi-scale phenomena, while loss reweighting strategies deal with the balance for problems with multiple losses. Here, we list the primary categories and representative methods:
* **Loss reweighting/Resampling (2\(\sim\)4)**: PINNs are trained with a mixed loss of PDE residuals, boundary conditions, and available data losses. Various methods [43, 44, 2, 27, 37] propose different strategies to balance these weights or to resample collocation points, which indirectly adjusts the weights [45, 30]. We choose three well-known examples, i.e., reweighting using gradient norms (PINN-LRA) [43], using the neural tangent kernel (PINN-NTK) [44], and residual-based resampling (RAR) [26, 45]; a sketch of the reweighting idea follows this list.
* **Novel optimizer (5)**: To handle the problem of multi-scale objectives, some new optimizers [24, 46] are proposed. We choose MultiAdam which is resistant to the domain scale changes.
* **Novel loss functions (6\(\sim\)7)**: Some works introduce novel loss functions like variational formulation [39, 22, 21] and regularization terms to improve training. We choose hp-VPINN [21] and gPINN [48, 40] which are representative examples from these two categories.
* **Novel activation architectures (8\(\sim\)10)**: Some works propose various network architectures, such as using CNN and LSTM [49, 12, 36], custom activation functions [17, 18], and domain decomposition [16, 38, 19, 29]. Among adaptive activations, we choose LAAF [17], GAAF [18] which are adaptive activation functions for PINNs and FBPINNs [29] which is a famous domain decomposition method.
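As a concrete illustration of the reweighting category, here is a minimal sketch of gradient-norm-based loss balancing in the spirit of PINN-LRA [43]; the moving-average rate `alpha` and the exact norm used are assumptions for exposition, not the authors' implementation.

```
import torch

def grad_norm(loss, params):
    """Mean absolute gradient of one loss term w.r.t. the network parameters."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    flat = torch.cat([g.reshape(-1) for g in grads if g is not None])
    return flat.abs().mean()

def update_bc_weight(loss_pde, loss_bc, params, w_b, alpha=0.9):
    """Rescale the boundary weight so its gradients match the PDE gradients."""
    ratio = (grad_norm(loss_pde, params) / (grad_norm(loss_bc, params) + 1e-12)).item()
    return alpha * w_b + (1 - alpha) * ratio
```

The updated weight then multiplies the boundary loss before the usual backward pass; the idea is simply to keep the gradient magnitudes of the competing loss terms on a comparable scale during training.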
#### 3.3.2 Structure of Toolbox
Here we briefly introduce the design logic and ideas of the code base of PINNacle.
\begin{table}
\begin{tabular}{c|c c c c} \hline Dataset & Complex geometry & Multi-scale & Nonlinearity & High dim \\ \hline Burgers1\(\sim\)2 & \(\times\) & \(\times\) & \(\surd\) & \(\times\) \\ Poisson3\(\sim\)6 & \(\times\)/\(\surd\) & \(\times\)/\(\surd\) & \(\times\) & \(\times\) \\ Heat7\(\sim\)10 & \(\times\)/\(\surd\) & \(\times\)/\(\surd\) & \(\times\) & \(\times\) \\ NS11\(\sim\)13 & \(\times\)/\(\surd\) & \(\times\)/\(\surd\) & \(\surd\) & \(\times\) \\ Wave14\(\sim\)16 & \(\times\)/\(\surd\) & \(\times\)/\(\surd\) & \(\times\) & \(\times\) \\ Chaotic17\(\sim\)18 & \(\times\) & \(\surd\) & \(\surd\) & \(\times\) \\ High dim19\(\sim\)20 & \(\times\) & \(\times\) & \(\times\) & \(\surd\) \\ Inverse 21\(\sim\)22 & \(\times\) & \(\times\) & \(\surd\) & \(\times\) \\ \hline \end{tabular}
\end{table}
Table 1: Overview of our datasets along with their challenges. We choose 22 cases in total to evaluate the methods of PINNs. The left picture shows the visualization of cases with these four challenges, i.e., complex geometry, multi-scale, nonlinearity, and high dimension.
\begin{table}
\begin{tabular}{c|c c c c} \hline & Complex Geometry & Multi-scale & Nonlinearity & High dim \\ \hline Vanilla PINN [34] & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Reweighting/Resampling2\(\sim\)4 [43, 44, 45] & \(\surd\) & \(\surd\) & \(\times\) & \(\times\) \\ Novel Optimizer5 [46] & \(\times\) & \(\surd\) & \(\times\) & \(\times\) \\ Novel Loss Functions6\(\sim\)7 [21, 48] & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Novel Architecture8\(\sim\)10 [29] & \(\surd\) & \(\surd\) & \(\times\) & \(\times\) \\ \hline \end{tabular}
\end{table}
Table 2: Overview of methods in our PINNacle. \(\surd\) denotes the method is potentially designed to solve or show empirical improvements for problems encountering the challenge and vice versa.
**High-level Training and Custom PDE Development.** Our codebase is based on DeepXDE and provides a series of encapsulated classes and functions to facilitate high-level training and custom PDEs. These utilities allow for a standardized and streamlined approach to the implementation of various PINN variants and PDEs. One can easily build custom PDEs or use our pre-defined PDEs derived from the "BasePDE" class. We wrap some APIs in DeepXDE to enable more flexible calls. We assemble a Model class from a PDE, a Net, and an Optimizer, compile it using DeepXDE, and feed it to the trainer with our custom-defined callbacks. Moreover, to enable easier training and testing across a variety of equation types and dimensions, we provide many auxiliary functions, including functions for computing different metrics, visualizing predictions, and recording results.
**Adaptive Multi-GPU Parallel Training Pipeline.** To further enhance the efficiency of systematic evaluations of PINN methods, we have integrated an adaptive multi-GPU parallel training framework. It not only significantly increases computational speed but also allows for the execution of larger and more complex tasks. It addresses the parallelization of training on multiple tasks, effectively balancing the computational loads of multiple GPUs. Below, we provide example code for training and evaluating PINNs on a set of PDEs using our PINNacle framework.
```
import deepxde as dde
import torch

from trainer import Trainer
from src.pde import PDE1, ..., PDEn
from src.utils.callbacks import TesterCallback

trainer = Trainer('experiment-name', device)
for pde_class in [PDE1, ..., PDEn]:
    def get_model():
        pde = pde_class()
        # a fully connected network sized by the PDE's input/output dimensions
        net = dde.nn.FNN([pde.input_dim] + n_layers * [n_hidden] + [pde.output_dim])
        opt = torch.optim.Adam(net.parameters(), lr=learning_rate)
        model = pde.create_model(net)
        model.compile(opt)
        return model
    trainer.add_task(
        get_model, {'iterations': num_iterations, 'callbacks': [TesterCallback()]}
    )
trainer.train_all_parallel()
```
### Evaluation
To comprehensively analyze the discrepancy between the PINN solutions and the true solutions, we adopt multiple metrics to evaluate the performance of the PINN variants. We choose metrics that are commonly used in the literature and apply to all methods and problems. Specifically, we use the Mean Square Error (MSE), the \(\ell_{2}\) relative error (\(\mathrm{L2RE}\)), and the \(\ell_{1}\) relative error (\(\mathrm{L1RE}\)) to measure the quality of the solution. We suppose that \(\mathbf{y}=(y_{i})_{i=1}^{n}\) is the prediction and \(\mathbf{y}^{\prime}=(y_{i}^{\prime})_{i=1}^{n}\) is the ground truth, where \(1\leqslant i\leqslant n\) and \(n\) is the number of testing examples. These three metrics are computed as follows:
\[\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}(y_{i}-y_{i}^{\prime})^{2},\ \mathrm{L2RE}= \sqrt{\frac{\sum_{i=1}^{n}(y_{i}-y_{i}^{\prime})^{2}}{\sum_{i=1}^{n}{y_{i}^{ \prime}}^{2}}},\ \mathrm{L1RE}=\frac{\sum_{i=1}^{n}|y_{i}-y_{i}^{\prime}|}{\sum_{i=1}^{n}|y_{ i}^{\prime}|}. \tag{4}\]
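A direct NumPy implementation of the three metrics in Eq. (4) might look as follows (a minimal sketch; `y` is the prediction and `y_true` the ground truth on the test points).

```
import numpy as np

def mse(y, y_true):
    # Mean Square Error
    return np.mean((y - y_true) ** 2)

def l2re(y, y_true):
    # l2 relative error
    return np.sqrt(np.sum((y - y_true) ** 2) / np.sum(y_true ** 2))

def l1re(y, y_true):
    # l1 relative error
    return np.sum(np.abs(y - y_true)) / np.sum(np.abs(y_true))
```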
Besides, for time-dependent problems, it is important to investigate how the quality of the solution evolves over time. Therefore, we compute the error as a function of time,
\[\mathrm{TL2RE}(t)=\sqrt{\frac{\sum_{i=1}^{n}(y_{i}(t)-y_{i}^{\prime}(t))^{2}}{ \sum_{i=1}^{n}{y_{i}^{\prime}(t)}^{2}}}. \tag{5}\]
We assess the performance of PINNs against the reference from numerical solvers. Experimental results utilizing the \(\ell_{2}\) relative error (\(\mathrm{L2RE}\)) metric are incorporated within the main text, while a more exhaustive set of results, based on the aforementioned metrics, is available in the Appendix E.1.
## 4 Experiments
In this section, we present our experimental results. Unless otherwise stated, we use a learning rate of 0.001 and train all models for 20,000 epochs. We repeat all experiments three times and report the mean and standard deviation.
### Main Experimental Results
We present the main experimental results for each selected baseline on our tasks and display their average \(\ell_{2}\) relative errors in Table 3 (with standard deviation results available in Appendix E.1).
**PINN.** We use PINN-w to denote training PINNs with larger boundary weights. Vanilla PINNs struggle to accurately solve complex physics systems, indicating substantial room for improvement across most tasks in our benchmark. Using an \(\ell_{2}\) relative error (L2RE) of \(10\%\) as a threshold for a successful solution, we find that the two standard PINN configurations only solve 9 out of 22 tasks, most of which involve simpler equations (e.g., \(1.45\%\) on Burgers-1d-C). They encounter significant difficulties when faced with physics systems characterized by complex geometries, multi-scale phenomena, nonlinearity, and longer time spans. This can be attributed to PINNs directly optimizing a weighted average of the PDE residual losses and initial/boundary condition losses using an MLP, which leads to critical issues such as imbalance between multiple optimization objectives, suboptimal convergence and generalization, and limited expressiveness.
**PINN variants.** PINN variants offer different approaches to addressing some of these challenges to varying degrees. Methods involving loss reweighting and resampling have shown improved performance in some cases involving complex geometries and multi-scale phenomena (e.g., \(1.43\%\) on Poisson-2d-CG and \(4.40\%\) on Heat-2d-MS for NTK). This is due to the configuration of loss weights and sampled collocation points, which adaptively places more weight on more challenging domains during the training process. However, these methods still struggle with Wave equations, Navier-Stokes equations, and other cases with higher dimensions or longer time spans. MultiAdam, a representative of novel optimizer methods, solves several simple cases and the chaotic GS equation (\(9.37\%\)), but does not significantly outperform other methods. The new loss term demonstrates significant superiority in solving inverse problems (_e.g.,_\(1.19\%\) on HInv for vPINN), but no clear improvement in fitting error over standard PINN in forward cases. Changes in architecture can enhance expressiveness and flexibility for cases with complex geometries and multi-scale systems. For example, FBPINN achieves the smallest error on the chaotic GS equation (\(7.99\%\)), while LAAF delivers the best fitting result on Heat-2d-CG (\(2.39\%\)).
**Discussion.** For challenges related to complex geometries and multi-scale phenomena, some methods can mitigate these issues through mechanisms such as loss reweighting, point resampling, novel optimizers, and increased capacity via adaptive activations and domain decomposition.
\begin{table}
\begin{tabular}{c c|c c|c c c|c c c|c c} \hline \hline \multirow{2}{*}{L2RE} & \multirow{2}{*}{Name} & \multicolumn{2}{c|}{Vanilla} & \multicolumn{2}{c|}{Loss Reweighting/Sampling} & \multicolumn{2}{c|}{Optimizer} & \multicolumn{2}{c|}{Loss functions} & \multicolumn{2}{c}{Architecture} \\ \cline{3-13} & & PINN & PINN-w & LRA & NTK & RAR & MultiAdam & gPINN & vPINN & LAAF & GAAF & FBPINN \\ \hline \multirow{2}{*}{Burgers} & 1d-C & 1.45E-2 & 2.63E-2 & 2.61E-2 & 1.84E-2 & 3.32E-2 & 4.85E-2 & 2.16E-1 & 3.47E-1 & **1.43E-2** & 5.20E-2 & 2.32E-1 \\ & 2d-C & 3.24E-1 & 2.70E-1 & **2.60E-1** & 2.75E-1 & 3.45E-1 & 3.33E-1 & 3.27E-1 & 6.38E-1 & 2.77E-1 & 2.95E-1 & – \\ \hline \multirow{4}{*}{Poisson} & 2d-C & 6.94E-1 & 3.49E-2 & 1.17E-1 & **1.32E-2** & 6.99E-1 & 2.63E-2 & 6.87E-1 & 4.91E-1 & -7.68E-1 & 6.04E-1 & 4.49E-2 \\ & 2d-C & 6.36E-1 & 6.08E-2 & 4.34E-2 & **1.43E-2** & 6.48E-1 & 2.76E-1 & 7.92E-1 & 2.86E-1 & 4.80E-1 & 8.71E-1 & 2.90E-2 \\ & 3d-C & 3.04E-0 & 3.66E-0 & 3.67E-0 & 3.12E-0 & 2.96E-0 & 3.64E-0 & 2.35E+0 & 3.08E+0 & 3.07E+0 & 2.86E+0 & 3.30E+0 \\ & 2d-MS & 6.30E-1 & 7.60E-1 & 7.94E-1 & 7.48E-1 & 6.44E-1 & **5.90E-1** & 6.16E-1 & 9.72E-1 & 5.93E-1 & 9.31E-1 & 1.04E+0 \\ \hline Heat & 2d-VC & 1.01E+0 & 2.35E-1 & **2.12E-1** & 2.14E-1 & 9.66E-1 & 4.75E-1 & 2.12E+0 & 9.40E-1 & 6.42E-1 & 8.49E-1 & 9.52E-1 \\ & 2d-MS & 6.21E-2 & 2.42E-1 & 8.79E-2 & **4.40E-2** & 7.49E-2 & 2.18E-1 & 1.13E-1 & 9.30E-1 & 7.40E-2 & 9.85E-1 & 8.20E-2 \\ & 2d-CG & 3.64E-2 & 1.45E-1 & 1.25E-1 & 1.16E-1 & 2.72E-2 & 9.12E-2 & 9.38E-2 & – & **2.39E-2** & 4.61E-1 & 9.16E-2 \\ & 2d-LT & 9.99E-1 & 9.99E-1 & 9.99E-1 & 1.00E+0 & 9.99E-1 & 1.00E+0 & 1.00E+0 & 9.99E-1 & 9.99E-1 & 1.01E+0 \\ \hline NS & 2d-C & 7.40E-2 & 1.45E-1 & NN & 1.98E-1 & 4.69E-1 & 7.27E-1 & 7.70E-2 & 2.91E-1 & **3.60E-2** & 3.79E-2 & 8.45E-2 \\ & 2d-C & 1.19E-1 & 3.26E-1 & 3.32E-1 & 3.33E-1 & 3.34E-1 & 4.31E-1 & 1.54E-1 & 9.94E-1 & **3.42E-1** & **1.74E-1** & 8.27E+0 \\ & 2d-LT & 9.96E-1 & 1.00E+0 & 1.00E+0 & 9.99E-1 & 1.00E+0 & 1.00E+0 & 9.95E-1 & 1.73E+0 & 9.98E-1 & 9.99E-1 & 1.00E+0 \\ \hline Wave & 1d-C & 5.88E-1 & 2.85E-1 & 3.61E-1 & **9.79E-2** & 5.39E-1 & 1.21E-1 & 5.56E-1 & 8.39E-1 & 4.54E-1 & 6.77E-1 & 5.91E-1 \\ & 2d-CG & 1.84E+0 & 1.66E+0 & 1.48E+0 & 2.16E+0 & 1.15E+0 & 1.09E+0 & 8.14E-1 & 7.99E-1 & 8.19E-1 & **7.94E-1** & 1.06E+0 \\ & 2d-MS & 9.75E-1 & 9.66E-1 & 9.54E-1 & 9.68E-1 & 9.73E-1 & 9.33E-1 & 9.72E-1 & 2.10E+0 & 9.71E-1 & 1.08E+0 & 1.52E+0 \\ \hline Chaotic & GS & 3.19E-1 & 1.58E-1 & 9.37E-2 & 2.16E-1 & 9.46E-2 & 9.37E-2 & 2.48E-1 & 1.16E+0 & 9.47E-2 & 9.46E-2 & **7.99E-2** \\ & KS & 1.01E+0 & 9.86E-1 & 9.57E-1 & 9.64E-1 & 1.01E+0 & 9.61E-1 & 9.94E-1 & 9.72E-1 & 1.01E+0 & 1.00E+0 & 1.02E+0 \\ \hline High dim & PNN & 3.04E-3 & 2.58E-4 & **4.58E-4** & 4.64E-3 & 3.95E-3 & 3.98E-3 & 5.05E-3 & – & 4.14E-3 & 7.75E-3 & – \\ & HAV & 3.61E-1 & 4.59E-1 & 3.94E-1 & 3.97E-1 & 3.57E-1 & **3.02E-1** & 3.17E-1 & – & 5.22E-1 & 5.21E-1 & – \\ \hline Inverse & PINP & 9.42E-2 & 1.66E-1 & 1.54E-1 & 1.93E-1 & 9.38E-2 & 1.30E-1 & 8.03E-2 & **1.41E-2** & 1.30E-1 & 2.54E-1 & 8.44E-1 \\ & HInv & 1.57E+0 & 5.26E-2 & 5.09E-2 & 7.52E-2 & 1.52E+0 & 8.04E-2 & 4.84E+0 & **1.19E-2** & 5.59E-1 & 2.12E-1 & 9.27E-1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Mean L2RE of different PINN variants on our benchmark. We **bold** the best results across all methods. We do not bold any result if the errors of all methods are about \(100\%\). “NaN” means the method does not converge, and “–” means the method is not applicable to the problem.
This holds true for the 2D cases of the Heat and Poisson equations, which are classic linear equations. However, when systems have higher dimensions (Poisson3d-CG) or longer time spans (Heat2d-LT), all methods fail, highlighting the difficulties associated with complex geometries and multi-scale systems.
In contrast, nonlinear, high-dimensional, and long-time problems are harder to solve. Nearly all methods struggle to solve the 2D Burgers equation, NS equation, and KS equation, which are typical nonlinear differential equations. Alongside the failures over longer time spans, we also observe a general inability to solve the Wave equation, which contains a second-order time derivative and typically displays periodic phenomena over time. While all methods achieve high precision on Poisson-Nd, none are able to solve Heat-Nd. This suggests that systems in very high dimensions continue to pose unsolved challenges. These difficulties represent open problems for future research.
### Hyperparameter Analysis
The performance of PINNs is strongly affected by hyperparameters, with each variant potentially introducing its own unique set. Here we investigate the impact of several shared and method-specific hyperparameters via ablation studies. We consider the batch size and the number of training epochs. The corresponding results are depicted in Figure 2. We focus on a set of problems, i.e., Burgers1d, GS, Heat2d-CG, and Poisson2d-C. Detailed numerical results and additional findings can be found in Appendix E.2.
**Batch Size.** The left figure shows the testing L2RE of PINNs for varying batch sizes. It suggests that larger batch sizes tend to provide superior results, possibly due to improved gradient estimation accuracy. Although GS and Poisson2d-C saturate beyond a batch size of 2048, increasing the number of collocation points generally improves results for the remaining problems.
**Training Epochs.** The right figure illustrates the L2RE of PINNs across varying numbers of training epochs. Mirroring the trend observed with batch size, we find that an increase in the number of epochs leads to a decrease in overall error. However, increasing the number of epochs also has a saturation point: beyond 20k or 80k epochs, the error does not decrease significantly, suggesting convergence of PINNs at around 20k\(\sim\)80k epochs.
**Learning Rates.** The performance of standard PINNs under various learning rates and learning rate schedules is depicted in Figure 3. We observe that the influence of the learning rate on performance is intricate, with optimal learning rates varying across problems. Furthermore, PINN training tends to be unstable. High learning rates, such as \(10^{-2}\), often lead to error spikes, while low learning rates, like \(10^{-5}\), result in sluggish convergence. Our findings suggest that a moderate learning rate, such as \(10^{-3}\) or \(10^{-4}\), or a step decay learning rate schedule, tends to yield more stable performance.
Moreover, each method might introduce its own method-specific hyperparameters. We choose several representative examples to show their influence, such as the momentum for PINN-LRA and the loss weights for gPINN. Due to limited space, we list the results in Appendix E.2.
Figure 2: Performance of vanilla PINNs under different batch sizes (numbers of collocation points), shown in the left figure, and different numbers of training epochs, shown in the right figure.
## 5 Conclusion and Discussion
In this work, we introduced PINNacle, a comprehensive benchmark offering a user-friendly toolbox that encompasses over 10 PINN methods. We evaluated these methods against more than 20 challenging PDE problems, conducting an extensive set of experiments and an ablation study featuring various hyperparameters. Looking forward, we plan to expand the benchmark by integrating additional state-of-the-art methods and incorporating more practical problem scenarios.
Our rigorous analysis of the experimental results yields several key insights. Firstly, certain methods such as domain decomposition and reweighting are beneficial for addressing problems characterized by complex geometries and multi-scale features. Secondly, the choice of hyperparameters is crucial to the performance of PINNs. Selecting a larger batch size and appropriately weighting losses may significantly reduce the error. Thirdly, we identify high-dimensional and nonlinear problems as a pressing challenge. The overall performance of PINNs is not yet on par with traditional numerical methods. Fourthly, from a theoretical standpoint, the exploration of PINNs' loss landscape during the training of high-dimensional or nonlinear problems remains a largely uncharted area of research. Lastly, from an algorithmic perspective, we propose that integrating the strengths of neural networks with numerical methods may present a promising avenue toward overcoming the challenges identified herein.
|
2309.03107 | Solving multiscale elliptic problems by sparse radial basis function
neural networks | Machine learning has been successfully applied to various fields of
scientific computing in recent years. In this work, we propose a sparse radial
basis function neural network method to solve elliptic partial differential
equations (PDEs) with multiscale coefficients. Inspired by the deep mixed
residual method, we rewrite the second-order problem into a first-order system
and employ multiple radial basis function neural networks (RBFNNs) to
approximate unknown functions in the system. To avoid the overfitting due to
the simplicity of RBFNN, an additional regularization is introduced in the loss
function. Thus the loss function contains two parts: the $L_2$ loss for the
residual of the first-order system and boundary conditions, and the $\ell_1$
regularization term for the weights of radial basis functions (RBFs). An
algorithm for optimizing the specific loss function is introduced to accelerate
the training process. The accuracy and effectiveness of the proposed method are
demonstrated through a collection of multiscale problems with scale separation,
discontinuity and multiple scales from one to three dimensions. Notably, the
$\ell_1$ regularization can achieve the goal of representing the solution by
fewer RBFs. As a consequence, the total number of RBFs scales like
$\mathcal{O}(\varepsilon^{-n\tau})$, where $\varepsilon$ is the smallest scale,
$n$ is the dimensionality, and $\tau$ is typically smaller than $1$. It is
worth mentioning that the proposed method not only has the numerical
convergence and thus provides a reliable numerical solution in three dimensions
when a classical method is typically not affordable, but also outperforms most
other available machine learning methods in terms of accuracy and robustness. | Zhiwen Wang, Minxin Chen, Jingrun Chen | 2023-09-01T15:11:34Z | http://arxiv.org/abs/2309.03107v1 | # Solving multiscale elliptic problems by sparse radial basis function neural networks
###### Abstract
Machine learning has been successfully applied to various fields of scientific computing in recent years. For problems with multiscale features such as flows in porous media and mechanical properties of composite materials, however, it remains difficult for machine learning methods to resolve such multiscale features, especially when small-scale information is present. In this work, we propose a sparse radial basis function neural network method to solve elliptic partial differential equations (PDEs) with multiscale coefficients. Inspired by the deep mixed residual method, we rewrite the second-order problem into a first-order system and employ multiple radial basis function neural networks (RBFNNs) to approximate unknown functions in the system. To avoid the overfitting due to the simplicity of RBFNN, an additional regularization is introduced in the loss function. Thus the loss function contains two parts: the \(L_{2}\) loss for the residual of the first-order system and boundary conditions, and the \(\ell_{1}\) regularization term for the weights of radial basis functions (RBFs). An algorithm for optimizing the specific loss function is introduced to accelerate the training process. The accuracy and effectiveness of the proposed method are demonstrated through a collection of multiscale problems with scale separation, discontinuity and multiple scales from one to three dimensions. Notably, the \(\ell_{1}\) regularization can achieve the goal of representing the solution by fewer RBFs. As a consequence, the total number of RBFs scales like \(\mathcal{O}(\varepsilon^{-n\tau})\), where \(\varepsilon\) is the smallest scale, \(n\) is the dimensionality, and \(\tau\) is typically smaller than 1. It is worth mentioning that the proposed method not only exhibits numerical convergence and thus provides a reliable numerical solution in three dimensions when a classical method is typically not affordable, but also outperforms most other available machine learning methods in terms of accuracy and robustness.
Keywords: Multiscale elliptic problem, Radial basis function neural network, multiscale, \(\ell_{1}\) regularization.
## 1 Introduction
Multiscale phenomena are common in nature, and their mathematical modeling typically involves a small parameter \(\varepsilon\) which characterizes the ratio of the smallest characteristic length to the size of interest [6]. In some scenarios, the mechanical response of composite materials and the flow properties of porous media can be modeled by elliptic problems with multiscale coefficients
\[\begin{cases}-\operatorname{div}(a^{\varepsilon}(\mathbf{x})\nabla u^{ \varepsilon}(\mathbf{x}))=f(\mathbf{x})&\mathbf{x}\in\Omega,\\ u^{\varepsilon}(\mathbf{x})=g(\mathbf{x})&\mathbf{x}\in\partial\Omega,\end{cases} \tag{1.1}\]
where \(\Omega\subset\mathbb{R}^{n}\), \(n=1,2,3\), is a bounded domain, \(\varepsilon\ll 1\), \(a^{\varepsilon}\) is the multiscale coefficient, \(f:\Omega\to\mathbb{R}\) is the source term, and \(g\) prescribes the Dirichlet boundary data.
For small \(\varepsilon\), classical numerical methods cannot solve (1.1) effectively, since the number of unknowns scales like \(\mathcal{O}(\varepsilon^{-n\tau})\) with \(\tau>1\). Meanwhile, by the homogenization theory [3, 26], as \(\varepsilon\to 0\), an effective model can be obtained from (1.1) based on the periodicity assumption on \(a^{\varepsilon}(\mathbf{x})\). Over the past couple of decades, a large number of multiscale methods have been developed for multiscale elliptic problems in the literature, such as the multiscale finite element method (MsFEM) [9, 11], wavelet homogenization techniques [5], the heterogeneous multiscale method (HMM) [7, 8], the metric-based upscaling [19], and the local orthogonal decomposition (LOD) [16]. Typically, these methods solve multiple local problems in advance to extract the macroscopic information and then solve one global problem. The total computational cost is at least \(\mathcal{O}(\varepsilon^{-n})\).
In recent years, a series of deep neural network methods for solving PDEs are derived. Deep Ritz method (DRM) [27] employs the variational formulation as the loss function. Deep Galerkin method (DGM) [24] and Physics-informed neural networks (PINNs) [22] utilize the residual of the PDE as the loss function. Weak Adversarial Network (WAN) [28] employs the weak formulation to construct the loss function. Multilayer perception (MLP) and residual neural network (ResNet) are commonly used to approximate the solution. For multiscale problems, DNN-based methods are also developed. Multiscale deep neural network (MscaleDNN) [14] approximates the solution by converting the original data to a low frequency space and [13] improves the MscaleDNN algorithm by a smooth and localized activation function.
Radial basis function neural networks (RBFNNs) represent an attractive alternative to other neural network models [18]. One reason is that they form a unified connection between function approximation, regularization, noisy interpolation, classification, and density estimation. It is also the case that training an RBFNN is faster than training MLP networks [18]. An RBFNN is a simple three-layer feedforward neural network: the first layer is the input layer, representing the input features; the second layer is the hidden layer, composed of radial basis activation functions; and the third layer is the output layer. Fig. 1 shows the network architecture of an RBFNN in four dimensions with five neurons. Like most neural networks, RBFNNs also have the universal approximation property [20]. Meanwhile, compared with deep neural networks, an RBFNN avoids the tedious
and lengthy calculation of back propagation between the input layer and the output layer, significantly accelerating the training process of the network.
Often, optimizing the empirical risk leads to overfitting owing to the simplicity of RBFNN, and causes poor generalization capability. To tackle this issue, \(\ell_{1}\) regularization techniques [21, 23] have become common components in many active fields, such as sparse representations of images [2], molecular surfaces [10], and point cloud surface reconstruction [25]. The basic idea is to add a regularization term to the loss function to penalize over-complicated solutions. For \(\ell_{1}\) regularization, a weighted \(\ell_{1}\) norm of the parameter vector is added to the loss function, which penalizes the sum of the absolute values of the parameters.
In this work, we propose a sparse radial basis function neural network (SRBFNN) method for solving multiscale elliptic equations, where RBFNN is used to approximate the multiscale solution and the \(\ell_{1}\) regularization term is added to the loss function to guarantee the sparsity of RBF representation. In the modeling stage, by rewriting the second-order equation into a first-order system as shown in the deep mixed residual method [15], we obtain an augmented problem in the sense that both the solution and the auxiliary variables related to first-order derivatives of the solution are unknown functions to be approximated. An algorithm for optimizing the specific loss function is introduced to accelerate the training process. The accuracy and effectiveness of the proposed method are demonstrated through a collection of multiscale problems with scale separation, discontinuity and multiple scales from one to three dimensions. Notably, the \(\ell_{1}\) regularization can achieve the goal of representing the solution by fewer RBFs. As a consequence, the total number of RBFs scales like \(\mathcal{O}(\varepsilon^{-n\tau})\), where \(\varepsilon\) is the smallest scale, \(n\) is the dimensionality, and \(\tau\) is typically smaller than 1.
The paper is organized as follows. Section 2 introduces the SRBFNN. An algorithm is proposed to optimize the loss function with the \(\ell_{1}\) regularization during the training process in Section 3.
Figure 1: The network architecture of radial basis function neural network.
Section 4 provides numerical experiments from one to three dimensions to demonstrate the accuracy and effectiveness of the proposed method. Conclusions and discussions are drawn in Section 5.
## 2 Sparse radial basis function neural network
### Radial basis function
A function \(\Phi:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is called a radial basis function if it can be expressed through a one-dimensional (1D) function \(\phi:[0,\infty)\rightarrow\mathbb{R}\); that is,
\[\Phi(\mathbf{x})=\phi(\|\mathbf{x}\|)=\phi(r),\quad r:=\|\mathbf{x}\|,\]
where \(\|\cdot\|\) represents the \(L_{2}\) norm in \(\mathbb{R}^{n}\).
Some common types of RBFs:
* Gaussian function \[\Phi(r)=e^{-d^{2}r^{2}};\]
* Multiquadric function (MQ) \[\Phi(r)=\sqrt{1+d^{2}r^{2}};\]
* Inverse multiquadric function (IMQ) \[\Phi(r)=1/\sqrt{1+d^{2}r^{2}},\]
where \(d\) is the shape parameter, which plays an important role in the approximation accuracy. As shown in [4], a small shape parameter results in an ill-conditioned system while the corresponding RBF method exhibits high approximation accuracy; in contrast, a large shape parameter results in a well-conditioned system, but the approximation accuracy is low.
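For concreteness, the three RBF types above can be written as plain functions of \(r\) (a minimal NumPy sketch; the shape parameter \(d\) is passed explicitly).

```
import numpy as np

def gaussian(r, d):
    # Gaussian: exp(-d^2 r^2)
    return np.exp(-(d * r) ** 2)

def multiquadric(r, d):
    # MQ: sqrt(1 + d^2 r^2)
    return np.sqrt(1.0 + (d * r) ** 2)

def inverse_multiquadric(r, d):
    # IMQ: 1 / sqrt(1 + d^2 r^2)
    return 1.0 / np.sqrt(1.0 + (d * r) ** 2)
```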
In this work, we use the ellipse Gaussian RBFs
\[\Phi(\mathbf{x};\mathbf{c},\mathbf{D})=\mathrm{e}^{-\|\mathbf{D}(\mathbf{x}-\mathbf{c})\|^{2}}, \tag{2.1}\]
where \(\mathbf{D}=\mathrm{diag}(d_{1},...,d_{n})\) collects the shape parameters and \(\mathbf{c}=(c_{1},...,c_{n})^{T}\) is the center. The ellipse Gaussian RBF has good approximation properties in a neighborhood of the center, with a different radius along each coordinate direction.
### Sparse radial basis function neural network
The RBFNN can be expressed as
\[\mathbf{NN}(\mathbf{x};\mathbf{w},\mathbf{c},\mathbf{D})=\sum_{i=1}^{N}w_{i}\Phi_{ i}(\mathbf{x};\mathbf{c}_{i},\mathbf{D}_{i}), \tag{2.2}\]
where \(\Phi_{i}(\mathbf{x};\mathbf{c}_{i},\mathbf{D}_{i}),i=1,...,N\) are the ellipse Gaussian RBFs and \(\mathbf{w}=(w_{1},..,w_{N})^{T}\) is the corresponding weight. For the ease of exposition, we denote the trainable parameters \(\mathbf{w},\mathbf{c},\mathbf{D}\) by \(\boldsymbol{\theta}\), i.e.,
\[\boldsymbol{\theta}=\{\mathbf{w}\in\mathbb{R}^{N},\mathbf{c}\in\mathbb{R}^{N\times n},\mathbf{D}\in\mathbb{R}^{N\times n}\,|\,N\text{ is the number of basis functions, }n\text{ is the dimension of the input }\mathbf{x}\}.\]
The RBFNN whose loss function contains the \(\ell_{1}\) regularization term for the weight vector \(\mathbf{w}\) is called the sparse RBFNN (SRBFNN).
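A minimal PyTorch sketch of the RBFNN in (2.2) with ellipse Gaussian bases (2.1) might look as follows; the initialization mirrors Section 2.4 (weights and centers from uniform distributions, shapes from U(0, \(1/\varepsilon\))), and the default value of \(\varepsilon\) is an assumption for illustration.

```
import torch

class RBFNN(torch.nn.Module):
    """N ellipse Gaussian bases in n dimensions:
    NN(x) = sum_i w_i * exp(-||D_i (x - c_i)||^2)."""
    def __init__(self, n_dim, n_basis, eps=0.01):
        super().__init__()
        self.w = torch.nn.Parameter(torch.rand(n_basis))               # weights ~ U(0, 1)
        self.c = torch.nn.Parameter(torch.rand(n_basis, n_dim))        # centers ~ U(Omega), Omega = (0,1)^n
        self.D = torch.nn.Parameter(torch.rand(n_basis, n_dim) / eps)  # shapes ~ U(0, 1/eps)

    def forward(self, x):                      # x: (batch, n_dim)
        diff = x.unsqueeze(1) - self.c         # (batch, N, n_dim)
        r2 = ((self.D * diff) ** 2).sum(-1)    # ||D_i (x - c_i)||^2, shape (batch, N)
        return torch.exp(-r2) @ self.w         # (batch,)
```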
### Loss function
We consider multiscale elliptic equations of the form (1.1). The total loss function for SRBFNN is defined as
\[Loss(\mathbf{x};\boldsymbol{\theta})=L_{s}(\mathbf{x};\boldsymbol{\theta})+ \lambda_{3}\|\mathbf{w}^{u}\|_{1}, \tag{2.3}\]
where \(L_{s}\) is the \(L_{2}\) loss function for the first-order system, \(\lambda_{3}\) is the penalty parameter, and \(\mathbf{w}^{u}\) is the weight of the RBFNN that approximates the solution. \(\|\cdot\|_{1}\) denotes the \(\ell_{1}\) norm. Explicit forms of \(L_{s}\) from one to three dimensions will be specified in (2.6), (2.9), and (2.12).
#### 2.3.1 \(L_{2}\) loss function in 1D
Inspired by the MIM method [15], we rewrite (1.1) in 1D, using an auxiliary variable \(p\), into a first-order system:
\[\begin{cases}p-a^{\varepsilon}u_{x}=0&x\in\Omega,\\ p_{x}+f=0&x\in\Omega,\\ u=g&x\in\partial\Omega.\end{cases} \tag{2.4}\]
Then we use two RBFNNs (\(\mathbf{NN}_{1}\), \(\mathbf{NN}_{2}\)) to approximate \(u\) and \(p\) as
\[\widetilde{u}=\mathbf{NN}_{1}(x;\boldsymbol{\theta}^{u}),\quad\widetilde{p}= \mathbf{NN}_{2}(x;\boldsymbol{\theta}^{p}), \tag{2.5}\]
where \(\boldsymbol{\theta}^{u}\) and \(\boldsymbol{\theta}^{p}\) are the neural network parameters used to approximate \(u\) and \(p\), respectively. Then the \(L_{2}\) loss function for the system (2.4) is defined by
\[L_{s}(\mathbf{x};\boldsymbol{\theta}^{u},\boldsymbol{\theta}^{p})=\|a^{ \varepsilon}\widetilde{u}_{x}-\widetilde{p}\|_{2,\Omega}^{2}+\lambda_{1}\| \widetilde{p}_{x}+f\|_{2,\Omega}^{2}+\lambda_{2}\|\widetilde{u}-g\|_{2,\partial \Omega}^{2}, \tag{2.6}\]
where \(\lambda_{1}\) and \(\lambda_{2}\) are penalty parameters given in advance. The three terms in (2.6) measure how well the approximate solution and the auxiliary variable satisfy the first-order system (2.4) and the boundary condition, respectively. Due to the simple form of the RBFNN defined in (2.2), the corresponding derivatives \(\widetilde{u}_{x}\), \(\widetilde{p}_{x}\) can be easily calculated.
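A sketch of the loss (2.6) for two RBFNN instances is given below, using automatic differentiation for \(\widetilde{u}_{x}\) and \(\widetilde{p}_{x}\) (the analytic RBF derivatives mentioned above could be substituted); the default values \(\lambda_{1}=1.0\) and \(\lambda_{2}=100.0\) follow the settings used later in Section 4, and the `RBFNN` module is the sketch from Section 2.2.

```
import torch

def loss_1d(nn_u, nn_p, x_in, x_bd, a_eps, f, g, lam1=1.0, lam2=100.0):
    # L_s of Eq. (2.6); derivatives via autograd
    x_in = x_in.requires_grad_(True)
    u, p = nn_u(x_in), nn_p(x_in)
    u_x = torch.autograd.grad(u.sum(), x_in, create_graph=True)[0].squeeze(-1)
    p_x = torch.autograd.grad(p.sum(), x_in, create_graph=True)[0].squeeze(-1)
    a = a_eps(x_in).squeeze(-1)
    loss = ((a * u_x - p) ** 2).mean()                                     # p - a^eps u_x = 0
    loss = loss + lam1 * ((p_x + f(x_in).squeeze(-1)) ** 2).mean()         # p_x + f = 0
    loss = loss + lam2 * ((nn_u(x_bd) - g(x_bd).squeeze(-1)) ** 2).mean()  # u = g on the boundary
    return loss
```

For instance, `nn_u = RBFNN(1, 500)` and `nn_p = RBFNN(1, 500)` with `x_in` of shape `(batch, 1)` and `x_bd = torch.tensor([[0.0], [1.0]])` would match the 1D setting above.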
#### 2.3.2 \(L_{2}\) loss function in 2D
Analogously, we rewrite (1.1) in 2D, using auxiliary variables \(p,q\), as
\[\left\{\begin{aligned} p-a^{\varepsilon}\cdot\frac{\partial u}{ \partial x}&=0&\mathbf{x}\in\Omega,\\ q-a^{\varepsilon}\cdot\frac{\partial u}{\partial y}& =0&\mathbf{x}\in\Omega,\\ \frac{\partial p}{\partial x}+\frac{\partial q}{\partial y}+f& =0&\mathbf{x}\in\Omega,\\ u&=g&\mathbf{x}\in\partial\Omega.\end{aligned}\right. \tag{2.7}\]
Then we use three RBFNNs (\(\mathbf{NN}_{1}\), \(\mathbf{NN}_{2}\), \(\mathbf{NN}_{3}\)) to approximate \(u,p,q\) as
\[\widetilde{u}=\mathbf{NN}_{1}(\mathbf{x};\boldsymbol{\theta}^{u}),\quad \widetilde{p}=\mathbf{NN}_{2}(\mathbf{x};\boldsymbol{\theta}^{p}),\quad \widetilde{q}=\mathbf{NN}_{3}(\mathbf{x};\boldsymbol{\theta}^{q}). \tag{2.8}\]
The \(L_{2}\) loss function for the system (2.7) is defined as
\[L_{s}(\mathbf{x};\boldsymbol{\theta}^{u},\boldsymbol{\theta}^{p},\boldsymbol{ \theta}^{q})=\|a^{\varepsilon}\cdot\frac{\partial\widetilde{u}}{\partial x} -\widetilde{p}\|_{2,\Omega}^{2}+\|a^{\varepsilon}\cdot\frac{\partial \widetilde{u}}{\partial y}-\widetilde{q}\|_{2,\Omega}^{2}+\lambda_{1}\|\frac{ \partial\widetilde{p}}{\partial x}+\frac{\partial\widetilde{q}}{\partial y}+ f\|_{2,\Omega}^{2}+\lambda_{2}\|\widetilde{u}-g\|_{2,\partial\Omega}^{2}. \tag{2.9}\]
#### 2.3.3 \(L_{2}\) loss function in 3D
We rewrite (1.1) in 3D, using auxiliary variables \(p,q,r\), as
\[\left\{\begin{aligned} p-a^{\varepsilon}\cdot\frac{ \partial u}{\partial x}&=0&\mathbf{x}\in\Omega,\\ q-a^{\varepsilon}\cdot\frac{\partial u}{\partial y}& =0&\mathbf{x}\in\Omega,\\ r-a^{\varepsilon}\cdot\frac{\partial u}{\partial z}& =0&\mathbf{x}\in\Omega,\\ \frac{\partial p}{\partial x}+\frac{\partial q}{\partial y}+ \frac{\partial r}{\partial z}+f&=0&\mathbf{x}\in \Omega,\\ u&=g&\mathbf{x}\in\partial\Omega.\end{aligned}\right. \tag{2.10}\]
Then we use four RBFNNs (\(\mathbf{NN}_{1}\), \(\mathbf{NN}_{2}\), \(\mathbf{NN}_{3}\), \(\mathbf{NN}_{4}\)) to approximate \(u,p,q,r\) as
\[\widetilde{u}=\mathbf{NN}_{1}(\mathbf{x};\boldsymbol{\theta}^{u}),\quad \widetilde{p}=\mathbf{NN}_{2}(\mathbf{x};\boldsymbol{\theta}^{p}),\quad \widetilde{q}=\mathbf{NN}_{3}(\mathbf{x};\boldsymbol{\theta}^{q}),\quad \widetilde{r}=\mathbf{NN}_{4}(\mathbf{x};\boldsymbol{\theta}^{r}). \tag{2.11}\]
The \(L_{2}\) loss function for the system (2.10) is
\[L_{s}(\mathbf{x};\boldsymbol{\theta}^{u},\boldsymbol{\theta}^{p},\boldsymbol{\theta}^{q},\boldsymbol{\theta}^{r})= \|a^{\varepsilon}\cdot\frac{\partial\widetilde{u}}{\partial x}-\widetilde{p}\|_{2,\Omega}^{2}+\|a^{\varepsilon}\cdot\frac{\partial\widetilde{u}}{\partial y}-\widetilde{q}\|_{2,\Omega}^{2}+\|a^{\varepsilon}\cdot\frac{\partial\widetilde{u}}{\partial z}-\widetilde{r}\|_{2,\Omega}^{2}+ \tag{2.12}\] \[\lambda_{1}\|\frac{\partial\widetilde{p}}{\partial x}+\frac{\partial\widetilde{q}}{\partial y}+\frac{\partial\widetilde{r}}{\partial z}+f\|_{2,\Omega}^{2}+\lambda_{2}\|\widetilde{u}-g\|_{2,\partial\Omega}^{2}.\]
### Network initialization
All RBFNNs use the same initialization strategy for the parameters \(\mathbf{w}\), \(\mathbf{c}\), \(\mathbf{D}\). The center \(\mathbf{c}\) is initialized with a uniform distribution over \(\Omega\). For the weight \(\mathbf{w}\), we initialize them with a uniform distribution U(0,1). The initialization of the shape parameter \(\mathbf{D}\) is extremely important for the network convergence in multiscale problems. According to the homogenization theory [3], the solution \(u^{\varepsilon}\) to (1.1) admits the following asymptotic expansion:
\[u^{\varepsilon}(\mathbf{x})=u_{0}(\mathbf{x})+\varepsilon u_{1}(\mathbf{x}, \frac{\mathbf{x}}{\varepsilon})+\varepsilon^{2}u_{2}(\mathbf{x},\frac{ \mathbf{x}}{\varepsilon})+\cdots, \tag{2.13}\]
where \(u_{0}\) is the homogenized solution, and \(u_{1}\), \(u_{2}\),... are high-order terms and depend on the derivatives of \(u_{0}\). Therefore, to resolve the smallest scale of the solution, we initialize \(\mathbf{D}\) with uniform distribution U(0, \(\frac{1}{\varepsilon}\)).
## 3 Training process of SRBFNN
In the training process of SRBFNN, we aim to find a sparse weight vector, \(\mathbf{w}^{u}\), while \(\mathbf{NN}_{i}\),\(i\)=1,...,\(n\)+1 are also good fits of the solution and the auxiliary variables. Therefore, an algorithm to optimize the loss function with \(\ell_{1}\) regularization during the training phase is proposed in Algorithm 1.
```
Input : The coefficient \(a^{\varepsilon}\), the source term \(f\), and the boundary condition \(g\) in (1.1).
Output : The list of parameters for \(\mathbf{NN}_{i}\), \(i=1,...,n+1\).
Step 1. Initialize \(\mathbf{NN}_{i}\), \(i=1,...,n+1\), where \(n\) is the dimension.
Step 2. Generate two sets of points uniformly distributed over \(\Omega\) and \(\partial\Omega\): \(\{\mathbf{x}_{m}\}_{m=1}^{M_{r}}\) in \(\Omega\) and \(\{\widehat{\mathbf{x}}_{m}\}_{m=1}^{M_{b}}\) on \(\partial\Omega\), where \(M_{r}\) and \(M_{b}\) are the numbers of sample points.
Step 3. Initialize the hyperparameters:
    Step 3.1. Set the penalty coefficients of the loss function, i.e., \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\).
    Step 3.2. Set the maximum iteration \(MaxNiter\) and the number of sparse optimization iterations \(SparseNiter\). Set \(Check_{iter}\) for deleting the negligible weights every \(Check_{iter}\) iterations.
    Step 3.3. Initialize the threshold, \(thres=0.0\), for adding the \(\ell_{1}\) regularization, and the recorded loss, \(L^{rec}=0.0\), for recording the latest \(L_{2}\) loss per \(Check_{iter}\) iterations. Initialize the learning rate.
    Step 3.4. Set the tolerance \(tol_{1}\) to determine whether the \(L_{2}\) loss is convergent, and the tolerance \(tol_{2}\) for negligible weights in \(\mathbf{NN}_{1}\). Set \(batchsize\), \(Niter=0\) and \(j=0\).
Step 4. Start the following training process:
while \(Niter\leq MaxNiter\) do
    \(Niter\) += 1;
    for \(i=0\); \(i\leq M_{r}\); \(i\) += \(batchsize\) do
        Step 4.1. Calculate the \(L_{2}\) loss function \(L_{s}\) (in (2.6), (2.9), (2.12)) with \(batchsize\) sample points;
        Step 4.2. if \(L_{s}>thres\) or \(Niter>SparseNiter\) then \(\lambda_{3}=0\) else \(\lambda_{3}>0\) end if
        Step 4.3. Update the network parameters by the ADAM optimizer;
    end for
    if \(Niter\,\%\,Check_{iter}==0\) then
        Step 4.4. if \(|L^{rec}-L_{s}|<tol_{1}\) and \(j==0\) then set \(thres=L_{s}+tol_{1}\); \(j\) += 1 end if
        Step 4.5. if \(j>0\) and \(Niter<SparseNiter\) then delete the negligible basis functions, whose coefficients are close to zero, i.e., \(|w_{i}|<tol_{2}\) end if
        Step 4.6. Record the \(L_{2}\) loss as \(L^{rec}\), i.e., \(L^{rec}=L_{s}\);
    end if
end while
```
**Algorithm 1** Training process of the SRBFNN
Step 1 initializes the parameters of the SRBFNN. Step 2 generates two sets of training points \(\{\mathbf{x}_{m}\}_{m=1}^{M_{r}}\) and \(\{\widehat{\mathbf{x}}_{m}\}_{m=1}^{M_{b}}\), uniformly distributed over \(\Omega\) and \(\partial\Omega\), respectively. Step 3.1 sets the penalty coefficients \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\). Step 3.2 sets the maximum number of iterations and the number of sparse optimization iterations, i.e., the maximum number of iterations in which the loss function carries the \(\ell_{1}\) regularization term. Step 3.3 sets a threshold on the \(L_{2}\) loss, which signals when to add the \(\ell_{1}\) regularization, i.e., to set \(\lambda_{3}>0\). Step 3.4 sets a tolerance for the stability condition of the \(L_{2}\) loss. Step 4.1 calculates the \(L_{2}\) loss with \(batchsize\) sample points. When \(L_{s}\) is lower than \(thres\) and \(Niter<SparseNiter\), we add the \(\ell_{1}\) regularization in Step 4.2. Step 4.3 selects the ADAM optimizer [12] to update the parameters, whose update rule is as follows
\[m_{k} =\beta_{1}\cdot m_{k-1}+(1-\beta_{1})\cdot\nabla Loss_{k},\] \[v_{k} =\beta_{2}\cdot v_{k-1}+(1-\beta_{2})\cdot(\nabla Loss_{k})^{2},\] \[\boldsymbol{\theta}_{k} =\boldsymbol{\theta}_{k-1}-lr\times\frac{\frac{m_{k}}{1-\beta_{1}^{k}}}{\sqrt{\frac{v_{k}}{1-\beta_{2}^{k}}}+\epsilon},\]
where \(lr\) is the learning rate; \(\beta_{1}\), \(\beta_{2}\) and \(\epsilon\) are set to their default values (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\) and \(\epsilon=10^{-8}\)); \(m_{k}\) is the \(k\)th biased first moment estimate (\(m_{0}=0\)); and \(v_{k}\) is the \(k\)th biased second raw moment estimate (\(v_{0}=0\)). Step 4.4 checks whether the difference between the \(L_{2}\) loss at the current iteration and the recorded loss is less than the tolerance \(tol_{1}\). If so, the \(L_{2}\) loss is deemed convergent and we set \(thres\) equal to the \(L_{2}\) loss of the current iteration plus \(tol_{1}\). Step 4.5 checks whether the \(L_{2}\) loss is convergent and \(Niter<SparseNiter\). If so, we delete the negligible basis functions in \(\boldsymbol{\theta}^{u}\), i.e., those whose coefficients are close to 0. Step 4.6 records the value of the \(L_{2}\) loss every \(Check_{iter}\) iterations.
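Steps 4.2 and 4.5 admit a compact realization: the \(\ell_{1}\) term is switched on only once \(L_{s}\) drops below \(thres\) and while \(Niter<SparseNiter\), and bases with \(|w_{i}|<tol_{2}\) are pruned. The sketch below, written against the `RBFNN` module from Section 2.2, rebuilds the parameter tensors when pruning; this is one possible realization rather than the authors' exact code.

```
import torch

def total_loss(nn_u, loss_s, niter, thres, sparse_niter, lam3=1e-3):
    """Total loss of Eq. (2.3): the l1 penalty is active only in the sparse phase (Step 4.2)."""
    if loss_s.item() > thres or niter > sparse_niter:
        return loss_s  # lambda_3 = 0
    return loss_s + lam3 * nn_u.w.abs().sum()

@torch.no_grad()
def prune_basis(nn_u, tol2=1e-5):
    """Step 4.5: drop RBFs whose weight is negligible, |w_i| < tol2."""
    keep = nn_u.w.abs() >= tol2
    nn_u.w = torch.nn.Parameter(nn_u.w[keep].clone())
    nn_u.c = torch.nn.Parameter(nn_u.c[keep].clone())
    nn_u.D = torch.nn.Parameter(nn_u.D[keep].clone())
```

Note that pruning changes the shapes of the parameter tensors, so the ADAM optimizer has to be re-created on the surviving parameters afterwards.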
## 4 Numerical experiments
In this section, we present several numerical examples to illustrate the accuracy and effectiveness of our method. The coefficients are periodic functions in Example 1 and Example 5, two-scale problems with scale separation in Example 2, Example 3, and Example 6, a discontinuous periodic function in Example 4, a multiscale case in Example 7, respectively. Example 8 is a three dimensional example with the periodic coefficient.
We shall use the following three relative errors to measure the approximation accuracy.
\[err_{2}\!=\!\frac{\|u^{S}\!-\!u^{F}\|_{L_{2}(\Omega)}}{\|u^{F}\|_{L_{2}( \Omega)}},\quad err_{\infty}\!=\!\frac{\|u^{S}\!-\!u^{F}\|_{L_{\infty}(\Omega )}}{\|u^{F}\|_{L_{\infty}(\Omega)}},\quad err_{H_{1}}\!=\!\frac{\|u^{S}\!-\!u^ {F}\|_{H_{1}(\Omega)}}{\|u^{F}\|_{H_{1}(\Omega)}}, \tag{4.1}\]
where \(u^{S}\) is the approximate solution of SRBFNN and \(u^{F}\) is the reference solution of finite difference method (FDM) with a very fine mesh size. The reference solution is not
available in 3D since FDM is too expensive to solve a multiscale problem. \(\|\cdot\|_{L_{2}(\Omega)}\) is the \(L_{2}\) norm, \(\|\cdot\|_{L_{\infty}(\Omega)}\) is the \(L_{\infty}\) norm and \(\|\cdot\|_{H_{1}(\Omega)}\) is defined as
\[\|u^{S}\|_{H_{1}(\Omega)}=\sqrt{\|u^{S}\|^{2}_{L_{2}(\Omega)}+\| \nabla u^{S}\|^{2}_{L_{2}(\Omega)}}. \tag{4.2}\]
where \(\nabla u^{S}\) is calculated by the auxiliary variables, i.e., \(p/a^{\varepsilon}\) and \(q/a^{\varepsilon}\).
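On a sampling grid, the relative errors in (4.1)–(4.2) can be evaluated as follows (a minimal NumPy sketch; the gradient of \(u^{S}\) is assembled from the auxiliary variables as just described, and the discrete sums stand in for the integrals).

```
import numpy as np

def relative_errors(u_s, u_f, grad_s, grad_f):
    """err_2, err_inf and err_H1 of Eq. (4.1) on a sampling grid.
    grad_s is recovered from the auxiliary variables, e.g. p/a^eps and q/a^eps."""
    err2 = np.linalg.norm(u_s - u_f) / np.linalg.norm(u_f)
    errinf = np.max(np.abs(u_s - u_f)) / np.max(np.abs(u_f))
    h1 = lambda u, gu: np.sqrt(np.sum(u ** 2) + np.sum(gu ** 2))  # discrete H1 norm, Eq. (4.2)
    errh1 = h1(u_s - u_f, grad_s - grad_f) / h1(u_f, grad_f)
    return err2, errinf, errh1
```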
### Numerical experiments in 1D
**Setting:** In all one-dimensional cases, unless otherwise specified, the initial learning rate is set to be 0.1 and reduced by 1/10 every 300 iterations until it is less than \(10^{-5}\). We sample 10000 training points in the domain with a uniform distribution. The mesh size of FDM is set to 0.0001. \(batchsize\) is 2048, \(MaxNiter\) is 3000, \(SparseNiter\) is 2000, \(tol_{1}\) is \(10^{-3}\), \(tol_{2}\) is \(10^{-5}\), \(Check_{iter}\) is 100. The hyperparameters \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) are initialized to 1.0, 100.0 and 0.001, respectively.
We use six different values, \(\varepsilon=0.5\),0.1,0.05,0.01,0.005,0.002, to check the accuracy of our method. The initial number of RBFs is set to 100, 200, 300, 500, 1000, 1500, respectively. The computational domain is the unit interval. \(f(x)\) and \(g(x)\) in (1.1) are set to 1.0 in all 1D cases. All experiments are conducted on the GPU device RTX 2080Ti.
Example 1Consider the case that \(a^{\varepsilon}(x)\) is a periodic function
\[a^{\varepsilon}(x)=2+\sin(\frac{2\pi x}{\varepsilon}). \tag{4.3}\]
Fig. 2 plots the solution and its derivative obtained by SRBFNN and FDM when \(\varepsilon=0.01\), 0.002, respectively. One can see that SRBFNN has a good approximation for both the solution and the derivative.
Table 1 shows three relative errors and the number of basis functions in the final solution from SRBFNN after the training process.
From Table 1, one general observation is that the number of RBFs needed in the final solution increases as the \(\varepsilon\) decreases. In addition, the relative \(L_{2}\) and \(L_{\infty}\) errors from
\begin{table}
\begin{tabular}{c c c c c} \hline \(\varepsilon\) & \(N\) & \(err_{2}\) & \(err_{\infty}\) & \(err_{H_{1}}\) \\ \hline
0.5 & 17 & 3.100e-5 & 8.552e-5 & 1.987e-4 \\ \hline
0.1 & 38 & 6.718e-5 & 2.348e-4 & 8.560e-4 \\ \hline
0.05 & 66 & 8.311e-5 & 3.291e-4 & 7.847e-4 \\ \hline
0.01 & 186 & 1.086e-4 & 4.052e-4 & 8.433e-4 \\ \hline
0.005 & 367 & 7.481e-5 & 4.107e-4 & 1.017e-3 \\ \hline
0.002 & 750 & 2.541e-4 & 3.940e-4 & 1.899e-3 \\ \hline \end{tabular}
\end{table}
Table 1: Results of SRBFNN for Example 1. The second column records the number of basis functions in the final solution and the last three columns show the relative \(L_{2}\), \(L_{\infty}\) and \(H_{1}\) errors.
Figure 2: Numerical solution and its derivative obtained by SRBFNN and FDM when \(\varepsilon\)=0.01, 0.002 for Example 1.
Example 2In this case, we consider the case that \(a^{\varepsilon}(x)\) is a two-scale function with scale separation
\[a^{\varepsilon}(x)=2+\sin(\frac{2\pi x}{\varepsilon})\cos(2\pi x). \tag{4.4}\]
We conduct a detailed comparison between SRBFNN and some deep neural network methods, including PINN [22], DGM [24], DRM [27] and MscaleDNN [13]. The training and test points are the same as those used in SRBFNN. PINN has five linear layers. DGM and DRM contain two residual blocks. The scale vector for MscaleDNN is [1, 2,..., \(\frac{1}{\varepsilon}\)]. For PINN, DGM and DRM, the hidden-layer widths are 16, 16, 32, 32, 64 and 64 for \(\varepsilon=0.5\), 0.1, 0.05, 0.01, 0.005, 0.002, respectively. The hidden layers of MscaleDNN are (1000, 200, 150, 150, 100, 50, 50) for all \(\varepsilon\). \(\tanh\) is used as the activation function for DRM, DGM and PINN. MscaleDNN employs a smooth and localized activation function \(s2ReLU\),
\[\mathrm{s2ReLU}(x)=\sin(2\pi x)\cdot\mathrm{ReLU}(x)\cdot\mathrm{ReLU}(1-x).\]
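In NumPy this activation reads as follows (the function names are ours):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def s2relu(x):
    # sin(2*pi*x) * ReLU(x) * ReLU(1 - x): smooth and supported on [0, 1]
    return np.sin(2 * np.pi * x) * relu(x) * relu(1.0 - x)
```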
Table 2 shows the results of relative \(L_{2}\) and \(H_{1}\) errors for 5 methods at different scales.
It is observed from Table 2 that more accurate results can be obtained by SRBFNN and MscaleDNN. This may be explained by the fact that SRBFNN and MscaleDNN use activation functions with good local approximation capability, which is an essential factor for multiscale problems. Next, in Table 3, we compare the number of parameters in the final solution for all methods and the average time per iteration, i.e., \(\dfrac{\text{Total running time}}{\text{Iterations}}\). It is found that SRBFNN contains fewer parameters and costs less time per iteration than the other methods. This can be explained by the fact that SRBFNN has a simple network structure.
\begin{table}
\begin{tabular}{c|c c c c c|c c c c c} \hline & \multicolumn{5}{c|}{\(err_{2}\)} & \multicolumn{5}{c}{\(err_{H_{1}}\)} \\ \(\varepsilon\) & SRBFNN & MscaleDNN & DRM & DGM & PINN & SRBFNN & MscaleDNN & DRM & DGM & PINN \\ \hline
0.5 & 6.602e-6 & 2.531e-4 & 4.040e-2 & 1.697e-3 & 1.957e-3 & 1.972e-4 & 3.983e-3 & 1.437e-1 & 2.192e-2 & 2.122e-2 \\ \hline
0.1 & 3.59e-5 & 3.235e-4 & 5.237e-2 & 6.200e-3 & 6.168e-3 & 2.058e-4 & 1.111e-2 & 1.746e-1 & 4.876e-2 & 4.869e-2 \\ \hline
0.05 & 4.774e-5 & 2.743e-3 & 2.874e-2 & 4.466e-3 & 6.262e-3 & 3.368e-4 & 4.177e-2 & 1.020e-1 & 4.895e-2 & 4.932e-2 \\ \hline
0.01 & 2.01e-4 & 3.525e-3 & 2.874e-2 & 5.414e-3 & 5.549e-3 & 2.801e-3 & 4.694e-2 & 1.021e-1 & 4.924e-2 & 4.937e-2 \\ \hline
0.005 & 7.424e-5 & 3.695e-3 & 2.656e-2 & 7.424e-3 & 6.347e-3 & 1.282e-3 & 4.726e-2 & 9.597e-2 & 4.968e-2 & 4.935e-2 \\ \hline
0.002 & 1.654e-4 & 3.424e-3 & 2.661e-2 & 6.077e-3 & 5.511e-3 & 2.125e-3 & 4.711e-2 & 9.598e-2 & 4.914e-2 & 4.912e-2 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of SRBFNN, MscaleDNN, DRM, DGM, and PINN in terms of relative \(L_{2}\) and \(H_{1}\) errors for Example 2. The penalty coefficients of the boundary condition are 20.0, 20.0, 200.0 and 200.0 for PINN, DGM, DRM and MscaleDNN, respectively. The maximum number of iterations is 3000 for PINN, DGM, DRM and MscaleDNN.
**Example 3**: Consider another two-scale function
\[a^{\varepsilon}(x)=2+\sin(2\pi x+\frac{2\pi x}{\varepsilon}). \tag{4.5}\]
Fig. 3 records the training processes of the three relative errors and the number of RBFs when \(\varepsilon=0.002\). According to Fig. 3, we can observe that the three relative errors stabilize around the 500th iteration, but they are still large at this point. When the regularization term is added to the loss function at around the 1000th iteration, the three relative errors all decrease significantly, which means that the \(\ell_{1}\) regularization term can improve the generalization performance of the network.
Table 4 records the results of three relative errors and the number of RBFs in the final solution for Example 3. It is recognized that our method works well for two-scale coefficients. The accuracy and the number of RBFs in the final solution are also similar to the first two examples.
\begin{table}
\begin{tabular}{c|c c c c c|c c c c c} \hline \hline & \multicolumn{5}{c}{Parameters} & \multicolumn{5}{c}{Average time per iteration (Seconds)} \\ \(\varepsilon\) & SRBFNN & MscaleDNN & DRM & DGM & PINN & SRBFNN & MscaleDNN & DRM & DGM & PINN \\ \hline
0.5 & 351 & 277251 & 1137 & 1137 & 865 & 0.122 & 0.714 & 0.143 & 0.877 & 0.792 \\ \hline
0.1 & 699 & 277751 & 1137 & 1137 & 865 & 0.140 & 0.746 & 0.142 & 0.872 & 0.754 \\ \hline
0.05 & 1062 & 277751 & 4321 & 4321 & 3265 & 0.147 & 0.764 & 0.147 & 0.843 & 0.796 \\ \hline
0.01 & 1977 & 277751 & 4321 & 4321 & 3265 & 0.144 & 0.793 & 0.139 & 0.861 & 0.777 \\ \hline
0.005 & 4107 & 277751 & 16833 & 16833 & 12673 & 0.149 & 0.825 & 0.171 & 0.799 & 0.837 \\ \hline
0.002 & 6474 & 277751 & 16833 & 16833 & 12673 & 0.150 & 0.734 & 0.176 & 0.841 & 0.835 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of SRBFNN, MscaleDNN, DRM, DGM, and PINN in terms of the number of parameters and the average time per iteration measured in seconds for Example 2.
Figure 3: Three relative errors and the number of RBFs in terms of the iteration number when \(\varepsilon=0.002\) in Example 3.
\begin{table}
\begin{tabular}{c c c c c} \hline \(\varepsilon\) & \(N\) & \(err_{2}\) & \(err_{\infty}\) & \(err_{H_{1}}\) \\ \hline
0.5 & 18 & 4.850e-5 & 1.336e-4 & 2.319e-4 \\ \hline
0.1 & 40 & 6.539e-5 & 3.710e-4 & 5.730e-4 \\ \hline
0.05 & 69 & 6.778e-5 & 4.674e-4 & 4.249e-4 \\ \hline
0.01 & 178 & 1.086e-4 & 3.675e-4 & 1.414e-3 \\ \hline
0.005 & 379 & 1.431e-4 & 2.798e-4 & 2.134e-3 \\ \hline
0.002 & 760 & 9.549e-5 & 3.698e-4 & 1.500e-3 \\ \hline \end{tabular}
\end{table}
Table 4: Relative \(L_{2}\), \(L_{\infty}\) and \(H_{1}\) errors and the number of basis functions in the final solution for Example 3.
The relative \(L_{2}\) and \(H_{1}\) errors of the five methods for Example 4, a case with a discontinuous coefficient, are shown in Table 5. The network architectures for MscaleDNN, DRM, DGM and PINN have the same settings as those used in Example 2.
As depicted in Table 5, SRBFNN obtains more accurate results than MscaleDNN, DRM, DGM, and PINN. Again, in the same settings, all the errors in Example 4 are
Figure 4: Numerical solution and its derivative obtained by SRBFNN and FDM when \(\varepsilon\)=0.05, 0.002 for Example 4.
\begin{table}
\begin{tabular}{c|c c c c c|c c c c c} \hline \hline & \multicolumn{5}{c|}{\(err_{2}\)} & \multicolumn{5}{c}{\(err_{H_{1}}\)} \\ \(\varepsilon\) & SRBFNN & MscaleDNN & DRM & DGM & PINN & SRBFNN & MscaleDNN & DRM & DGM & PINN \\ \hline
0.5 & 3.673e-4 & 1.241e-3 & 1.668e-2 & 1.544e-2 & 1.698e-2 & 6.391e-3 & 2.353e-2 & 9.213e-2 & 1.121e-1 & 1.152e-1 \\ \hline
0.1 & 1.342e-4 & 1.375e-2 & 3.060e-2 & 3.712e-2 & 3.871e-2 & 5.174e-3 & 8.739e-2 & 1.583e-1 & 1.739e-1 & 1.740e-1 \\ \hline
0.05 & 1.185e-4 & 2.396e-2 & 3.092e-2 & 3.771e-2 & 3.898e-2 & 7.461e-3 & 1.273e-1 & 1.599e-1 & 1.744e-1 & 1.746e-1 \\ \hline
0.01 & 3.359e-4 & 3.127e-2 & 3.108e-2 & 3.622e-2 & 3.811e-2 & 1.765e-2 & 1.575e-1 & 1.595e-1 & 1.735e-1 & 1.375e-1 \\ \hline
0.005 & 6.714e-4 & 3.189e-2 & 3.104e-2 & 3.839e-2 & 3.846e-2 & 2.458e-2 & 1.587e-1 & 1.858e-1 & 1.730e-1 & 1.731e-1 \\ \hline
0.002 & 2.218e-4 & 3.201e-2 & 3.107e-2 & 3.661e-2 & 3.794e-2 & 3.832e-2 & 1.560e-1 & 1.557e-1 & 1.698e-1 & 1.705e-1 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Relative \(L_{2}\) and \(H_{1}\) errors of SRBFNN, MscaleDNN, DRM, DGM, and PINN for Example 4. The penalty coefficients of the boundary condition and the maximum number of iterations are the same as Example 2 for MscaleDNN, DRM, DGM, and PINN.
larger than those in Example 2 and Example 3, while the discontinuity has less impact on SRBFNN. These results indicate that SRBFNN can handle multiscale problems with discontinuous coefficients.
Next, we plot the number of RBFs needed in the final solution against \(\varepsilon\) for Examples 1-4 in Fig. 5. It is observed that the relation between \(\ln(N)\) and \(\ln(\varepsilon^{-1})\) is approximately linear, and that more RBFs are needed in Example 4 than in the other examples. Table 6 estimates the slope using the least squares method. From Table 6, we see that \(\ln(N)\sim 0.67\ln(\varepsilon^{-1})\), which implies \(N=O(\varepsilon^{-0.67})\).
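The slope estimates in Table 6 are ordinary least squares fits in log-log coordinates; a minimal sketch reproducing the Example 1 entry from the Table 1 data:

```python
import numpy as np

eps = np.array([0.5, 0.1, 0.05, 0.01, 0.005, 0.002])   # Table 1
N = np.array([17, 38, 66, 186, 367, 750])

# Fit ln(N) = tau * ln(1/eps) + c by least squares
tau, c = np.polyfit(np.log(1.0 / eps), np.log(N), deg=1)
print(f"estimated slope tau = {tau:.3f}")               # ~0.69, cf. Table 6
```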
### Numerical experiments in 2D
**Setting:** In all two-dimensional cases, the initial learning rate is set to 0.1 and reduces by 1/10 every 40 iterations until it is less than \(10^{-5}\). The training data is equidistantly sampled with mesh size \(h\)=0.002. The mesh size of FDM is set to 0.0005. The \(batchsize\) is 1024 in the domain and 512\(\times\)4 on the boundary. \(MaxNiter=300\), \(SparseNiter=250\), \(tol_{1}=0.1\), \(tol_{2}=10^{-5}\), \(Check_{iter}=10\). The hyperparameters \(\lambda_{2}\) and \(\lambda_{3}\) are initialized to 20.0, 0.001 respectively. The solution is more oscillatory as \(\varepsilon\) decreases, and this makes the magnitude of derivatives larger. So to balance the effect of different loss terms, the
\begin{table}
\begin{tabular}{c c c c} \hline Example 1 & Example 2 & Example 3 & Example 4 \\ \hline
0.693 & 0.687 & 0.683 & 0.621 \\ \hline \end{tabular}
\end{table}
Table 6: The least squares estimation of slopes of \(\ln(N)\) with respect to \(\ln(\varepsilon^{-1})\) for Examples 1-4.
Figure 5: The number of basis functions needed in the final solution as a function of \(\varepsilon\) for Examples 1-4.
penalty coefficient \(\lambda_{1}\) is gradually reduced as \(\varepsilon\) becomes smaller, as follows.
\[\lambda_{1}=\left\{\begin{array}{l}0.1,\,\mbox{if}\ \varepsilon=0.5,0.2,0.1\\ 0.02,\,\mbox{if}\ \varepsilon=0.05,0.02\\ 0.004,\,\mbox{if}\ \varepsilon=0.01\end{array}\right.. \tag{4.7}\]
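The schedule (4.7) is simple enough to transcribe directly (the function name is ours):

```python
def lambda1_2d(eps):
    # Piecewise-constant boundary-penalty schedule (4.7) for the 2D cases
    if eps in (0.5, 0.2, 0.1):
        return 0.1
    if eps in (0.05, 0.02):
        return 0.02
    if eps == 0.01:
        return 0.004
    raise ValueError("scale not covered by (4.7)")
```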
Consider the problems in the domain \(\Omega=[0,1]^{2}\) with six scales, \(\varepsilon=\)0.5, 0.2, 0.1, 0.05, 0.02, 0.01 for all 2D examples. The initial number of RBFs is 1000, 1000, 2000, 5000, 15000, and 30000, respectively.
**Example 5**: \(a^{\varepsilon}(x,y)\) is defined by
\[a^{\varepsilon}(x,y)=2+\sin(\frac{2\pi(x+y)}{\varepsilon}), \tag{4.8}\]
where \(f(x,y)=-1.0\), \(g(x,y)=1.0\). Fig. 6 plots the training processes of the three relative errors and the number of RBFs when \(\varepsilon=0.01\) for Example 5. One can see that SRBFNN achieves good accuracy around the 100th iteration and the number of basis functions no longer decreases after the 250th iteration.
A detailed assessment of the predicted solution is presented in Fig. 7. In particular, it shows the absolute point-wise errors \(|u^{S}-u^{F}|\) at different scales. We can observe from Fig. 7 that the absolute point-wise errors in most areas are of the order of O(\(10^{-3}\)), which means our method provides a good approximation for the solution. The three relative errors and the number of basis functions in the final solution are then recorded in Table 7. One can see that the number of basis functions grows much faster than in 1D as \(\varepsilon\) becomes smaller. In addition, the relative \(L_{2}\) and \(L_{\infty}\) errors are about one order of magnitude larger than those in 1D.
Figure 6: Training process of three relative errors and the number of RBFs when \(\varepsilon=0.01\) for Example 5.
**Example 6** ([17]): Consider the case where \(a^{\varepsilon}(x,y)\) is a function with two scales
\[a^{\varepsilon}(x,y)=\frac{1.5+\sin(\frac{2\pi x}{\varepsilon})}{1.5+\sin( \frac{2\pi y}{\varepsilon})}+\frac{1.5+\sin(\frac{2\pi y}{\varepsilon})}{1.5+ \cos(\frac{2\pi x}{\varepsilon})}+\sin(4x^{2}y^{2})+1, \tag{4.9}\]
where \(f(x,y)=-10.0\), \(g(x,y)=0.0\). We conduct a comparison for SRBFNN with MscaleDNN, DRM, DGM, and PINN. The DRM and DGM contain four residual blocks. The PINN has nine linear layers. The hidden layers are 64, 64, 128, 128, 256, 256 when \(\varepsilon=0.5\), 0.2, 0.1, 0.05, 0.02, 0.01 for DRM, DGM and PINN. The hidden layers of MscaleDNN are (1000,400,300,300,200,100,100) for all \(\varepsilon\). \(batchsize\) is 1024 in the domain and 512\(\times\)4 on the
\begin{table}
\begin{tabular}{c c c c c} \hline \(\varepsilon\) & \(N\) & \(err_{2}\) & \(err_{\infty}\) & \(err_{H_{1}}\) \\ \hline
0.5 & 42 & 4.798e-4 & 1.596e-3 & 5.442e-3 \\ \hline
0.2 & 146 & 6.080e-4 & 1.328e-3 & 1.721e-2 \\ \hline
0.1 & 531 & 3.164e-4 & 1.453e-3 & 1.772e-2 \\ \hline
0.05 & 2121 & 3.561e-4 & 1.582e-3 & 2.684e-2 \\ \hline
0.02 & 6272 & 9.640e-4 & 1.862e-3 & 3.189e-2 \\ \hline
0.01 & 14164 & 7.719e-4 & 1.793e-3 & 2.070e-2 \\ \hline \end{tabular}
\end{table}
Table 7: Relative \(L_{2}\), \(L_{\infty}\) and \(H_{1}\) errors and the number of basis functions in the final solution for Example 5.
Figure 7: The absolute point-wise errors for different \(\varepsilon\) for Example 5.
boundary for these four methods. The initial learning rate is 0.001 and reduces by \(1/10\) every 100 iterations until it is less than \(10^{-5}\). We set the maximum number of iterations as 1000 for MscaleDNN, DRM, DGM, and PINN. Table 8 shows the relative \(L_{2}\) and \(H_{1}\) errors in terms of different scales for the five methods.
As depicted in Table 8, SRBFNN reaches relatively lower errors than MscaleDNN, DRM, DGM, and PINN. PINN and DGM perform well when \(\varepsilon\) is large, but their errors are relatively larger when \(\varepsilon\) is small. Additionally, one can observe that MscaleDNN and DRM outperform DGM and PINN here, whereas MscaleDNN, DGM and PINN outperform DRM in the one-dimensional cases. This observation is similar to the conclusion in [1]. We then compare the number of parameters and the average time per iteration for all methods.
From Table 9, it is recognized that SRBFNN achieves a better approximation than MscaleDNN, DRM, DGM, and PINN with fewer parameters. However, the average time per iteration for SRBFNN is larger than for MscaleDNN, DRM, DGM, and PINN when \(\varepsilon\) is small, although SRBFNN requires fewer iterations to converge. Fig. 8 plots the final number of RBFs in terms of \(\varepsilon\). Analogously, we use the least squares method to estimate the slope of \(\ln(N)\) as a function of \(\ln(\varepsilon^{-1})\) for Example 5 and Example 6 in Table 10. One can observe from Fig. 8 and Table 10 that \(\ln(N)\sim 1.55\ln(\varepsilon^{-1})\), which implies \(N=O(\varepsilon^{-0.78\times 2})\). This observation, together with the results of the aforementioned 1D examples, can be written in a more general form: \(N=O(\varepsilon^{-\tau n})\), where \(N\) is the number of RBFs in the final solution, \(n=1,2\) is the dimensionality and \(\tau\) is smaller than 1.
\begin{table}
\begin{tabular}{c|c c c c c|c c c c c} \hline \hline & \multicolumn{5}{c|}{\(err_{2}\)} & \multicolumn{5}{c}{\(err_{H_{1}}\)} \\ \(\varepsilon\) & SRBFNN & MscaleDNN & DRM & DGM & PINN & SRBFNN & MscaleDNN & DRM & DGM & PINN \\ \hline
0.5 & 1.319e-3 & 7.885e-3 & 2.062e-2 & 6.615e-3 & 8.812e-3 & 1.276e-2 & 5.979e-2 & 1.074e-1 & 1.005e-1 & 1.048e-1 \\
0.2 & 2.116e-3 & 8.467e-3 & 2.280e-2 & 7.393e-2 & 1.658e-1 & 1.650e-2 & 9.922e-2 & 1.654e-1 & 1.182e-1 & 0.303e-1 \\ \hline
0.1 & 3.099e-3 & 8.229e-3 & 2.601e-2 & 7.531e-1 & 4.333e-1 & 3.499e-2 & 1.116e-1 & 2.071e-1 & 7.626e-1 & 7.164e-1 \\ \hline
0.05 & 3.150e-3 & 3.096e-2 & 2.681e-2 & 9.676e-1 & 9.133e-1 & 7.774e-2 & 2.164e-1 & 2.182e-1 & 9.466e-1 & 9.231e-1 \\ \hline
0.02 & 3.959e-3 & 2.066e-2 & 3.546e-2 & 9.966e-1 & 1.000e-0 & 9.972e-2 & 2.019e-1 & 2.021e-1 & 9.958e-1 & 9.966e-1 \\ \hline
0.01 & 2.481e-3 & 2.589e-2 & 3.667e-2 & 9.982e-1 & 1.009e-0 & 1.125e-1 & 2.078e-1 & 2.392e-1 & 9.993e-1 & 9.997e-1 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Relative \(L_{2}\) and \(H_{1}\) errors of SRBFNN, MscaleDNN, DRM, DGM, and PINN for Example 6. The penalty coefficient of the boundary condition for MscaleDNN and DRM is 500.0, and is 10.0 for PINN and DGM.
\begin{table}
\begin{tabular}{c|c c c c c|c c c c c} \hline \hline & \multicolumn{5}{c}{Parameters} & \multicolumn{5}{c}{Average time per iteration (Seconds)} \\ \(\varepsilon\) & SRBFNN & MscaleDNN & DRM & DGM & PINN & SRBFNN & MscaleDNN & DRM & DGM & PINN \\ \hline
0.5 & 5240 & 704501 & 33537 & 33537 & 29377 & 11.102 & 9.731 & 6.948 & 11.725 & 10.087 \\ \hline
0.2 & 5760 & 704501 & 33537 & 33537 & 29377 & 10.747 & 10.315 & 6.722 & 11.796 & 10.111 \\ \hline
0.1 & 12690 & 704501 & 132609 & 132609 & 116097 & 10.889 & 9.885 & 6.722 & 11.821 & 10.233 \\ \hline
0.05 & 36375 & 704501 & 132609 & 132609 & 116097 & 15.310 & 9.713 & 6.751 & 11.783 & 10.185 \\ \hline
0.02 & 106810 & 704501 & 527361 & 527361 & 461569 & 37.869 & 9.821 & 6.827 & 11.704 & 10.032 \\ \hline
0.01 & 236425 & 704501 & 527361 & 527361 & 461569 & 74.965 & 8.231 & 6.756 & 11.942 & 10.080 \\ \hline \hline \end{tabular}
\end{table}
Table 9: The network parameters and the average running time per iteration in different methods for Example 6.
**Example 7** ([17]): Consider \(a^{\varepsilon}(x,y)\) having six scales
\[\begin{split}a^{\varepsilon}(x,y)=\frac{1}{6}\Big(&\frac{1.1+\sin(2\pi x/\varepsilon_{1})}{1.1+\sin(2\pi y/\varepsilon_{1})}+\frac{1.1+\sin(2\pi y/\varepsilon_{2})}{1.1+\cos(2\pi x/\varepsilon_{2})}+\frac{1.1+\cos(2\pi x/\varepsilon_{3})}{1.1+\sin(2\pi y/\varepsilon_{3})}+\\ &\frac{1.1+\sin(2\pi y/\varepsilon_{4})}{1.1+\cos(2\pi x/\varepsilon_{4})}+\frac{1.1+\cos(2\pi x/\varepsilon_{5})}{1.1+\sin(2\pi y/\varepsilon_{5})}+\sin(4x^{2}y^{2})+1\Big),\end{split}\]
where \(f(x,y)=-10.0\), \(g(x,y)=0.0\), \(\varepsilon_{1}=1/5\), \(\varepsilon_{2}=1/13\), \(\varepsilon_{3}=1/17\), \(\varepsilon_{4}=1/31\), \(\varepsilon_{5}=1/65\). Particularly, the initial learning rate is set to \(0.1\) and reduces by \(1/10\) every \(40\) iterations until it is less than \(10^{-6}\). The penalty parameter \(\lambda_{1}\) is \(0.02\) and the initial number of RBFs is \(30000\).
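The coefficient can be transcribed term by term (the function name is ours):

```python
import numpy as np

def a_eps(x, y):
    # Six-scale coefficient of Example 7; denominators stay >= 0.1 > 0
    e1, e2, e3, e4, e5 = 1/5, 1/13, 1/17, 1/31, 1/65
    s = lambda t: np.sin(2 * np.pi * t)
    c = lambda t: np.cos(2 * np.pi * t)
    return ((1.1 + s(x / e1)) / (1.1 + s(y / e1))
            + (1.1 + s(y / e2)) / (1.1 + c(x / e2))
            + (1.1 + c(x / e3)) / (1.1 + s(y / e3))
            + (1.1 + s(y / e4)) / (1.1 + c(x / e4))
            + (1.1 + c(x / e5)) / (1.1 + s(y / e5))
            + np.sin(4 * x**2 * y**2) + 1) / 6
```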
The absolute point-wise error \(|u^{S}-u^{F}|\) is shown in Fig. 9. SRBFNN can recover the solution up to \(10^{-3}\) point-wise accuracy. To further analyze the performance of SRBFNN, we draw the cross-section view of the solution and its derivatives in Fig. 10. Table 11 records three relative errors and the number of RBFs in the final solution. According to Fig. 10 and Table 11, we can observe that SRBFNN achieves a good approximation in terms of the solution and its derivatives.
\begin{table}
\begin{tabular}{l l} \hline Example 5 & Example 6 \\ \hline
1.560 & 1.544 \\ \hline \end{tabular}
\end{table}
Table 10: The least squares estimation of the slope of \(\ln(N)\) with respect to \(\ln(\varepsilon^{-1})\) for Example 5 and Example 6.
Figure 8: The number of basis functions in terms of \(\varepsilon\) in the log-log scale for Example 5 and 6.
### Numerical experiment in 3D
In this section, we consider a three-dimensional multiscale elliptic equation to verify the applicability of SRBFNN. In this scenario, solving multiscale elliptic equations with FDM is challenging. Thus we only consider the SRBFNN solution with a series of \(\varepsilon=0.5,0.2,0.1,0.05\) and take \(u^{0.05}\) as the reference.
**Setting:** The training data is sampled equidistantly with mesh size \(h\)=0.01. \(batchsize\) is 1024 in the domain and 200\(\times\)8 on the boundary. \(MaxNiter\) is 150 and \(SparseNiter\) is 120. The initial learning rate is 0.1 and reduces by 1/10 every 30 iterations. The initial
\begin{table}
\begin{tabular}{c c c c} \hline \(N\) & \(err_{2}\) & \(err_{\infty}\) & \(err_{H_{1}}\) \\ \hline
13213 & 4.427e-3 & 1.249e-2 & 1.581e-1 \\ \hline \end{tabular}
\end{table}
Table 11: Relative errors and the number of basis functions in the final solution for Example 7.
Figure 10: The cross-section view of \(u\), \(\frac{\partial u}{\partial x}\) and \(\frac{\partial u}{\partial y}\) for Example 7. The dotted line represents the FDM solution. The solid line with an asterisk is the SRBFNN solution.
Figure 9: The absolute point-wise error for Example 7.
number of RBFs is 1000, 2000, 5000, 10000, respectively. \(tol_{1}\!=\!0.05\), \(tol_{2}\!=\!10^{-5}\), \(Check_{iter}\!=\!10\). \(\lambda_{1}\),\(\lambda_{2}\),\(\lambda_{3}\) are initialized to 0.1, 50.0, 0.001, respectively.
**Example 8**: Consider the coefficient \(a^{\varepsilon}(x,y,z)\)
\[a^{\varepsilon}(x,y,z)\!=\!2\!+\!\sin(\frac{2\pi x}{\varepsilon})\sin(\frac{2 \pi y}{\varepsilon})\sin(\frac{2\pi z}{\varepsilon}), \tag{4.10}\]
where \(f(x,y,z)\!=\!10.0\), \(g(x,y,z)\!=\!0.0\). Fig. 11 shows the slices of the SRBFNN solution at \(z\!=\!0.5\). We can see that the range of the solution gradually stabilizes as \(\varepsilon\) tends to 0.05.
Table 12 and Fig. 12 show the \(L_{2}\) norm of the difference between \(u^{0.05}\) and the solutions at other scales. As expected, \(u^{\varepsilon}\) approaches \(u^{0.05}\), and the number of basis functions increases, as the scale \(\varepsilon\) tends to 0.05. These results illustrate that our method can achieve numerical convergence for three-dimensional multiscale elliptic equations.
## 5 Conclusion
This paper proposes a novel method to solve multiscale elliptic problems with oscillatory coefficients, such as coefficients with scale separation, discontinuity and multiple scales. Unlike general DNNs, SRBFNN has a simpler and more efficient architecture, which makes the network converge faster. Activation functions with good local approximation properties are important for multiscale problems. Furthermore, the \(\ell_{1}\) regularization can deal with the overfitting problem owing to the simplicity of SRBFNN. The experiments show that SRBFNN represents the solution with fewer basis functions and has better approximation accuracy than most other deep learning methods. Finally, it is found that there is a relation between the number of basis functions \(N\) in the final solution and the scale \(\varepsilon\) in multiscale elliptic problems: \(N\!=\!O(\varepsilon^{-\tau n})\), where \(n\) is the dimensionality and \(\tau\) is typically smaller than 1, which is better than classical numerical methods.
There are several interesting issues which deserve further consideration. First, the choice of the initial number of RBFs is essential, affecting the network's accuracy and training. Second, a wise selection of penalty parameters in different loss terms facilitates the training process and provides a better approximation. Another issue is to find a faster sparse optimization algorithm.
## Acknowledgments
This work is supported in part by National Key R&D Program of China via grant No. 2022YFA1005203 and Jiangsu Provincial Key Research and Development Program under Grant BE2022058-4.
Figure 12: The error between the solution when \(\varepsilon\!=\!0.05\) and the solutions with larger scales and the relationship between the number of RBFs and \(\varepsilon\). |
2308.10534 | Universal Approximation of Parametric Optimization via Neural Networks
with Piecewise Linear Policy Approximation | Parametric optimization solves a family of optimization problems as a
function of parameters. It is a critical component in situations where optimal
decision making is repeatedly performed for updated parameter values, but
computation becomes challenging when complex problems need to be solved in
real-time. Therefore, in this study, we present theoretical foundations on
approximating optimal policy of parametric optimization problem through Neural
Networks and derive conditions that allow the Universal Approximation Theorem
to be applied to parametric optimization problems by constructing piecewise
linear policy approximation explicitly. This study fills the gap on formally
analyzing the constructed piecewise linear approximation in terms of
feasibility and optimality and show that Neural Networks (with ReLU
activations) can be valid approximator for this approximation in terms of
generalization and approximation error. Furthermore, based on theoretical
results, we propose a strategy to improve feasibility of approximated solution
and discuss training with suboptimal solutions. | Hyunglip Bae, Jang Ho Kim, Woo Chang Kim | 2023-08-21T07:38:36Z | http://arxiv.org/abs/2308.10534v1 | Universal Approximation of Parametric Optimization via Neural Networks with Piecewise Linear Policy Approximation
###### Abstract
Parametric optimization solves a family of optimization problems as a function of parameters. It is a critical component in situations where optimal decision making is repeatedly performed for updated parameter values, but computation becomes challenging when complex problems need to be solved in real-time. Therefore, in this study, we present theoretical foundations on approximating the optimal policy of a parametric optimization problem through Neural Networks and derive conditions that allow the Universal Approximation Theorem to be applied to parametric optimization problems by explicitly constructing a piecewise linear policy approximation. This study fills the gap by formally analyzing the constructed piecewise linear approximation in terms of feasibility and optimality and shows that Neural Networks (with ReLU activations) can be a valid approximator for this approximation in terms of generalization and approximation error. Furthermore, based on the theoretical results, we propose a strategy to improve the feasibility of the approximated solution and discuss training with suboptimal solutions.
Neural Networks \(\cdot\) Universal Approximation \(\cdot\) Parametric Optimization \(\cdot\) The Maximum Theorem
## 1 Introduction
Consider a parametric optimization problem parameterized by \(\theta\)
\[\min_{x}\quad f(x,\theta)\quad\text{subject to}\quad x\in C(\theta).\]
where \(x\) is the decision variable, \(f\) is the objective function and \(C(\theta)\) is the feasible region for \(x\). Parametric optimization involves the process of solving a family of optimization problems as a function of parameters (Nikbakht et al. (2020), Still (2018)). Therefore, it is commonly applied when decisions are made repeatedly as the parameters change, while the fundamental problem structure remains constant over the entire duration. The parameters are often determined by the environment, over which decision makers typically have no control. Therefore, an optimization problem must be solved after observing the current state of the environment over and over. As solving an optimization problem requires a certain amount of computational time, it inevitably causes delays between repetitive decisions, especially for large-scale and complex optimization problems.
There are many application fields in which parametric optimization plays a significant role, including robotics (Khalaf and Richter (2016)), autonomous vehicle control (Darynia and Prokopiev (2021)), supply chain optimization (Bai and Liu (2016)), and energy system management (Wang et al. (2014)). For example, an autonomous vehicle needs optimal decisions that depend on changes in speed, road conditions, or the amount of traffic. Any delay in decision-making for autonomous vehicle control could lead to mishandling or even serious traffic accidents. Similarly, various decisions based on optimization are required for managing responsive and adaptive manufacturing systems. Sequential and interconnected systems magnify the importance of computation speed as well as of minimal error. Real-time decision making is also crucial in high frequency trading of financial assets. Delays in trade execution, even for a fraction of a second, can lead to significant losses for financial management firms. These applications clearly highlight the importance of latency issues in parametric optimization.
In situations where a family of optimization problems needs to be solved repeatedly, the following two characteristics can be observed. First, the structure of the optimization problems that are solved repeatedly is identical except for the input parameters, which means that the optimal policy depends only on the input parameters. Second, input parameters and their corresponding optimal policies are accumulated as optimization problems are solved for new input parameters. This second characteristic opens up the potential for supervised learning. Thus, it is intuitive and beneficial to approximate the mapping from input parameters to the optimal policy via Machine Learning (ML) techniques, as such an approximation is efficient and scalable.
Therefore, in this study, we focus on applying Neural Networks (NN) to universally approximate parametric optimization problems. We build theoretical foundations on universally approximating the direct mapping from input parameters to the optimal solution through NN, and derive conditions that allow the Universal Approximation Theorem (UAT) to be applied to parametric optimization problems by explicitly constructing a piecewise linear policy approximation. More specifically, we cast a single-valued continuous piecewise linear approximation of the optimal solution of parametric optimization, analyze it in terms of feasibility and optimality, and show that NN with ReLU activations can be a valid approximator in terms of generalization and approximation error. There are various works on the expressive power of NN for approximating functions; however, to the best of our knowledge, the existing literature lacks theoretical analysis on the applicability of UAT when the target function of NN results from a parametric optimization problem, and our study is the first to fill this gap.
### Related Work.
The power of NN as a universal approximator has been extensively validated over several decades. Pointedly, initial work on the UAT shows that, for any continuous function on a compact set, there exists a feedforward NN with a single hidden layer that uniformly approximates the function arbitrarily well (Hornik et al. (1989); Cybenko (1989); Funahashi (1989); Barron (1993)). Recently, there has been a growing interest in exploring the capability of NN for approximating functions, stemming from the works by Liang and Srikant (2016) and Yarotsky (2017); for more recent developments, see also Yarotsky (2018); Petersen and Voigtlaender (2018); Shaham et al. (2018); Shen et al. (2019); Daubechies et al. (2022); Lu et al. (2021). From a theoretical view, Telgarsky (2016) discussed the benefits of depth in NN, which led to various research on arbitrary depth (Lu et al. (2017); Hanin and Sellke (2017); Kidger and Lyons (2020); Sun et al. (2016); Daniely (2017)). There are also numerous extensions of the UAT that are derived for other networks (Baader et al. (2019); Lin and Jegelka (2018)) or are generalized to unbounded activation functions (Sonoda and Murata (2017)), discontinuous activation functions (Leshno et al. (1993)), non-compact domains (Kidger and Lyons (2020)), interval approximation (Wang et al. (2022)), distribution approximation (Lu and Lu (2020)) and invariant maps (Yarotsky (2022)).
Stability analysis of optimization problems plays an important role in control theory, which has been utilized in many applications such as electronic engineering (Wang et al. (2019)), biology (Motee et al. (2012)), and computer science (Bubnicki (2005)). Berge (1963) first proved the Maximum Theorem, which provides conditions for the continuity of the optimal value function and the upper hemicontinuity of the optimal policy with respect to its parameters. Since the Maximum Theorem only guarantees upper hemicontinuity of the optimal policy, this led to extensions studying conditions for the lower hemicontinuity of the optimal policy. Approaches in such literature are largely divided into two main types. Some provide conditions for lower hemicontinuity directly (Robinson and Day (1974); Zhao (1997); Kien (2005)). On the other hand, others provide conditions by limiting the structure to be linear (Bohm (1975); Wets (1985); Zhang and Liu (1990)), quadratic (Lee et al. (2006)), or quasiconvex (Terazono and A. Matani (2015)). Also, there are generalized versions of the Maximum Theorem (Walker (1979); Leininger (1984); Ausubel and Deneckere (1993)). We build on these analyses to bridge parametric optimization and NN.
Also, parallel to the development of various optimization methodologies for ML (Wright and Recht (2022), Boyd and Vandenberghe (2004), Sra et al. (2012)), there has been increasing interest from both the operations research and computer science communities in solving mathematical optimization problems using ML. The literature on learning parametric optimization shows two main approaches. The first approach is to learn the optimal solution directly from the input parameter by utilizing existing solutions as data (Lillicrap et al. (2015), Mnih et al. (2015), Vinyals et al. (2015), Dai et al. (2017), Li and Malik (2016), Donti et al. (2017)). Although this approach is simple and intuitive, it has the disadvantage of providing solutions that may violate critical constraints of the optimization problems. This prompted a second approach that applies ML indirectly as an intermediate step. Li et al. (2018) used Graphical Neural Networks to guide a parallelized tree search procedure that rapidly generates a large number of candidate solutions. Agrawal et al. (2020) applied ML to approximate the gradient of the solution of convex optimization. Misra et al. (2021) and Bertsimas and Stellato (2019) identified optimal active or tight constraint sets. For more recent works, see Bae et al. (2023), Dai et al. (2021), Dumouchelle et al. (2022), Kim et al. (2023). In this paper, we consider the situation where parametric optimization problems are approximated by NN directly, for utilization of UAT.
### Contribution.
In this paper, we derive conditions for UAT to hold for approximating parametric optimization problems. With our derivations, we can specify how to formulate the parametric optimization problem rather than naively hoping that NN will approximate the optimization problem well. The main contribution of the study can be summarized as below.
* We provide sufficient conditions for UAT to hold for the optimal policy of continuous parametric optimization problems, and these conditions are quite general in that we do not impose convexity or even quasi-convexity on the optimization problems. We only exploit the assumptions and results of the Maximum Theorem (Berge (1963)).
* We also address situations where these sufficient conditions are not satisfied. In particular, we define a sampling function and its stability, which make a good approximation possible even when the original problem does not satisfy the sufficient conditions. Under a stable sampling function, the original problem becomes a reduced problem in which all conditions of the main theorem are satisfied.
* We directly link the vast literature on NN with approximating optimization problems. There are many works linking specific structures of NN to UAT. However, to the best of our knowledge, the general connection between the structure of parametric optimization problems and UAT has scarcely been investigated from a theoretical point of view. Our research clarifies this connection by constructing a piecewise linear policy approximation for NN.
### Outline.
The remainder of the paper is organized as follows. Preliminaries for deriving our results are included in Section 2, and our main results with the piecewise linear policy approximation are presented in Section 3. Section 4 discusses the suitability of NN as an estimator of our policy approximation. Improving feasibility and training with suboptimal training data are discussed in Section 5. Finally, Section 6 concludes the paper.
## 2 Preliminaries.
In Section 2, we begin by introducing definitions and notations that are necessary for deriving our results. We also formally define the problem and list its assumptions.
### Definitions and Notations.
Parametric optimization takes the form,
\[\min_{x}\quad f(x,\theta)\quad\text{subject to}\quad x\in C(\theta). \tag{1}\]
where \(x\in X\subset\mathbb{R}^{n}\) is the decision variable, \(\theta\in\Theta\subset\mathbb{R}^{k}\) is the parameter, \(f:\mathbb{R}^{n}\times\mathbb{R}^{k}\rightarrow\mathbb{R}\) is the objective function and \(C:\mathbb{R}^{k}\rightrightarrows 2^{\mathbb{R}^{n}}\) is a multivalued mapping, or correspondence, representing the feasible region defined by a set of constraints parameterized by \(\theta\). Define the optimal value function \(f^{*}:\mathbb{R}^{k}\rightarrow\mathbb{R}\) by \(f^{*}(\theta)=\min_{x\in C(\theta)}f(x,\theta)\). We denote the optimal policy correspondence \(C^{*}:\mathbb{R}^{k}\rightrightarrows 2^{\mathbb{R}^{n}}\) by \(C^{*}(\theta)=\arg\min_{x\in C(\theta)}f(x,\theta)=\{x\in C(\theta)|f(x,\theta)=f^{*}(\theta)\}\). An optimal solution \(x^{*}(\theta)\) is an element of \(C^{*}(\theta)\).
For any vector \(x\in\mathbb{R}^{n}\), its norm \(\|x\|\) is defined as the Euclidean norm, \(\|x\|^{2}=\sum_{i=1}^{n}x_{i}^{2}\). For any non-empty set \(X\) of vectors in \(\mathbb{R}^{n}\), the \(\varepsilon\)-neighborhood is represented by \(\mathcal{B}_{\varepsilon}(X)=\{y\in\mathbb{R}^{n}|\;\exists x\in X\text{ s.t. }\|x-y\|<\varepsilon\}\). We define the stability of a correspondence based on the notion of continuity due to Hausdorff (2021). While there are different definitions of stability (Berge (1963); Hogan (1973)), Hausdorff's version is the most general (Zhao (1997)).
**Definition 2.1**.: _Let \(C\) be a correspondence from parameter space \(\Theta\subset\mathbb{R}^{k}\) to \(2^{\mathbb{R}^{n}}\). Then,_
1. \(C\) _is upper hemicontinuous at_ \(\theta_{0}\) _if_ \(\forall\varepsilon>0\)_,_ \(\exists\delta>0\) _s.t._ \(C(\theta)\subset\mathcal{B}_{\varepsilon}(C(\theta_{0})),\forall\theta\in \mathcal{B}_{\delta}(\theta_{0})\)__
2. \(C\) _is lower hemicontinuous at_ \(\theta_{0}\) _if_ \(\forall\varepsilon>0\)_,_ \(\exists\delta>0\) _s.t._ \(C(\theta_{0})\subset\mathcal{B}_{\varepsilon}(C(\theta)),\forall\theta\in \mathcal{B}_{\delta}(\theta_{0})\)__
3. \(C\) _is continuous at_ \(\theta_{0}\) _if_ \(C\) _is both upper and lower hemicontinuous at_ \(\theta_{0}\)__
UAT describes the capability of NN as an approximator. Although there are many variations, the key statement is that the class of functions expressed as NN is dense in the function space of interest. The most classical version of UAT was independently introduced by Hornik et al. (1989). Since we are utilizing the key findings of UAT, we summarize and restate this study as Theorem 2.1, where the term _function_ is written as a _single-valued function_ to distinguish it from a correspondence.
**Theorem 2.1** (Universal Approximation Theorem, restated from Hornik et al. (1989)).: _Let \(f\) be a continuous single-valued function on a compact set \(K\). Then, there exists a feedforward NN with a single hidden layer that uniformly approximates \(f\) to within an arbitrary \(\varepsilon>0\) on \(K\)._
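As an empirical illustration of Theorem 2.1, one convenient way to realize a single-hidden-layer approximator is to draw random inner weights and fit only the output layer by least squares; the sketch below is entirely illustrative (random-feature construction, arbitrary continuous target on the compact set \([0,1]\)) and typically attains a small uniform error:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda t: np.sin(2 * np.pi * t)            # continuous target on [0, 1]

m = 200                                        # hidden width
W, b = rng.normal(size=m) * 10.0, rng.normal(size=m) * 10.0
phi = lambda t: np.tanh(np.outer(t, W) + b)    # hidden layer, shape (n, m)

t_train = np.linspace(0.0, 1.0, 1000)
c, *_ = np.linalg.lstsq(phi(t_train), f(t_train), rcond=None)

t_test = np.linspace(0.0, 1.0, 5000)
print(np.max(np.abs(phi(t_test) @ c - f(t_test))))   # uniform error on [0, 1]
```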
### The Maximum Theorem.
The Maximum Theorem was presented in Berge (1963), which provides conditions under which the value function is continuous and the optimal policy correspondence is upper hemicontinuous for a parametric optimization problem given by (1). This theorem sets the basis for developing a connection between parametric optimization and UAT. We restate Berge's Maximum Theorem as Theorem 2.2.
**Theorem 2.2** (The Maximum Theorem, restated from Berge (1963)).: _Let \(f:X\times\Theta\rightarrow\mathbb{R}\) be a continuous function on the product \(X\times\Theta\), and \(C:\Theta\rightrightarrows X\) be a compact-valued correspondence s.t. \(C(\theta)\neq\emptyset,\forall\theta\in\Theta\). Define the \(f^{*}(\theta)\) and \(C^{*}(\theta)\) as \(f^{*}(\theta)=\min_{x\in C(\theta)}f(x,\theta)\) and \(C^{*}(\theta)=\operatorname*{arg\,min}_{x\in C(\theta)}f(x,\theta)=\{x\in C( \theta)|f(x,\theta)=f^{*}(\theta)\}\). If \(C\) is continuous (i.e. both upper and lower hemicontinuous) at \(\theta\), then \(f^{*}\) is continuous and \(C^{*}\) is upper hemicontinuous with non-empty and compact values._
### Problem Description.
Our goal is to find the conditions on \(f\) and \(C\) that allow UAT to be applied to approximating the optimal policy correspondence \(C^{*}\). Suppose the optimization problem given by (1) is formulated so that it changes stably as \(\theta\) varies. The key questions are as follows.
1. Is \(C^{*}\) a continuous single-valued function?
2. Are there bounds on errors from approximation, and do they converge to zero?
3. Is NN a suitable class for learning \(C^{*}\)?
Question (Q1) arises because UAT generally requires continuity and a single-valued function. We analyze (Q1) based on the Maximum Theorem (Berge (1963)), which is one of the most applied theorems in stability theory. To guarantee an acceptable approximation, we construct a target function for the optimal policy \(C^{*}\), which is a piecewise linear continuous function, and derive conditions under which the approximation error converges to zero. This addresses question (Q2). Finally, for question (Q3), we characterize the generalization error and approximation error of NN in learning the constructed piecewise linear continuous target function.
### Assumptions.
For problem (1), we assume that the objective function \(f(x,\theta)\) is continuous on the product \(X\times\Theta\), the feasible region \(C(\theta)\) is continuous on \(\Theta\), and \(C(\theta)\) is a non-empty compact set for each \(\theta\in\Theta\). We make assumptions on the training data for the optimal policy as well. A training example for parametric optimization is a pair of a parameter and its corresponding optimal solution \((\theta_{i},x^{*}(\theta_{i}))\). Let the training data be the set of examples, \(T=\{(\theta_{i},x^{*}(\theta_{i}))|i=1,\cdots,m\}\). Notice that there can be more than one optimal solution \(x^{*}(\theta_{i})\) for each \(\theta_{i}\). In practice, it is computationally expensive, if not impossible, to obtain the entire set of optimal solutions. In fact, it is difficult even to identify whether there are multiple optimal solutions or not. Therefore, to incorporate such practical aspects, we assume that there exists a solver that can extract exactly one element \(x^{*}(\theta_{i})\) from \(C^{*}(\theta_{i})\) for any given \(\theta_{i}\). However, it does not have control over the choice of
\(x^{*}(\theta_{i})\), so that the optimal solution is obtained in a random manner from \(C^{*}(\theta_{i})\). Moreover, the solver is not able to identify if \(C^{*}(\theta_{i})\) is a singleton or not. It is as if the training data is a discrete sample path from the correspondence \(C^{*}\) indexed by \(\{\theta_{i}|i=1,\cdots,m\}\).
## 3 Piecewise Linear Policy Approximation.
In Section 3, we present our main results on the piecewise linear policy approximation. Given the above assumptions on \(f\) and \(C\), Theorem 2.2 states that \(f^{*}\) is a continuous function, and \(C^{*}\) is a non-empty and compact-valued upper hemicontinuous correspondence. Thus, unlike the value function \(f^{*}\), for which universal approximation is guaranteed, \(C^{*}\) is not a single-valued function and is not even continuous, which requires additional treatment. Before making further steps, we state the following as a special case.
**Corollary 3.1**.: _If \(C^{*}\) is singleton for each \(\theta\), NN universally approximates \(C^{*}\)._
If the optimal solution for (1) is unique for every \(\theta\), the optimal policy is not a correspondence, but reduces to a single-valued function. As upper hemicontinuity implies continuity for a single-valued function, UAT can readily be applied to \(C^{*}\). While this is a special case with a strong assumption, Corollary 3.1 is the ideal case. In general, there can be multiple optimal solutions for some \(\theta\), and, thus, \(C^{*}\) is no longer a single-valued function. But under some conditions on \(C^{*}\), there is a possibility of finding a continuous function called a _selection_, as defined in Definition 3.1.
**Definition 3.1**.: _Given two sets \(X\) and \(Y\), let \(F\) be a correspondence from \(X\) to \(Y\). A function \(f:X\to Y\) is said to be a selection of \(F\), if \(\forall x\in X,f(x)\in F(x)\)._
**Proposition 3.1** (Existence of a continuous selection).: \(C^{*}\) _has a continuous selection if it is convex-valued and lower hemicontinuous._
Proof.: This is an immediate result of the selection theorem by Michael (1956).
There is a potential issue with Proposition 3.1. Some important classes of optimization, including linear programming problems, do not necessarily have a lower hemicontinuous optimal policy correspondence. To illustrate the issues in approximating \(C^{*}\), consider the following linear program with a parameter \(\theta\in[0,2]\),
**Example 3.1**.: \[\min_{x_{1},x_{2}}\quad-\theta x_{1}-x_{2}\quad\text{subject to}\quad x_{1}+x_{2} \leq 1,\quad x_{1}\geq 0,\quad x_{2}\geq 0.\]
The optimal policy correspondence for the problem given by Example 3.1 becomes
\[C^{*}(\theta)=\begin{cases}\{(0,1)\}&\text{for }\theta\in[0,1),\\ \{(x_{1},x_{2})|x_{1}+x_{2}=1,\;x_{1}\geq 0,x_{2}\geq 0\}&\text{for }\theta=1,\\ \{(1,0)\}&\text{for }\theta\in(1,2].\end{cases}\]
As illustrated in Figure 1, \(C^{*}(\theta)\) contains a jump at \(\theta=1\). Thus, it is evident that there is no continuous selection of \(C^{*}(\theta)\). This means that UAT cannot be directly applied, and we need to find a workaround to make it work.
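The jump is easy to reproduce numerically; a minimal sketch using scipy.optimize.linprog, which, like the solver assumed in the Assumptions above, returns a single optimal vertex per call:

```python
import numpy as np
from scipy.optimize import linprog

# Example 3.1 for a sweep of theta; the returned vertex flips at theta = 1
for theta in [0.9, 0.99, 1.0, 1.01, 1.1]:
    res = linprog(c=[-theta, -1.0], A_ub=[[1.0, 1.0]], b_ub=[1.0],
                  bounds=[(0, None), (0, None)])
    print(theta, np.round(res.x, 6))
```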
Thus, we drop the assumption that \(C^{*}\) is lower hemicontinuous and keep only the upper hemicontinuity of \(C^{*}\), which is guaranteed by Theorem 2.2. Since a continuous selection generally does not exist, we construct a new target function. For given training data \(T=\{(\theta_{i},x^{*}(\theta_{i}))|i=1,\cdots,m\}\), we attempt to estimate the optimal solution on the convex hull of \(\theta_{1},\theta_{2},\ldots,\theta_{m}\), denoted as \(Conv(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\})\). Furthermore, we consider a finite collection \(\mathcal{S}=\{S_{1},\ldots,S_{d}\}\) of subsets of \(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\}\subset\mathbb{R}^{k}\) such that
1. For each \(j\in\{1,\ldots,d\}\), \(|S_{j}|=k+1\) i.e. \(Conv(S_{j})\) is \(k\)-dimensional simplex.
2. \(Conv(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\})=\bigcup_{j=1}^{d}Conv(S_{j})\).
3. For any non-empty subset \(J\subseteq\{1,2,\ldots,d\}\), \(\bigcap_{j\in J}Conv(S_{j})=Conv(\bigcap_{j\in J}S_{j})\).
Such a collection is called a triangulation (Lee and Santos (2017)). One way of constructing it is the lexicographic triangulation introduced by Sturmfels (1991). Given such a collection \(\mathcal{S}\), for any \(\theta\in Conv(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\})\), there exists an index \(j_{\theta}\in\{1,\ldots,d\}\) such that \(\theta=\lambda_{1}^{j_{\theta}}\theta_{1}^{j_{\theta}}+\ldots+\lambda_{k+1}^{j_{\theta}}\theta_{k+1}^{j_{\theta}}\), where \(\{\theta_{1}^{j_{\theta}},\ldots,\theta_{k+1}^{j_{\theta}}\}=S_{j_{\theta}}\in\mathcal{S}\) and \(\lambda_{1}^{j_{\theta}},\ldots,\lambda_{k+1}^{j_{\theta}}\geq 0,\lambda_{1}^{j_{\theta}}+\ldots+\lambda_{k+1}^{j_{\theta}}=1\). Our key approach is to approximate \(x^{*}(\theta)\) as \(\lambda_{1}^{j_{\theta}}x^{*}(\theta_{1}^{j_{\theta}})+\ldots+\lambda_{k+1}^{j_{\theta}}x^{*}(\theta_{k+1}^{j_{\theta}})\). This approximation is single-valued by construction, and is a continuous function by the following theorem, so that UAT can be applied.
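A hedged sketch of this construction: the paper builds lexicographic triangulations, while the code below uses scipy's Delaunay triangulation as a convenient stand-in that also satisfies conditions (1)-(3) for points in general position (it requires \(k\geq 2\); for \(k=1\) a sorted partition of the samples plays the same role):

```python
import numpy as np
from scipy.spatial import Delaunay

def policy_hat(tri, x_stars, theta):
    # Piecewise linear policy of Theorem 3.1 on Conv({theta_1,...,theta_m});
    # tri = Delaunay(thetas) with thetas of shape (m, k), x_stars of (m, n).
    k = tri.points.shape[1]
    s = int(tri.find_simplex(theta))          # index j_theta
    if s < 0:
        raise ValueError("theta outside Conv({theta_1, ..., theta_m})")
    T = tri.transform[s]                      # affine map to barycentric coords
    lam = T[:k] @ (np.asarray(theta) - T[k])  # lambda_1 .. lambda_k
    lam = np.append(lam, 1.0 - lam.sum())     # lambda_{k+1}
    return lam @ x_stars[tri.simplices[s]]
```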
**Theorem 3.1** (Continuity of Policy Approximation).: _Consider a finite collection \(\mathcal{S}=\{S_{1},\ldots,S_{d}\}\) of subset of \(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\}\subset\mathbb{R}^{k}\) such that_
* _For each_ \(j\in\{1,\ldots,d\}\)_,_ \(|S_{j}|=k+1\) _i.e._ \(Conv(S_{j})\) _is_ \(k\)_-dimensional simplex._
* \(Conv(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\})=\bigcup_{j=1}^{d}Conv(S_{j})\)_._
* _For any non-empty subset_ \(J\subseteq\{1,2,\ldots,d\}\)_,_ \(\bigcap_{j\in J}Conv(S_{j})=Conv(\bigcap_{j\in J}S_{j})\)_._
_Denote \(S_{j}=\{\theta_{1}^{j},\ldots,\theta_{k+1}^{j}\}\). For any \(\theta\in Conv(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\})\), let \(j_{\theta}\) be an index of \(\mathcal{S}\) such that \(\theta\in Conv(S_{j_{\theta}})\). Then, the function \(\hat{x}:Conv(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\})\rightarrow\mathbb{R}^{n}\)_
\[\hat{x}(\theta)=\lambda_{1}^{j_{\theta}}x^{*}(\theta_{1}^{j_{\theta}})+\ldots+ \lambda_{k+1}^{j_{\theta}}x^{*}(\theta_{k+1}^{j_{\theta}}).\]
_is continuous, where \(\lambda_{1}^{j_{\theta}},\ldots,\lambda_{k+1}^{j_{\theta}}\) are the weights of the convex combination for \(\theta\), i.e., \(\theta=\lambda_{1}^{j_{\theta}}\theta_{1}^{j_{\theta}}+\ldots+\lambda_{k+1}^{j_{\theta}}\theta_{k+1}^{j_{\theta}}\)._
Proof.: We first prove Lemma 3.1.
**Lemma 3.1**.: _Let \(X_{1},X_{2},\ldots,X_{d}\) be closed subsets of \(\mathbb{R}^{k}\). Suppose that \(f_{j}:X_{j}\rightarrow\mathbb{R}^{n},j=1,\ldots,d\) are continuous functions such that for any nonempty subset \(J\subseteq\{1,2,\ldots,d\}\) and for any \(j_{1},j_{2}\in J\)_
\[f_{j_{1}}\big{|}_{\bigcap_{j\in J}X_{j}}=f_{j_{2}}\big{|}_{\bigcap_{j\in J}X_{j}}\]
_Then, the function_
\[f:\bigcup_{j=1}^{d}X_{j}\rightarrow\mathbb{R}^{n},\quad x\rightarrow\begin{cases} f_{1}(x),&x\in X_{1}\\ f_{2}(x),&x\in X_{2}\\ \vdots\\ f_{d}(x),&x\in X_{d}\end{cases}\]
Figure 1: Optimal policy correspondence of Example 3.1 on \(x_{1}+x_{2}=1\)
is also continuous._
Proof of Lemma 3.1.: Let \(A\subseteq\mathbb{R}^{n}\) be closed in \(\mathbb{R}^{n}\). Because \(f_{j}\) is continuous, \(f_{j}^{-1}(A)\) is closed in \(X_{j}\). More formally, there is \(Y_{j}\subseteq\mathbb{R}^{k}\) such that \(Y_{j}\) is closed in \(\mathbb{R}^{k}\) and \(f_{j}^{-1}(A)=Y_{j}\cap X_{j}\). Then, we have
\[f^{-1}(A)=\bigcup_{j=1}^{d}f_{j}^{-1}(A)=\bigcup_{j=1}^{d}(Y_{j}\cap X_{j})=(X _{1}\cup\ldots\cup X_{d})\cap(\bigcap_{Z_{j}\in\{X_{j},Y_{j}\},j=2,\ldots,d}(Y_ {1}\cup Z_{2}\cup\ldots\cup Z_{d})).\]
It follows from the fact that \(Y_{j},X_{j}\) are closed in \(\mathbb{R}^{k}\) that \(B:=\bigcap_{Z_{j}\in\{X_{j},Y_{j}\},j=2,\ldots,d}(Y_{1}\cup Z_{2}\cup\ldots\cup Z_{d})\) is closed in \(\mathbb{R}^{k}\). Hence, \(f^{-1}(A)=(X_{1}\cup\ldots\cup X_{d})\cap B\) is closed in \((X_{1}\cup\ldots\cup X_{d})\). Thus, \(f\) is continuous.
Now we prove Theorem 3.1. Define the function \(\hat{x}_{j}:Conv(S_{j})\to\mathbb{R}^{n}\) as \(\hat{x}_{j}(\theta)=\lambda_{1}^{j}x^{*}(\theta_{1}^{j})+\ldots+\lambda_{k+1}^{j}x^{*}(\theta_{k+1}^{j})\) for \(\theta=\lambda_{1}^{j}\theta_{1}^{j}+\ldots+\lambda_{k+1}^{j}\theta_{k+1}^{j}\). Next, we prove that the function \(\hat{x}_{j}\) is continuous. For \(\theta=\lambda_{1}^{j}\theta_{1}^{j}+\ldots+\lambda_{k+1}^{j}\theta_{k+1}^{j}\in Conv(S_{j})\), let \(y_{j}(\theta)=(\lambda_{1}^{j},\ldots,\lambda_{k+1}^{j})\). Then, \(y_{j}(\theta)\) is the inverse of a linear function (hence itself linear) and \(\hat{x}_{j}(\theta)=y_{j}(\theta)^{T}(x^{*}(\theta_{1}^{j}),\ldots,x^{*}(\theta_{k+1}^{j}))\) is a linear function of \(y_{j}(\theta)\). Thus, \(\hat{x}_{j}(\theta)\) is a composition of two linear functions, which is continuous. Also, for any nonempty subset \(J\subseteq\{1,2,\ldots,d\}\) and for any \(j_{1},j_{2}\in J\),
\[\hat{x}_{j_{1}}\big{|}\bigcap_{j\in J}Conv(S_{j})=\hat{x}_{j_{2}}\big{|} \bigcap_{j\in J}Conv(S_{j}).\]
since \(\bigcap_{j\in J}Conv(S_{j})=Conv(\bigcap_{j\in J}S_{j})\). Thus, by Lemma 3.1, the function \(\hat{x}\)
\[\hat{x}:Conv(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\})\to\mathbb{R}^{n}, \quad\theta\to\begin{cases}\hat{x}_{1}(\theta),&\theta\in Conv(S_{1})\\ \hat{x}_{2}(\theta),&\theta\in Conv(S_{2})\\ \vdots\\ \hat{x}_{d}(\theta),&\theta\in Conv(S_{d}).\end{cases}\]
is a continuous function.
An inherent question regarding the function \(\hat{x}\) pertains to the degree of approximation accuracy. We first remark that there exist parametric optimization problems for which the convergence of the errors of \(\hat{x}\) is not guaranteed, since \(x^{*}(\theta)\) is arbitrarily chosen from \(C^{*}(\theta)\).
**Example 3.2**.: \[\min_{x}\quad f(x,\theta)=(x-1)^{2}(x-3)^{2}\quad\text{subject to}\quad x\in C( \theta)=\{1,3\}.\]
For the trivial parametric optimization problem given by Example 3.2, the optimal policy correspondence is
\[C^{*}(\theta)=\{1,3\},\forall\theta\]
In this case, our construction always has suboptimality of 1 for some \(\theta\) because \(|f^{*}((\theta_{1}+\theta_{2})/2)-f(2,\ (\theta_{1}+\theta_{2})/2)|=1\) if \((\theta_{1},1)\) and \((\theta_{2},3)\) are sampled with \(\theta_{1}<\theta_{2}\).
In order to determine the suitability of using \(\hat{x}\) for approximating specific parametric optimization problems, we establish metrics to assess the performance of the target function \(\hat{x}\). These metrics are referred to as \(\varepsilon\)-_suboptimality_ and \(\varepsilon\)-_infeasibility_. We present our main development in Theorem 3.2, which states that a constructed function is \(\varepsilon\)-_infeasible_ and \(\varepsilon\)-_suboptimal_ solution for sufficiently dense training data under certain conditions. While this adds some restrictions, it allows applying UAT to parametric optimization and these two conditions can be lifted with a stable sampler, which we further discuss in the subsequent sections.
**Definition 3.2**.: _Let \(f\) be the objective function and \(C\) be the correspondence in the formulation given by (1). Then, for \(\varepsilon>0\),_
1. \(\hat{x}(\theta)\) _is_ \(\varepsilon\)_-infeasible solution if_ \(\hat{x}(\theta)\in\mathcal{B}_{\varepsilon}(C(\theta))\)_._
2. \(\hat{x}(\theta)\) _is_ \(\varepsilon\)_-suboptimal solution if_ \(|f(\hat{x}(\theta),\theta)-f(x^{*}(\theta),\theta)|<\varepsilon\)_._
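For Example 3.1 both metrics can be computed directly; a hedged NumPy sketch (the closed-form projection onto \(\{x\geq 0,\ x_{1}+x_{2}\leq 1\}\), used here to measure the distance to \(C(\theta)\), is ours):

```python
import numpy as np

def infeasibility(x):
    # Distance from x to C = {x >= 0, sum(x) <= 1}: the projection is
    # max(x - tau, 0) with tau = 0 if that is feasible, else sum(...) = 1.
    x = np.asarray(x, dtype=float)
    p = np.maximum(x, 0.0)
    if p.sum() <= 1.0:
        return np.linalg.norm(x - p)
    u = np.sort(p)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - 1.0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.linalg.norm(x - np.maximum(p - tau, 0.0))

def suboptimality(x_hat, theta):
    f_star = -max(theta, 1.0)                 # optimal value of Example 3.1
    return abs((-theta * x_hat[0] - x_hat[1]) - f_star)
```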
**Theorem 3.2** (Convergence Property of Piecewise Linear Policy Approximation).: _Suppose that \(f\), \(C\) satisfy all conditions for Theorem 2.2. Define a training data set \(T=\big{\{}(\theta_{i},x^{*}(\theta_{i}))|\theta_{i}\in\Theta,i=1,\cdots,m\big{\}}\) where \(x^{*}(\theta_{i})\) is an arbitrarily chosen point from \(C^{*}(\theta_{i})\). For \(\theta\in Conv(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\})\), define \(d(S_{j_{\theta}})=\max\{\|p-q\|:p,q\in Conv(S_{j_{\theta}})\}\) where \(S_{j_{\theta}}\) is an element of the finite collection \(\mathcal{S}\) in Theorem 3.1. Then, the function \(\hat{x}(\theta)\) in Theorem 3.1 satisfies the following._
* _If_ \(Conv(C^{*}(\theta))\subseteq C(\theta)\)_, for given_ \(\varepsilon>0\)_,_ \(\exists\delta>0\) _s.t. if_ \(d(S_{j})<\delta\)_,_ \(\hat{x}(\theta)\) _is_ \(\varepsilon\)_-infeasible solution._
* _If_ \(f(x,\theta)=f^{*}(\theta),\forall x\in Conv(C^{*}(\theta))\)_, for given_ \(\varepsilon>0\)_,_ \(\exists\delta>0\) _s.t. if_ \(d(S_{j})<\delta\)_,_ \(\hat{x}(\theta)\) _is_ \(\varepsilon\)_-suboptimal solution._
Proof.: (a) Assume \(Conv(C^{*}(\theta))\subseteq C(\theta)\) and let \(\varepsilon>0\). Since \(f\), \(C\) satisfy all conditions for Theorem 2.2, \(C^{*}\) is upper hemicontinuous. Thus, a set
\[\Delta_{\theta}=\{\delta>0:\|\theta-\theta^{\prime}\|<\delta\Longrightarrow C ^{*}(\theta^{\prime})\subseteq\mathcal{B}_{\varepsilon}(C^{*}(\theta))\}.\]
is not empty.
Define \(\delta_{\theta}=\sup\Delta_{\theta}\). Choose \(\delta=\delta_{\theta}\). If \(d(S_{j_{\theta}})<\delta\), \(\|\theta-\theta_{l}^{j_{\theta}}\|<\delta\) i.e. \(C^{*}(\theta_{l}^{j_{\theta}})\subseteq\mathcal{B}_{\varepsilon}(C^{*}( \theta)),\forall l=1,\ldots,k+1\). Then, with the assumption and \(C^{*}(\theta)\subseteq Conv(C^{*}(\theta))\),
\[x^{*}(\theta_{l}^{j_{\theta}})\in C^{*}(\theta_{l}^{j_{\theta}})\subseteq \mathcal{B}_{\varepsilon}(C^{*}(\theta))\subseteq\mathcal{B}_{\varepsilon}( Conv(C^{*}(\theta)))\subseteq\mathcal{B}_{\varepsilon}(C(\theta)),\forall l=1,2, \ldots,k+1.\]
Thus, \(x^{*}(\theta_{l}^{j_{\theta}})\in\mathcal{B}_{\varepsilon}(Conv(C^{*}(\theta)) )\subseteq\mathcal{B}_{\varepsilon}(C(\theta))\). Note that, since \(Conv(C^{*}(\theta))\) is convex set, \(\mathcal{B}_{\varepsilon}(Conv(C^{*}(\theta)))\) is also convex set. This means the convex combination
\[\lambda_{1}^{j_{\theta}}x^{*}(\theta_{1}^{j_{\theta}})+\ldots+\lambda_{k+1}^{ j_{\theta}}x^{*}(\theta_{k+1}^{j_{\theta}}).\]
is in \(\mathcal{B}_{\varepsilon}(Conv(C^{*}(\theta)))\). Thus, \(\hat{x}(\theta)\in B_{\varepsilon}(C(\theta))\).
(b) We first prove Lemma 3.2.
**Lemma 3.2**.: _If \(A\subseteq\mathbb{R}^{n}\) is a compact set, \(Conv(A)\) is also a compact set._
Proof of Lemma 3.2.: The Caratheodory Theorem (Danninger-Uchida (2009)) states that each element of the convex hull of \(A\) is a convex combination of \(n+1\) elements of \(A\). By defining \(Simp(n)=\{(w_{0},\ldots,w_{n}):w_{j}\geq 0,w_{0}+\ldots+w_{n}=1\}\) and \(F(a_{0},\ldots,a_{n};\;w_{0},\ldots,w_{n})=\sum_{i}w_{i}a_{i}\), \(Conv(A)\) can be expressed as the image of the compact set \(A^{n+1}\times Simp(n)\) under the continuous map \(F\), and so it is compact.
Now assume \(f(x,\theta)=f^{*}(\theta),\forall x\in Conv(C^{*}(\theta))\) and let \(\varepsilon>0\). We first show there exists \(\delta^{\prime}>0\) such that,
\[\inf_{x^{*}\in Conv(C^{*}(\theta))}\|x^{\prime}-x^{*}\|<\delta^{\prime} \Longrightarrow|f(x^{\prime},\theta)-f^{*}(\theta)|<\varepsilon,\forall x^{ \prime}\in\mathcal{B}_{\varepsilon}(C(\theta)).\]
Since \(C\) is compact-valued correspondence, \(\mathcal{B}_{\varepsilon}(C(\theta))\times\theta\) is compact set. Thus, \(f\) is uniformly continuous on \(\mathcal{B}_{\varepsilon}(C(\theta))\times\theta\). Thus, there exist \(\delta^{\prime\prime}>0\) such that,
\[\|y-z\|<\delta^{\prime\prime}\Longrightarrow|f(y,\theta)-f(z,\theta)|< \varepsilon,\forall y,z\in\mathcal{B}_{\varepsilon}(C(\theta)).\]
Choose \(\delta^{\prime}=\delta^{\prime\prime}\). Note that \(C^{*}\) is a compact-valued correspondence by Theorem 2.2, so \(Conv(C^{*}(\theta))\) is also a compact set by Lemma 3.2. Hence the infimum is attained, i.e.,
\[x^{*}_{min}=\arg\min_{x^{*}\in Conv(C^{*}(\theta))}\|x^{\prime}-x^{*}\|\]
is in \(Conv(C^{*}(\theta))\). Now we have
\[\inf_{x^{*}\in Conv(C^{*}(\theta))}\|x^{\prime}-x^{*}\|<\delta^{\prime} \Longrightarrow\|x^{\prime}-x^{*}_{min}\|<\delta^{\prime}=\delta^{\prime \prime}\Longrightarrow|f(x^{\prime},\theta)-f(x^{*}_{min},\theta)|<\varepsilon.\]
Since \(x^{*}_{min}\in Conv(C^{*}(\theta))\), \(f(x^{*}_{min},\theta)=f^{*}(\theta)\). Thus, \(|f(x^{\prime},\theta)-f^{*}(\theta)|<\varepsilon\).
Now, we prove part (b) of the theorem. From the above statement, there exists \(\delta^{\prime}>0\) such that,
\[\inf_{x^{*}\in Conv(C^{*}(\theta))}\|x^{\prime}-x^{*}\|<\delta^{\prime} \Longrightarrow|f(x^{\prime},\theta)-f^{*}(\theta)|<\varepsilon,\forall x^{ \prime}\in\mathcal{B}_{\varepsilon}(C(\theta)).\]
Also, since \(f\), \(C\) satisfy all conditions for Theorem 2.2, \(C^{*}\) is upper hemicontinuous. Thus, a set
\[\Delta_{\theta}=\{\delta>0:\|\theta-\theta^{\prime}\|<\delta\Longrightarrow C^{*}(\theta^{\prime})\subseteq\mathcal{B}_{\delta^{\prime}}(C^{*}(\theta))\}.\]
is not empty.
Define \(\delta_{\theta}=\sup\Delta_{\theta}\). Choose \(\delta=\delta_{\theta}\). If \(d(S_{j_{\theta}})<\delta\), then \(\|\theta-\theta_{l}^{j_{\theta}}\|<\delta\), i.e., \(C^{*}(\theta_{l}^{j_{\theta}})\subseteq\mathcal{B}_{\delta^{\prime}}(C^{*}(\theta)),\forall l=1,\ldots,k+1\). Then, with \(C^{*}(\theta)\subseteq Conv(C^{*}(\theta))\),
\[x^{*}(\theta_{l}^{j_{\theta}})\in C^{*}(\theta_{l}^{j_{\theta}})\subseteq\mathcal{B}_{\delta^{\prime}}(C^{*}(\theta))\subseteq\mathcal{B}_{\delta^{\prime}}(Conv(C^{*}(\theta))),\forall l=1,\ldots,k+1.\]
Since \(Conv(C^{*}(\theta))\) is a convex set, \(\mathcal{B}_{\delta^{\prime}}(Conv(C^{*}(\theta)))\) is also a convex set. This means the convex combination
\[\hat{x}(\theta)=\lambda_{1}^{j_{\theta}}x^{*}(\theta_{1}^{j_{\theta}})+\ldots+\lambda_{k+1}^{j_{\theta}}x^{*}(\theta_{k+1}^{j_{\theta}}).\]
is in \(\mathcal{B}_{\delta^{\prime}}(Conv(C^{*}(\theta)))\). Also, note that \(\hat{x}(\theta)\in\mathcal{B}_{\varepsilon}(C(\theta))\) from part (a), since the assumption in (b) implies \(Conv(C^{*}(\theta))\subseteq C^{*}(\theta)\subseteq C(\theta)\). Accordingly, \(|f(\hat{x}(\theta),\theta)-f^{*}(\theta)|<\varepsilon\).
Theorem 3.2 shows that if problem (1) satisfies the sufficient conditions, the errors in feasibility and optimality of our piecewise linear policy approximation \(\hat{x}(\theta)\) converge to zero. For example, suppose that the training data \(T=\{(\theta_{i},x^{*}(\theta_{i}))|\theta_{i}\in\Theta,\ i=1,\cdots,m\}\) is sampled from Example 3.1. Without loss of generality, suppose that \(\theta_{1}\leq\theta_{2}\leq\ldots\leq\theta_{m}\). Then, the finite collection \(\mathcal{S}\) in Theorem 3.1 can be constructed as \(\mathcal{S}=\{S_{j},j=1,\ldots,m-1\}\) where \(S_{j}=\{\theta_{j},\theta_{j+1}\}\). Let \(l\) be an index such that \(\theta_{l}\leq 1\leq\theta_{l+1}\). Then \(\hat{x}(\theta)\) is
\[\hat{x}(\theta)=\begin{cases}(0,\ 1)&\theta\leq\theta_{l},\\ (\frac{\theta-\theta_{l}}{\theta_{l+1}-\theta_{l}},\frac{\theta_{l+1}-\theta}{ \theta_{l+1}-\theta_{l}})&\theta_{l}<\theta<\theta_{l+1},\\ (1,\ 0)&\theta_{l+1}\leq\theta.\end{cases}\]
Note that \(\hat{x}(\theta)\) is a feasible solution, and thus an \(\varepsilon\)-infeasible solution. We want to further show that \(\hat{x}(\theta)\) is an \(\varepsilon\)-suboptimal solution. It holds that \(f^{*}(\theta)=f(\hat{x}(\theta),\theta)\) in three cases: \(\theta\leq\theta_{l}\), \(\theta\geq\theta_{l+1}\), or \(\theta=1\). For \(\theta_{l}<\theta<1\), \(\varepsilon\)-suboptimality can be shown as below if we choose \(\delta<4\varepsilon\):
\[f^{*}(\theta)-f(\hat{x}(\theta),\theta)=1-\frac{\theta(\theta-\theta_{l})}{ \theta_{l+1}-\theta_{l}}-\frac{\theta_{l+1}-\theta}{\theta_{l+1}-\theta_{l}}\]
\[=\frac{(\theta-\theta_{l})(1-\theta)}{\theta_{l+1}-\theta_{l}}<\frac{(\theta- \theta_{l})(1-\theta)}{1-\theta_{l}}\leq\frac{1-\theta_{l}}{4}<\frac{\delta}{4 }<\varepsilon.\]
Similarly, the same result can be derived for \(1<\theta<\theta_{l+1}\). Thus, \(\hat{x}(\theta)\) is an \(\varepsilon\)-_suboptimal solution_.
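For scalar \(\theta\), the simplices \(S_{j}\) reduce to the intervals between consecutive sorted training parameters, so \(\hat{x}(\theta)\) is componentwise linear interpolation of the sampled solutions. The following is a minimal NumPy sketch of this construction; the sample data mimic the example above, and all names and values are illustrative.

```python
import numpy as np

def fit_policy(thetas, xs):
    """Piecewise linear policy x_hat built from pairs (theta_i, x*(theta_i)).

    For scalar theta, the simplices S_j of Theorem 3.1 are the intervals
    [theta_j, theta_{j+1}], so x_hat is componentwise linear interpolation
    restricted to Conv({theta_1, ..., theta_m}).
    """
    order = np.argsort(thetas)
    thetas = np.asarray(thetas, dtype=float)[order]
    xs = np.asarray(xs, dtype=float)[order]

    def x_hat(theta):
        # np.interp clamps outside the sampled range, matching the theorem's
        # restriction of theta to the convex hull of the training parameters.
        return np.array([np.interp(theta, thetas, xs[:, j])
                         for j in range(xs.shape[1])])
    return x_hat

# Data in the spirit of the example above (illustrative values):
# x*(theta) = (0, 1) for theta < 1 and (1, 0) for theta > 1.
thetas = [0.0, 0.5, 0.9, 1.1, 2.0]
xs = [[0, 1], [0, 1], [0, 1], [1, 0], [1, 0]]
policy = fit_policy(thetas, xs)
print(policy(1.0))  # [0.5 0.5], the convex combination of neighboring optima
```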
### General Case with a Stable Sampling Function.
We have shown that the proposed piecewise linear policy approximation can be a reasonable solution under some assumptions on \(f\) and \(C\) for dense enough training data. In this subsection, a stable sampling function is designed to tackle a broader range of parametric optimization problems. Note that Example 3.2 does not satisfy the conditions for Theorem 3.2, since \(Conv(C^{*}(\theta))\nsubseteq C^{*}(\theta)\) and \(Conv(C^{*}(\theta))\nsubseteq C(\theta)\). But even in this case, it may be possible to apply Theorem 3.2 by sampling data from certain parts of \(C^{*}(\theta)\), which has practical implications since decision makers often understand the nature of their parametric optimization problems. Based on this idea, we define the notion and stability of a sampling function.
**Definition 3.3**.: _Define a sampling function \(s:\Theta\rightarrow\bigcup_{\theta}C^{*}(\theta)\) by \(s(\theta)=x^{*}(\theta)\). The sampling function \(s\) is stable with respect to \(C^{*}\) if there exists a non-empty, compact, and convex-valued upper hemicontinuous correspondence \(\overline{C^{*}}\) such that \(s(\theta)\in\overline{C^{*}}(\theta)\subseteq C^{*}(\theta)\ \forall\theta\)._
Note that a stable sampling function does not always exist; its existence depends on the formulation of the parametric optimization problem. For example, any sampling function in Example 3.1 is stable, since \(C^{*}(\theta)\) is convex-valued. In Example 3.2, a sampling function that samples only 1 or only 3 is stable: choose \(\overline{C^{*}}\) as \(\{1\}\) or \(\{3\}\), respectively. Consider the following modified version of Example 3.2.
**Example 3.3**.: \[\min_{x}\quad f(x,\theta)=(x-1)^{2}(x-3)^{2}-\theta(x-3)\quad\text{subject to}\quad C(\theta)=\{1,3\}.\]
The optimal policy correspondence for the problem given by Example 3.3 is
\[C^{*}(\theta)=\begin{cases}\{1\}&\theta<0,\\ \{1,\ 3\}&\theta=0,\\ \{3\}&\theta>0\end{cases}\]
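Since the feasible set of Example 3.3 is the finite set \(\{1,3\}\), this correspondence can be checked by direct evaluation; a small sketch (the tolerance is an illustrative choice):

```python
def f(x, theta):
    """Objective of Example 3.3."""
    return (x - 1) ** 2 * (x - 3) ** 2 - theta * (x - 3)

def C_star(theta, tol=1e-12):
    """Optimal set over the finite feasible set C(theta) = {1, 3}."""
    vals = {x: f(x, theta) for x in (1, 3)}
    best = min(vals.values())
    return {x for x, v in vals.items() if v <= best + tol}

print(C_star(-0.5), C_star(0.0), C_star(0.5))  # {1} {1, 3} {3}
```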
Note that there are two convex-valued sub-correspondences of \(C^{*}(\theta)\), \(C^{1}(\theta)\) and \(C^{2}(\theta)\), neither of which is upper hemicontinuous.
\[C^{1}(\theta)=\begin{cases}\{1\}&\theta<0,\\ \{1\}&\theta=0,\\ \{3\}&\theta>0\end{cases}\quad C^{2}(\theta)=\begin{cases}\{1\}&\theta<0,\\ \{3\}&\theta=0,\\ \{3\}&\theta>0\end{cases}\]
The advantage of a stable sampling function is that it makes Theorem 3.2 applicable even to parametric optimization problems that do not satisfy the conditions \(Conv(C^{*}(\theta))\subseteq C(\theta)\) and \(f(x,\theta)=f^{*}(\theta),\forall x\in Conv(C^{*}(\theta))\). The following theorem shows that these two conditions become trivial if \(C^{*}\) is a convex-valued correspondence.
**Theorem 3.3**.: _If \(C^{*}\) is a convex-valued correspondence, the following hold._
1. \(Conv(C^{*}(\theta))\subseteq C(\theta)\)__
2. \(f(x,\theta)=f^{*}(\theta),\forall x\in Conv(C^{*}(\theta))\)__
Proof.: Since \(C^{*}(\theta)\) is a convex set, \(Conv(C^{*}(\theta))=C^{*}(\theta)\). With this fact, we have
(a) \(Conv(C^{*}(\theta))=C^{*}(\theta)\subseteq C(\theta)\).
(b) \(Conv(C^{*}(\theta))=C^{*}(\theta)\) implies \(f(x,\theta)=f^{*}(\theta),\forall x\in Conv(C^{*}(\theta))\), since every element of \(C^{*}(\theta)\) attains the optimal value.
If we have a stable sampling function \(s\), \(\overline{C^{*}}(\theta)\) can be considered a substitute for \(C^{*}(\theta)\). Since \(\overline{C^{*}}(\theta)\) is convex-valued, the sufficient conditions \(Conv(\overline{C^{*}}(\theta))\subseteq C(\theta)\) and \(f(x,\theta)=f^{*}(\theta),\forall x\in Conv(\overline{C^{*}}(\theta))\) become redundant by Theorem 3.3. Furthermore, since \(\overline{C^{*}}(\theta)\) is compact-valued and upper hemicontinuous, the same arguments as in the proof of Theorem 3.2 apply to \(\overline{C^{*}}(\theta)\), so Theorem 3.2 extends to more general parametric optimization problems.
## 4 Suitability of Neural Networks.
In the previous section, we focused on constructing and evaluating the target function. We constructed the target function as a continuous piecewise linear single-valued function and characterized its quality in terms of \(\varepsilon\)-infeasibility and \(\varepsilon\)-suboptimality. In this section, we discuss the capability of NNs in approximating the target function. It is also possible to evaluate the target function directly from the training data. However, for a new parameter, this requires finding the \(k+1\) points and the corresponding weights of the convex combination. This process incurs considerable computational cost as the dimension of the parameter space increases, so it is not suitable for real-time decision making. On the other hand, the forward pass of an NN requires no additional processing time for finding the \(k+1\) points once it is trained in advance. Then, is an NN really a good estimator for our piecewise linear policy approximation? We answer this question from two aspects: generalization error (GE) and approximation error (AE).
In order to define and examine the GE and AE of NNs, it is necessary to make choices regarding the architecture and loss function. This paper focuses on a specific type of neural network architecture, namely a feed-forward network with rectified linear unit (ReLU) activations. The ReLU activation function is defined as \(ReLU(h)=\max(h,0)\). The layers of a ReLU network, denoted as \(h^{l},\forall l=1,\dots,L\), are defined recursively using the following relation: \(h^{l}=ReLU(W^{l}h^{l-1}+b)\), where \(h^{0}=ReLU(b)\). Here, \(W^{l}\) represents the weight matrices associated with each layer, and they satisfy the constraint that their infinity norms, denoted as \(\|W^{l}\|_{\infty}\), are bounded by \(B_{l}\). Additionally, we assume that \(\|W^{1}\|\) is bounded by \(B_{1}\). To sum up, the set of functions that can be represented by the output at depth \(L\) in a ReLU network is denoted as \(\mathcal{H}^{L}=\{h^{L}(\{W^{l}\}_{l=1}^{L}):\|W^{l}\|_{\infty}\leq B_{l},l\in[1,L],\|W^{1}\|\leq B_{1}\}\). For the loss function of the NN, we prove the following theorem.
**Theorem 4.1**.: _Let \(h(\theta)\) be ReLU NN and let \(\hat{x}(\theta)\) be a constructed piecewise linear approximation._
1. \(\forall\varepsilon>0,\exists\delta>0\) _such that if_ \(\|\hat{x}(\theta)-h(\theta)\|<\delta\) _and_ \(\hat{x}(\theta)\) _is an_ \(\varepsilon/2\)_-infeasible solution, then_ \(h(\theta)\) _is an_ \(\varepsilon\)_-infeasible solution._
2. \(\forall\varepsilon>0,\exists\delta>0\) _such that if_ \(\|\hat{x}(\theta)-h(\theta)\|<\delta\) _and_ \(\hat{x}(\theta)\) _is an_ \(\varepsilon/2\)_-suboptimal solution, then_ \(h(\theta)\) _is an_ \(\varepsilon\)_-suboptimal solution._
Proof.: (a) Since \(\hat{x}(\theta)\) is an \(\varepsilon/2\)-infeasible solution, \(\hat{x}(\theta)\in\mathcal{B}_{\varepsilon/2}(C(\theta))\). Choose \(\delta=\varepsilon/2\). Then, since \(\|\hat{x}(\theta)-h(\theta)\|<\delta\),
\[h(\theta)\in\mathcal{B}_{\varepsilon/2}(\hat{x}(\theta))\subseteq\mathcal{B} _{\varepsilon}(C(\theta)).\]
Thus, \(h(\theta)\) is an \(\varepsilon\)-infeasible solution.
(b) Since \(\hat{x}(\theta)\) is an \(\varepsilon/2\)-suboptimal solution, \(|f(\hat{x}(\theta),\theta)-f(x^{*}(\theta),\theta)|<\varepsilon/2\). Also, since \(f\) is continuous, there exists \(\delta^{\prime}>0\) such that,
\[\|\hat{x}(\theta)-h(\theta)\|<\delta^{\prime}\Longrightarrow|f(\hat{x}(\theta ),\theta)-f(h(\theta),\theta)|<\varepsilon/2\]
Choose \(\delta=\delta^{\prime}\). Then,
\[|f(x^{*}(\theta),\theta)-f(h(\theta),\theta)|\]
\[=|f(x^{*}(\theta),\theta)-f(\hat{x}(\theta),\theta)+f(\hat{x}(\theta),\theta) -f(h(\theta),\theta)|\]
\[\leq|f(x^{*}(\theta),\theta)-f(\hat{x}(\theta),\theta)|+|f(\hat{x}(\theta), \theta)-f(h(\theta),\theta)|\]
\[<\varepsilon/2+\varepsilon/2=\varepsilon\]
Thus, \(h(\theta)\) is an \(\varepsilon\)-suboptimal solution.
Theorem 4.1 says that if the NN is sufficiently close to our target function \(\hat{x}(\theta)\), the NN also gives an \(\varepsilon\)-infeasible and \(\varepsilon\)-suboptimal solution. Thus, for a set of \(m\) training examples \(T=\{(\theta_{i},x^{*}(\theta_{i}))\}_{i=1}^{m}\) and a ReLU NN \(h\), we suppose that the weights are learned by minimizing a loss function \(\ell\), which is defined as
\[\ell(h(\theta),x^{*}(\theta))=\|h(\theta)-x^{*}(\theta)\|^{2}\]
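A minimal PyTorch sketch of this setup, assuming training tensors `thetas` of shape `(m, dim_theta)` and `xs` of shape `(m, dim_x)`; the width, depth, optimizer, and epoch count are illustrative choices, and the norm bounds \(B_{l}\) from above are not enforced explicitly here.

```python
import torch
from torch import nn

class ReLUNet(nn.Module):
    """Depth-L feed-forward ReLU estimator h(theta) of the policy x_hat."""
    def __init__(self, dim_theta, dim_x, width=64, depth=3):
        super().__init__()
        layers, d = [], dim_theta
        for _ in range(depth - 1):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, dim_x))
        self.net = nn.Sequential(*layers)

    def forward(self, theta):
        return self.net(theta)

def train(model, thetas, xs, epochs=200, lr=1e-3):
    """Minimize the squared loss l(h(theta), x*(theta)) over the samples
    (MSELoss averages over all entries, which rescales but does not change
    the minimizer)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(thetas), xs)
        loss.backward()
        opt.step()
    return model
```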
### Generalization error.
For a hypothesis class \(\mathcal{H}\), the GE is defined as
\[\mathcal{G}(\mathcal{H})=\mathbb{E}_{T}\sup_{h\in\mathcal{H}}(L_{D}(h)-L_{T}(h)).\]
where \(L_{D}(h)=\mathbb{E}_{\theta\sim D}\ell(h(\theta),x^{*}(\theta))\) represents the expected loss when evaluating estimator \(h\) with respect to the underlying parameter distribution \(D\), and \(L_{T}(h)=\frac{1}{m}\sum_{i=1}^{m}\ell(h(\theta_{i}),x^{*}(\theta_{i}))\) denotes the average empirical loss computed over the training set \(T\), consisting of \(m\) data samples. The GE is a global characteristic of a class of estimators that evaluates its suitability for a given learning problem. A large generalization error indicates that the class contains estimators whose performance on the true loss \(L_{D}\) deviates significantly, on average, from their performance on the empirical loss \(L_{T}\). Given the loss function \(\ell\), Shultzman et al. (2023) proved the following theorem, which bounds the generalization error of the class of ReLU networks \(\mathcal{H}^{L}\).
**Theorem 4.2** (Bound for GE, restated from Shultzman et al. (2023)).: _Consider the class of feed-forward networks of depth \(L\) with ReLU activations \(\mathcal{H}^{L}=\{h^{L}(\{W^{l}\}_{l=1}^{L}):\|W^{l}\|_{\infty}\leq B_{l},l\in[1,L],\|W^{1}\|\leq B_{1}\}\) and \(m\) training samples. For the loss function \(\ell(h(\theta),x^{*}(\theta))=\|h(\theta)-x^{*}(\theta)\|^{2}\), the generalization error satisfies_
\[\mathcal{G}(\mathcal{H}^{L})\leq 2\frac{\prod_{l=0}^{L}B_{l}}{\sqrt{m}}.\]
Thus, for \(m\) training samples, it can be seen that the GE of \(\mathcal{H}^{L}\) is bounded as \(O(\frac{1}{\sqrt{m}})\) in learning our piecewise linear policy approximation.
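For instance, with the illustrative values \(L=3\), \(B_{0}=B_{1}=B_{2}=B_{3}=2\) and \(m=10{,}000\) training samples, the bound evaluates to

\[\mathcal{G}(\mathcal{H}^{3})\leq 2\cdot\frac{2^{4}}{\sqrt{10{,}000}}=0.32,\]

and halving this bound requires quadrupling the number of training samples.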
### Approximation error.
The AE represents the lowest possible error that an estimator can achieve within a given hypothesis class. It quantifies the amount of error incurred due to the limitation of selecting from a specific class. Unlike generalization error, the approximation error is independent of the size of the sample. For the hypothesis class \(\mathcal{H}\), the AE is defined as
\[\mathcal{A}(\mathcal{H})=\min_{h\in\mathcal{H}}L_{D}(h).\]
Note that our target function \(\hat{x}(\theta)\) is a continuous piecewise linear function. It has been shown that any continuous piecewise linear function can be represented exactly by a deep ReLU network (Wang and Sun (2005); Arora et al. (2016)), which means that the AE of NNs with ReLU activations for our piecewise linear policy approximation is zero. In other words, if \(h^{*}=\arg\min_{h\in\mathcal{H}^{L}}L_{D}(h)\), then \(h^{*}(\theta)\) is an \(\varepsilon\)-infeasible and \(\varepsilon\)-suboptimal solution.
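As a concrete check (a worked example, not part of the cited results), the one-dimensional policy constructed after Theorem 3.2 is representable exactly by two ReLU units: writing \(u=(\theta-\theta_{l})/(\theta_{l+1}-\theta_{l})\),

\[g(\theta)=ReLU(u)-ReLU(u-1),\qquad\hat{x}(\theta)=\big(g(\theta),\,1-g(\theta)\big),\]

which reproduces all three cases of \(\hat{x}(\theta)\): \(g=0\) for \(\theta\leq\theta_{l}\), \(g=u\) on \((\theta_{l},\theta_{l+1})\), and \(g=1\) for \(\theta\geq\theta_{l+1}\).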
## 5 Improving Feasibility.
In this section, we address the issue that an NN may produce infeasible solutions for the original problem. This occurs because an NN cannot discern the underlying structure of an optimization problem solely from the provided training data. The issue is critical because NNs are increasingly deployed in high-assurance systems such as autonomous vehicles (Kahn et al. (2017)), aircraft collision avoidance (Julian and Kochenderfer (2017)) and high-frequency trading (Arevalo et al. (2016)).
Therefore, we propose a strategy to improve feasibility and discuss training with suboptimal solutions. In general, it is not easy to obtain an exact optimal solution: most optimization algorithms stop once a prescribed tolerance is reached. We demonstrate that it is possible to preserve \(\varepsilon\)-_infeasibility_ even when employing suboptimal solutions for approximating parametric optimization problems, but the optimality of the approximation is inevitably reduced by the suboptimality of those solutions.
### Infeasibility and Suboptimality.
Representative approaches to improve feasibility include constraining the structure of the NN, such as analyzing the output range of each layer (Dutta et al. (2018)), or deriving a feasible solution from the NN predictions, such as a greedy selection algorithm (Dai et al. (2017)). In our case, we already know that an \(\varepsilon\)-_infeasible_ solution can be obtained for suitable optimization problems. Thus, if training solutions that are strictly feasible with a margin larger than \(\varepsilon\) are used, feasible solutions to the original problems can be obtained. Note that improving feasibility incurs suboptimality. Therefore, before demonstrating this strategy, we discuss the feasibility and optimality of our piecewise linear policy approximation with suboptimal solutions. Suboptimal solution sets can be seen as a generalization of optimal solution sets. If a suboptimal policy correspondence is also compact-valued and upper hemicontinuous, arguments similar to those of Theorem 3.2 can be applied. Thus, we first establish this fact, starting with the following proposition.
**Proposition 5.1**.: _Define \(C^{\gamma}(\theta)\) as \(C^{\gamma}(\theta)=\{x\in C(\theta):|f(x,\theta)-f^{*}(\theta)|\leq\gamma\}\). Then, \(C^{\gamma}(\theta)\) is a compact-valued, upper hemicontinuous correspondence._
Proof.: We first prove Lemma 5.1
**Lemma 5.1**.: _If \(C_{1},C_{2}\!:\!\Theta\rightrightarrows\!X\) are correspondences, \(C_{1}\) is upper hemicontinuous and compact valued, and \(C_{2}\) is closed, then \(C_{1}\cap C_{2}\!:\!\Theta\rightrightarrows\!X\) defined by \((C_{1}\cap C_{2})(\theta)=C_{1}(\theta)\cap C_{2}(\theta)\) is upper hemicontinuous._
Proof of Lemma 5.1.: See Papageorgiou (1997)
To see that \(C^{\gamma}(\theta)\) is compact-valued, note that \(f_{\theta}:C(\theta)\rightarrow\mathbb{R}\) is continuous since \(f(x,\theta)\) is continuous. Since \(C^{\gamma}(\theta)\) is a closed subset of the compact set \(C(\theta)\), \(C^{\gamma}(\theta)\) is also compact. Finally, since \(C(\theta)\) is a compact-valued continuous correspondence, it is upper hemicontinuous and compact-valued. Thus, by Lemma 5.1, \(C^{\gamma}(\theta)=C(\theta)\cap C^{\gamma}(\theta)\) is upper hemicontinuous.
With Proposition 5.1, the following theorem can be proved similarly to Theorem 3.2; it states that our piecewise linear policy approximation becomes an \(\varepsilon\)-infeasible and \((\varepsilon+\gamma)\)-suboptimal solution under the same conditions as in Theorem 3.2.
**Theorem 5.1**.: _Suppose that \(f\), \(C\) satisfy all conditions for Theorem 2.2. Define a training data set \(T=\{(\theta_{i},x^{\gamma}(\theta_{i}))|\theta_{i}\in\Theta,i=1,\cdots,m\}\) where \(x^{\gamma}(\theta_{i})\) is an arbitrarily chosen point from \(C^{\gamma}(\theta_{i})\). For \(\theta\in Conv(\{\theta_{1},\theta_{2},\ldots,\theta_{m}\})\), define \(d(S_{j_{\theta}})=\max\{\|p-q\|:p,q\in Conv(S_{j_{\theta}})\}\) where \(S_{j_{\theta}}\) is an element of the finite collection \(\mathcal{S}\) in Theorem 3.1. Then, the function \(\hat{x}(\theta)=\lambda_{1}^{j_{\theta}}x^{\gamma}(\theta_{1}^{j_{\theta}})+\ldots+\lambda_{k+1}^{j_{\theta}}x^{\gamma}(\theta_{k+1}^{j_{\theta}})\) satisfies the following._
1. _If_ \(Conv(C^{\gamma}(\theta))\subseteq C(\theta)\)_, then for given_ \(\varepsilon>0\)_, there exists_ \(\delta>0\) _s.t. if_ \(d(S_{j_{\theta}})<\delta\)_,_ \(\hat{x}(\theta)\) _is an_ \(\varepsilon\)_-infeasible solution._
2. _If_ \(|f(x,\theta)-f^{*}(\theta)|\leq\gamma,\forall x\in Conv(C^{\gamma}(\theta))\)_, then for given_ \(\varepsilon>0\)_, there exists_ \(\delta>0\) _s.t. if_ \(d(S_{j_{\theta}})<\delta\)_,_ \(\hat{x}(\theta)\) _is an_ \((\varepsilon+\gamma)\)_-suboptimal solution._
Proof.: (a) Since \(C^{\gamma}\) is upper hemicontinuous, similar to statement for (a) in Theorem 3.2, we get
\[\lambda_{1}^{j\theta}x^{\gamma}(\theta_{1}^{j\theta})+\ldots+\lambda_{k+1}^{j \theta}x^{\gamma}(\theta_{k+1}^{j\theta})\in\mathcal{B}_{\varepsilon}(Conv(C^ {\gamma}(\theta))).\]
With the assumption, \(\hat{x}(\theta)\in\mathcal{B}_{\varepsilon}(Conv(C^{\gamma}(\theta)))\subseteq \mathcal{B}_{\varepsilon}(C(\theta))\).
(b) Assume \(|f(x,\theta)-f^{*}(\theta)|\leq\gamma,\forall x\in Conv(C^{\gamma}(\theta))\) and let \(\varepsilon>0\). Since \(C^{\gamma}\) is upper hemicontinuous and compact valued, similar to statement for (b) in Theorem 3.2, there exists \(\delta^{\prime}>0\) such that,
\[\inf_{x^{\gamma}\in Conv(C^{\gamma}(\theta))}\|x^{\prime}-x^{\gamma}\|<\delta^{\prime}\Longrightarrow|f(x^{\prime},\theta)-f(x_{min}^{\gamma},\theta)|<\varepsilon,\forall x^{\prime}\in\mathcal{B}_{\varepsilon}(C(\theta)).\]
where \(x_{min}^{\gamma}=\arg\min_{x^{\gamma}\in Conv(C^{\gamma}(\theta))}\|x^{\prime} -x^{\gamma}\|\). Since \(x_{min}^{\gamma}\in Conv(C^{\gamma}(\theta))\), we have
\[|f(x^{\prime},\theta)-f^{*}(\theta)|\leq|f(x^{\prime},\theta)-f(x_{min}^{ \gamma},\theta)|+|f(x_{min}^{\gamma},\theta)-f^{*}(\theta)|<\varepsilon+\gamma.\]
Then, similar to (b) of Theorem 3.2, we have
\[\hat{x}(\theta)=\lambda_{1}^{j\theta}x^{\gamma}(\theta_{1}^{j\theta})+\ldots+ \lambda_{k+1}^{j\theta}x^{\gamma}(\theta_{k+1}^{j\theta})\in\mathcal{B}_{ \delta^{\prime}}(Conv(C^{\gamma}(\theta))).\]
Also, note that condition in (b) implies \(Conv(C^{\gamma}(\theta))\subseteq C^{\gamma}(\theta)\subseteq C(\theta)\) which indicates \(\hat{x}(\theta)\in\mathcal{B}_{\varepsilon}(C(\theta))\). Accordingly, \(|f(\hat{x}(\theta),\theta)-f^{*}(\theta)|<\varepsilon+\gamma\).
Now we demonstrate the proposed strategy with linear programming (LP) and quadratic programming (QP). Note that, since the optimal solution set of every convex optimization problem is convex, convex optimization satisfies the condition for part (a) of Theorem 3.2; thus, our strategy for improving feasibility can be applied. The formulations of the LP and QP are as follows; they are modified versions of the standard formulations with a slightly perturbed right-hand side. For each problem, all parameters except \(\theta\) and \(t\) were randomly generated and fixed.
For LP,
\[\min_{x}\quad c^{T}x\quad\text{subject to}\quad Ax\leq\theta-t\mathbf{1},\quad c,x,\theta\in\mathbb{R}^{n},\quad A\in\mathbb{R}^{n\times n} \tag{2}\]
For QP,
\[\min_{x}\quad(1/2)x^{T}Px+q^{T}x\quad\text{subject to}\quad Gx\leq\theta-t\mathbf{1},\quad q,x,\theta\in\mathbb{R}^{n},\quad P,G\in\mathbb{R}^{n\times n} \tag{3}\]
where \(\mathbf{1}=\begin{bmatrix}1\\ 1\\ \vdots\\ 1\end{bmatrix}\)
Here, \(t\) serves to obtain a solution strictly inside the feasible region of the original problem. Problems (2) and (3) are each solved a total of 10,000 times while slightly increasing the value of \(t\) at each iteration; the 10,000 samples of \(\theta\) are generated from a standard normal distribution for each problem. For each \(t\), we trained an NN on the pairs \(\{\theta,x(\theta)\}\) and calculated the ratio of NN approximations that are feasible for the problem with \(t=0\). As shown in Figure 2, the ratio converges to 100% in every problem, which indicates that our strategy ensures feasibility.
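A minimal sketch of the data-generation step of this experiment for the LP case, assuming the CVXPY library; the instance size, margin, and feasibility tolerance are illustrative, and a randomly drawn \((c,A)\) may need to be redrawn if the resulting LP is unbounded.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n = 10
c = rng.standard_normal(n)
A = rng.standard_normal((n, n))

def solve_lp(theta, t):
    """Problem (2): min c^T x  s.t.  A x <= theta - t * 1."""
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(c @ x), [A @ x <= theta - t]).solve()
    return x.value  # None if the instance is unbounded/infeasible

def feasible_ratio(preds, thetas, tol=1e-6):
    """Fraction of predictions feasible for the original (t = 0) problem."""
    return np.mean([np.all(A @ x <= th + tol) for x, th in zip(preds, thetas)])

# Labels from the tightened problem (t > 0) lie strictly inside the t = 0
# feasible region, so small NN prediction errors no longer cause
# infeasibility with respect to the original constraints.
t = 0.1
thetas = rng.standard_normal((1000, n))
labels = [solve_lp(th, t) for th in thetas]
```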
## 6 Conclusion.
In this paper, we build theoretical foundations for universally approximating the direct mapping from input parameters to optimal solutions through NNs, and we derive conditions under which the Universal Approximation Theorem applies to parametric optimization problems by explicitly constructing a piecewise linear policy approximation. More specifically, we cast the optimal solution of a parametric optimization problem as a single-valued continuous piecewise linear approximation, analyze it in terms of feasibility and optimality, and show that NNs with ReLU activations are valid approximators in terms of generalization and approximation error. Moreover, we propose a strategy to improve feasibility and discuss training with suboptimal data. Findings from this study can directly benefit solving parametric optimization problems in real-time control systems or high-assurance systems. In future research, we plan to extend our theory to more general parametric optimization problems, such as integer programming, and to study further approaches for addressing infeasibility of approximated solutions. |
2310.12169 | Enhanced Graph Neural Networks with Ego-Centric Spectral Subgraph
Embeddings Augmentation | Graph Neural Networks (GNNs) have shown remarkable merit in performing
various learning-based tasks in complex networks. The superior performance of
GNNs often correlates with the availability and quality of node-level features
in the input networks. However, for many network applications, such node-level
information may be missing or unreliable, thereby limiting the applicability
and efficacy of GNNs. To address this limitation, we present a novel approach
denoted as Ego-centric Spectral subGraph Embedding Augmentation (ESGEA), which
aims to enhance and design node features, particularly in scenarios where
information is lacking. Our method leverages the topological structure of the
local subgraph to create topology-aware node features. The subgraph features
are generated using an efficient spectral graph embedding technique, and they
serve as node features that capture the local topological organization of the
network. The explicit node features, if present, are then enhanced with the
subgraph embeddings in order to improve the overall performance. ESGEA is
compatible with any GNN-based architecture and is effective even in the absence
of node features. We evaluate the proposed method in a social network graph
classification task where node attributes are unavailable, as well as in a node
classification task where node features are corrupted or even absent. The
evaluation results on seven datasets and eight baseline models indicate up to a
10% improvement in AUC and a 7% improvement in accuracy for graph and node
classification tasks, respectively. | Anwar Said, Mudassir Shabbir, Tyler Derr, Waseem Abbas, Xenofon Koutsoukos | 2023-10-10T14:57:29Z | http://arxiv.org/abs/2310.12169v1 | # Enhanced Graph Neural Networks with Ego-Centric Spectral Subgraph Embeddings Augmentation
###### Abstract
Graph Neural Networks (GNNs) have shown remarkable merit in performing various learning-based tasks in complex networks. The superior performance of GNNs often correlates with the availability and quality of node-level features in the input networks. However, for many network applications, such node-level information may be missing or unreliable, thereby limiting the applicability and efficacy of GNNs. To address this limitation, we present a novel approach denoted as Ego-centric Spectral subGraph Embedding Augmentation (ESGEA), which aims to enhance and design node features, particularly in scenarios where information is lacking. Our method leverages the topological structure of the local subgraph to create topology-aware node features. The subgraph features are generated using an efficient spectral graph embedding technique, and they serve as node features that capture the local topological organization of the network. The explicit node features, if present, are then enhanced with the subgraph embeddings in order to improve the overall performance. ESGEA is compatible with any GNN-based architecture and is effective even in the absence of node features. We evaluate the proposed method in a social network graph classification task where node attributes are unavailable, as well as in a node classification task where node features are corrupted or even absent. The evaluation results on seven datasets and eight baseline models indicate up to a 10% improvement in AUC and a 7% improvement in accuracy for graph and node classification tasks, respectively.
Graph Neural Networks, Subgraph Spectral Embeddings, Graph Descriptors, Abnormal Features
## I Introduction
Graph representation learning has proved crucial to several real-world applications, including drug discovery & development [37], weather and traffic forecasting [11], recommendation in e-commerce [42], combinatorial optimization [4], etc. In the past few years, there has been a surge of interest in designing graph neural networks (GNNs), which are powerful tools for learning from graph-structured data [6, 15, 20]. GNNs have attained state-of-the-art performance on a variety of downstream Machine Learning (ML) tasks, such as node classification, graph classification, graph regression, and link prediction [31, 45]. For example, predicting the toxicity or property of molecules, item recommendations in an e-commerce website, and identifying users' communities in social networks [9, 43].
One key advantage of GNNs over manually engineered embeddings is the ability to learn a correlation between information specific to a node and its _global_ position in the network. Explicit node features often play an integral role in the performance of GNNs since they encode valuable distinguishing criteria about the entities. For instance, in molecular network data, node features provide crucial information about the chemical nature of the element. Similarly, in citation networks, node features provide textual information on represented publications and contribute significantly to the model's overall performance. When this node-level information is unavailable, current methods use ad hoc techniques such as random vectors or vectors of ones. Numerous recent approaches also use one-hot degree encoding [44]. However, this is rarely a suitable substitute for explicit node features. In Figure 1 (a), we illustrate this by comparing the performance of explicit node features and one-hot-degree encoding on Graph Convolutional Networks (GCN) [20] with varying numbers of layers. We observe up to 25% improvement in the results for the explicit features compared to the degree encoding. These results evince that one-hot degree encoding significantly degrades the model's performance, necessitating the use of robust techniques to generate expressive node embeddings in graphs lacking node features. Specifically, the design of a framework that produces topology-aware node features in situations where node features are unavailable is one goal of this study.
The node features could be missing or abnormal for a number of reasons, such as privacy concerns, human or machine error, adversarial error, and incomplete data entry [26]. For instance, in a social network, users may not have completed their profile information, resulting in missing user features. Similarly, not all items in a co-purchase network may have a complete description associated with them. In a transportation network, traffic information coming from sensors may be noisy due to complex dynamics, leading to abnormal node features. We illustrate this effect in Figure 1 (b) by running the Graph Attention Network (GAT) [40] and GCN on varying ratios of random Gaussian noise in the node features on the Facebook dataset [27]. We observe a significant decline in performance when noise is injected. Thus, the second objective of this study is to design a mechanism resilient to abnormal node features.

Figure 1: (a) GCN’s performance on the Facebook dataset [27] using original features and one-hot-degree encoding. (b) performance with varying abnormality/noise ratios in the node features.
We observe that node features in many social networks originate from the local subgraph topology. For example, using the number of followers as a node feature in a Facebook-page network reflects the in-degree of the node. At the same time, we can gather sufficient information about the graph's structure from its subgraphs. The reconstructibility conjecture, which holds true for many graph families, asserts that a graph can be exactly reconstructed from the collection of its (single) vertex-deleted subgraphs [1, 2]. A vertex-deleted subgraph for a vertex \(v\) is obtained by deleting \(v\) from the graph. Even more, if \(B_{k}(v)\) is the set of vertices at most \(k\) hops from \(v\), and \(G_{k}(v)\) is the subgraph of \(G\) obtained by deleting the vertices in \(B_{k}(v)\), then many graph properties can be inferred, or the graph can be reconstructed exactly, from a collection of such \(G_{k}(v)\)[21, 23].
Therefore, we propose to use topology-aware subgraph embeddings, based on the local topology of the network, as node features. The proposed framework, denoted as Ego-centric Spectral subGraph Embeddings Augmentation (ESGEA), allows designing topology-aware node features using expressive graph embedding methods to enhance the existing node features. In applications where node features are unavailable, ESGEA provides a flexible way to produce node features that are expressive enough to obtain quality results. ESGEA consists of four modules: (a) ego-centric subgraph extraction, where \(k\)-hop local subgraphs are extracted for each node; (b) a spectral graph-embedding method, deployed to extract expressive graph representations from each subgraph; (c) a feature augmentation module to combine features; and (d) a GNN learning module to learn node/graph representations. The proposed approach is flexible enough to use any off-the-shelf graph embedding with any GNN-based architecture to design models for different applications. Unlike existing subgraph methods that learn node representations with nested-subgraph message passing, the proposed approach is novel in bridging the gap between topology-aware graph descriptors and GNNs. Moreover, ESGEA provides a flexible learning framework that can be customized to meet the desired goal. We offer the following contributions:
* We introduce a topological feature augmentation method, Ego-centric Spectral subGraph Embedding Augmentation (ESGEA), that designs new or enhances corrupted/missing node features.
* We introduce a novel framework that offers flexibility for graph representation pipelines by combining spectral graph-based embeddings with GNNs based on ESGEA.
* We evaluate the proposed framework in graph and node classification settings where node features are unavailable or corrupted (e.g., in the presence of varying amounts of noise) and show its effectiveness through extensive experiments.
The structure of this paper is as follows. Section 2 presents an overview of the related work. Section 3 provides the preliminaries and an introduction to a few definitions, while Section 4 details the methodology of the proposed approach. In Section 5, an evaluation of the proposed approach in both graph and node classification setting is provided. Finally, Section 6 concludes the paper and outlines potential future directions.
## II Related Work
Graph Neural Networks (GNNs) have made substantial advancements in learning representations of graph-structured data in recent years [3]. GNNs essentially generalize end-to-end learning from regular grid data such as image, video, and text, to graph-structured data [43]. Unlike deep neural networks, the key idea behind such generalization is the message passing framework that smooths the message with respect to the local neighborhood [14]. The design of message passing are majorly motivated in spatial domain [10, 34] and spectral domain [8, 20]. GNNs literature broadly includes convolutional layers [20, 25], aggregation operators [15], pooling methods [46], and feature augmentation [5, 29, 30].
There is a growing interest in minimizing the vulnerability of GNNs to node feature and graph structure noise in recent years. A few notable works in this direction include [7, 25, 48]. Similarly, numerous approaches for alleviating structural noise in GNN settings have been proposed, including [12, 47]. For further reading, please refer to the comprehensive survey of adversarial attacks on graphs [17, 18].
Unlike the existing techniques, we pursue a novel approach to advance graph learning in social networks through the incorporation of subgraph embeddings, with a primary focus on applications where crucial node features are absent. To this end, we have introduced a learning framework that integrates GNNs with graph descriptors, thereby presenting a comprehensive methodology that is adaptable to all types of graph embeddings and GNN architectures. Our evaluations in both node and graph classification settings show encouraging results.
## III Preliminaries
Let \(G=(V,E,X)\) denote a graph with a set of nodes \(V\), edges \(E\) and a node feature matrix \(X\in\mathbb{R}^{n\times d}\), where \(n\) is the total number of nodes and \(d\) is the dimension of the feature vector associated with each node. We represent the feature vector associated with the node \(v\) by \(x_{v}\). For some positive integer \(k\), let \(N_{v}(k)\) be the set of nodes in the \(k\)-neighborhood of node \(v\), i.e., nodes that are at most distance \(k\) from \(v\); \(N_{v}(k)\) also includes \(v\). For a fixed \(k\), let \(s_{v}^{k}\), or \(s_{v}\) when \(k\) is clear from context, be the induced subgraph of \(G\) on \(N_{v}(k)\). Let \(\mathcal{S}=\{s_{1},s_{2},\cdots,s_{n}\}\) be the family of all such subgraphs. We refer to \(s_{v}\) as a \(k\)_-order subgraph_ at node \(v\). Let \(L\) denote the Laplacian matrix of the graph and \(\Phi\) the matrix consisting of normalized and mutually orthogonal eigenvectors of \(L\). Let \(\Lambda\) denote the diagonal matrix of the eigenvalues of the Laplacian. We define a subgraph embedding as a function \(\phi\) that extracts a compact and expressive signature of a \(k\)-order subgraph, \(\phi:\mathcal{G}\rightarrow\mathbb{R}^{d^{\prime}}\), where \(\mathcal{G}\) is the family of all finite graphs and \(d^{\prime}\) is the required dimension of the latent feature space, which may differ from \(d\).
## IV Ego-centric Spectral subGraph Embedding Augmentation Framework
In this section, we outline the details of our Ego-centric Spectral subGraph Embedding Augmentation (ESGEA) framework. The proposed scheme is divided into four phases: (1) ego-centric subgraph extraction around each node, (2) subgraph embedding design \((\phi)\), (3) feature augmentation and, (4) learning module. In the forthcoming sections, we outline the details of these phases one by one.
### _Ego-Centric Subgraph Extraction_
Locally induced subgraphs capture the topological information of nearby nodes, enabling similar representations for nodes with identical subgraphs. They encode distinct attributes that are not always determined by explicit node features or properties based on the overall topology of the network. Subgraphs feature non-trivial internal structure, border connectivity, and concepts of neighborhood and position in relation to the remainder of the graph. They are, therefore, conducive to learning effective graph representations.
The importance of subgraphs can be further motivated by the famous reconstructibility conjecture, which holds true for many graph families, including but not limited to regular graphs, trees, Eulerian graphs, outer planner graphs, and graphs with at most 9 vertices [1, 2, 36]. The conjecture states the following:
**Conjecture [2, 19, 39]**_A graph with at least three vertices can be constructed uniquely (up to isomorphism) from a collection of its vertex-deleted subgraphs._
In other words, if \(s_{v}\) is a subgraph of \(G=(V,E)\) obtained by deleting vertex \(v\) and its incident edges, then \(G\) is determined uniquely (up to isomorphism) by its collection of vertex-deleted subgraphs \(\{s_{1},s_{2},\cdots,s_{n}\}\). Variants of this conjecture deal with different subgraphs of \(G\) (e.g., [22, 23]). The primary assertion here is that _the topological and structural properties of a graph can ensue from its subgraphs_.
This discussion inspires and motivates the study of subgraphs for graph learning. For our purpose, we generate a \(k\)-order subgraph for each node \(v\), which is essentially a subgraph induced on the nodes that are at most distance \(k\) from node \(v\). Subsequently, we generate an embedding for each node \(v\) based on its \(k\)-order subgraph (as discussed in the next subsection). Depending on a node's structural role or position, the corresponding \(k\)-order subgraph may have different structural features. For instance, as shown in the simplest example \(k\)=1 in Figure 2, \(1\)-order subgraphs of leaf nodes (\(B,E,F\), and \(H\)) are the same (up to isomorphism). We note that non-leaf nodes generate \(1\)-order subgraphs distinct from leaf nodes and as \(k\) increases more nodes are likely to have distinct embeddings.
This subgraph embedding approach improves the learning of GNNs, particularly when abnormal or no node features are provided. Moreover, it enables a fairly broad approach by bridging the gap between GNNs and graph descriptors [38, 41, 32].
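A minimal NetworkX sketch of this extraction step (function names are illustrative):

```python
import networkx as nx

def k_order_subgraphs(G, k=1):
    """Induced subgraph s_v on the k-hop neighborhood N_v(k) of each node v."""
    return {v: nx.ego_graph(G, v, radius=k) for v in G.nodes}

G = nx.path_graph(5)            # toy graph with two leaf nodes
S = k_order_subgraphs(G, k=1)
print(S[0].nodes, S[2].nodes)   # leaf vs. interior 1-order subgraphs differ
```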
### _Design of Subgraph Embeddings_
Designing a subgraph embedding that encodes structural information at all scales, along with succinctness and expressivity, is a challenging task. The Network Laplacian Spectral Descriptor (NetLSD) [38] was designed to satisfy most of the properties that a graph descriptor should possess: for instance, permutation invariance; the extraction of small-, medium-, and large-scale information; and time and memory efficiency. NetLSD is based on the idea of diffusion in graphs, for example, how heat diffuses across the nodes of the graph. The heat diffusion process over the graph at time \(t\) is examined through the _heat kernel_\(H_{t}\), which is the matrix exponential \(H_{t}=e^{-Lt}\), where \(L\) is the graph Laplacian and \(t\) is the time. Since \(L\) can be factorized as \(L=\Phi\Lambda\Phi^{\top}\), where \(\Phi\) is the matrix consisting of normalized and mutually orthogonal eigenvectors, and \(\Lambda\) is a diagonal matrix consisting of the eigenvalues of \(L\), we obtain the following:
\[H_{t}=e^{-Lt}=e^{\Phi(-\Lambda t)\Phi^{\top}}=\Phi e^{-\Lambda t}\Phi^{\top}. \tag{1}\]
where the \(ij^{th}\) entry of \(H_{t}\) indicates the amount of heat transferred from node \(v_{i}\) to \(v_{j}\) at time \(t\). The _wave kernel_, on the other hand, measures the propagation of mechanical waves across the graph. NetLSD is then defined by the traces of the heat or wave kernels at various time points, \(h_{t_{i}}=tr(H_{t_{i}})\), which allows extracting increasingly global information over time. In addition, these signatures are permutation- and size-invariant and scale-adaptive. In terms of time complexity, computing the full Laplacian spectrum requires \(O(n^{3})\) time and \(O(n^{2})\) memory, which hinders scalability. Thus, the authors opted for the block Krylov-Schur implementation in SLEPc [16, 33] to compute only a fixed number of extreme eigenvalues, thereby reducing the computation time and making NetLSD a fitting choice for the subgraph embedding.
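A dense-spectrum sketch of the heat-trace signature, using the identity \(tr(e^{-Lt})=\sum_{i}e^{-\lambda_{i}t}\); note that the SLEPc-based variant described above instead approximates the trace from a few extreme eigenvalues for scalability, and the time grid below is an illustrative choice.

```python
import numpy as np
import networkx as nx

def netlsd_heat(G, times=np.logspace(-2, 2, 20)):
    """Heat-trace signature h_t = tr(e^{-Lt}) computed from the spectrum."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam = np.linalg.eigvalsh(L)                   # Laplacian eigenvalues
    return np.array([np.exp(-lam * t).sum() for t in times])
```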
Figure 2: An illustration of constructing the simplest ego-centric subgraph embeddings (with \(1\)-order subgraphs). For each node (indicated in blue), an induced subgraph from its immediate \(1\)-order neighborhood is extracted and embeddings obtained. The obtained embeddings are finally associated with each node, respectively.
### _Feature Aggregation with ESGEA_
Aggregation functions play a crucial role in the representation and modeling of graph-structured data, specifically in the message passing framework, and have received significant attention in the literature [15]. The performance and representational power of the models are significantly influenced by the selection of aggregation functions. For instance, several studies show that _sum_ aggregation allows learning graph structural properties [44]. Likewise, _mean_ aggregation is often employed to capture the distribution of the elements under consideration, while _max_ aggregation is commonly utilized to identify the most representative elements [44]. Given \(x_{v}\) and \(\phi(s_{v})\), we define a general aggregation function as follows:
\[x_{v}^{\prime}=f(x_{v},\phi(s_{v}))\]
where \(x_{v}\) and \(\phi(s_{v})\) correspond to the node feature vector and subgraph embeddings, respectively. The function \(f(.)\) can be substituted with an aggregation such as _mean_, _max_, _min_ or _sum_, depending upon the choice of the method. Because simple aggregations like _mean_, _max_ and _sum_ may result in the loss of valuable information, several recent studies have suggested using multiple aggregations, feature concatenation [15], aggregation in a hierarchical fashion, as well as learnable aggregations [24]. Learnable aggregations typically entail the utilization of techniques such as multi-layered perceptrons or deep neural networks, which are capable of encoding intricate details and nuances of the input representations. Such methods have demonstrated significant potential in achieving superior performance in a wide range of applications, and are thus the subject of extensive research and investigation in the field.
In our proposed framework, we offer an adaptable aggregation module that can integrate any of the current aggregation operators to amalgamate the embeddings obtained from the subgraph with the primary node embeddings. However, it is crucial to take into account the limitations of these operators before deciding on the most suitable aggregation approach. While the use of learnable aggregations has its advantages, these methods can be computationally complex, posing challenges in terms of scalability and efficiency. Additionally, the traditional _mean_, _max_, and _sum_ aggregation operators may not always be appropriate, especially when the dimensions of the embeddings (\(d^{\prime}\) and \(d\)) are unequal. In such cases, a simple combine function such as feature concatenation is a viable alternative, which works effectively in all scenarios and can potentially yield improved results. Figure 3 illustrates our methodology for extracting subgraphs, computing spectral embeddings, and augmenting features. The illustration shows a graph with three corrupted (in gray) and four complete node features (in green) and highlights the overall subgraph feature extraction and augmentation approach.
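A small sketch of such an aggregation module; the mode names mirror the operators above, with concatenation as the fallback when \(d\neq d^{\prime}\):

```python
import torch

def augment(x, s_emb, mode="concat"):
    """Combine node features x (n, d) with subgraph embeddings s_emb (n, d')."""
    if mode == "concat":            # always applicable, even when d != d'
        return torch.cat([x, s_emb], dim=-1)
    if mode == "sum":               # requires d == d'
        return x + s_emb
    if mode == "mean":              # requires d == d'
        return (x + s_emb) / 2.0
    raise ValueError(f"unknown mode: {mode}")
```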
### _Graph Learning Module for ESGEA_
This section focuses on _Message Passing Graph Neural Networks (MPNNs)_, which use an iterative learning approach to acquire graph representations. MPNNs retain a representation vector \(h_{v}^{l}\in\mathbb{R}^{\hat{d}}\) for each node \(v\in V\) in a given graph \(G=(V,E,X)\). Note that \(\hat{d}\) is the size of the embedding \(h\), which may differ from \(d\) and \(d^{\prime}\), the sizes of the input features and subgraph embeddings, respectively. MPNNs are defined as follows:
\[h_{v}^{(0)}=x_{v}\,\forall v\in V \tag{2}\]
\[a_{v}^{(l)}=f_{\textit{AGG}}^{(l)}\left(h_{u}^{(l-1)}|u\in\mathcal{N}(v)\right) \tag{3}\]
\[h_{v}^{(l)}=f_{\textit{UPDATE}}^{(l)}\left(h_{v}^{(l-1)},a_{v}^{(l)}\right) \tag{4}\]
Node features \(h_{v}^{(0)}\) are initialized with the original node features \(x_{v}\) and then the aggregate and update functions are used to update the node features based on its neighbors, and the prior state at every iteration.
In Algorithm 1, we provide a step by step procedure of the proposed framework in a graph classification setting. The algorithm first extracts subgraph embeddings for each node using an input parameter \(k\), which is the depth of the subgraphs. These subgraphs are then supplied to the embedding function to produce graph embeddings. We would like to note that
Figure 3: Illustration of the overall ego-centric subgraph extraction, embeddings and aggregation: (a) is the input graph with normal (green) and abnormal (gray) node features; (b) demonstrates step 1, i.e., the generation of \(k-\)order subgraphs for each node; (c) shows the use a set of graph descriptors for extracting graph embeddings; (d) indicates the corresponding embeddings extracted for each subgraph; and (e) illustrates feature aggregation (concatenation in this case) of subgraphs feature vectors with their original features.
we propose spectral graph embeddings, e.g., NetLSD [38] for generating subgraph embeddings which distinguishes our work from the existing nested subgraphs GNN methods. As described in section IV-B, spectral descriptors are powerful methods for extracting expressive graph embeddings. Nonetheless, our proposed framework is general, thus any embedding method may be used in this step. As we do not consider node features in social networks, we assign subgraph embeddings \(h^{\prime}_{v}\) as node features in the graph classification setting (step 5 and 8). In addition, we provide a generalized MPNNs strategy for learning representations for each graph that may be used for the subsequent ML task. \(f_{AGG}(.)\) and \(f_{UPDATE}(.)\) are generic functions that may be fine-tuned based on the learning architecture of choice.
Similar to the graph classification, node classification is a well-studied problem, particularly in semi-supervised learning. It has numerous applications in several fields, such as online social networks, biological networks, and ecommerce networks. Node classification involves training a model in a supervised or semi-supervised setting that can predict a label for a new unseen node. In a GNN setting, we obtain node representations from the last layer followed by a linear layer to obtain the class label. A loss function is then applied to train the model accordingly. We define the cross entropy loss function as follows that we use for binary classification.
\[\mathcal{L}=-\frac{1}{N}\sum_{j=1}^{N}y_{j}\,log(p_{j})+(1-y_{j})\,log(1-p_{j}) \tag{5}\]
where \(p_{j}\) is the Softmax probability obtained for the data point \(j\), and \(y_{j}\) is the corresponding ground truth value.
To adapt Algorithm 1 to the node classification task, a few modifications can be made. Specifically, the subgraph embeddings generated by the embedding function (step 4) are integrated with the node features using an aggregation function, which is thoroughly explained in Section IV-C. The resulting aggregated node features are then fed into the learning framework to obtain node representations. In contrast to the graph classification setting, where we process a collection of graphs (steps 1 and 7), in this case we iterate solely over the nodes of a single graph and learn node embeddings. Thus, the proposed learning framework provides a flexible MPNN-based solution for node classification as well.
Algorithm 1 introduces a novel framework that primarily involves two crucial input parameters impacting the model's performance: (a) the depth of the subgraphs \(k\) and (b) the number of GNN layers \(L\). The combination of these parameters plays a crucial role in feature aggregation and in the receptive fields of the models. Figure 4 demonstrates the comparative merits of different combinations of these parameters. Larger values of \(k\) and \(L\) for a given training node increase the overlap of information obtained from the node's neighborhood. In addition, they increase the model's receptive field and may result in oversmoothing that hinders the model's performance. Similarly, combining one large and one small value widens the gap between the embeddings and the message passing, which may result in a reduction in performance. Conversely, balancing both parameters can result in superior performance.
## V Experimental Evaluation
To assess the performance of the proposed framework, here we define the following two research questions.
* **RQ-1** Can the proposed ESGEA framework improve performance in the absence of explicit node features?
* **RQ-2** Does the effectiveness of the proposed ESGEA framework remain intact to improve performance in applications where node features are unreliable?
To answer the first question, we consider a graph classification setting in which node features are unavailable. For the second research question, we consider node classification setting with abnormal node features. The forthcoming sections detail the experimental design and results.
**General Setup:** We ran all the experiments on a \(112\)-core Intel Xeon CPU 2.20 GHz machine with 512 GB of RAM and an Nvidia GPU with 48 GB of memory. Each method is trained on \(80\%\) and tested on \(20\%\) of the dataset. We run all methods with 10 different seeds, and average accuracy and Area Under the Curve (AUC) are reported for node classification and graph classification tasks, respectively. Throughout the experiments, we consider the depth of the subgraphs to be \(3\) and use NetLSD descriptor to construct subgraph embeddings. The source code is made publicly available1.
Footnote 1: ESGEA publicly available code: [https://github.com/Anwar-Said/ESGEA](https://github.com/Anwar-Said/ESGEA)
### _Graph Classification_
When node features are unavailable, current graph classification techniques typically rely on one-hot-degree encoding to provide relevant node features for GNNs in order to improve their performance. However, in the majority of cases, these features do not provide a stronger learning basis for GNNs. Here, we hypothesize that the features generated through the proposed approach improve the performance of models for datasets without node features. We test our hypothesis in the graph classification setting on five datasets where node features are unavailable.
**Datasets:** We consider _Github Stargazers_, _Reddit threads_, _Reddit Binary_, _Deezer Egos_, and _Twitch Egos_ social network datasets in our experimental setup. Due to space limitation, we refer the reader to [28] for further dataset details.
**Baselines:** We consider the following backbone GNN models: the seminal GCN, GraphSAGE, and Residual Gated GCN (RGGCN), along with the more recent advanced model UniMP and the provably more expressive \(k\)-GNN.

Figure 4: An illustration of the comparative merits of GNNs and subgraph depth with respect to their receptive fields within the graph and resultant performance reported on LastFM Asia dataset.
**Experimental setup:** Initially, we generate node features for each node from subgraph embeddings using the NetLSD descriptor. The dimension of the NetLSD descriptor is set to \(20\), and the depth of the subgraphs \(k\) is set to \(2\) (for graphs with small diameters) and \(3\). We then run each of these models with the generated NetLSD embeddings and with one-hot-degree encoding and compare the results. Each model consists of three convolution layers followed by a SortPooling layer [46], two 1D convolutions and three MLP layers. The train:test split ratio was set to \(80:20\), the batch size to \(128\), the learning rate to \(1e^{-4}\), and the number of epochs to \(100\). Following [28], we use Area Under the Curve (AUC) as the evaluation metric.
We report the performance comparison in terms of AUC against degree encoding as node features in Figure 5. The results demonstrate that ESGEA vastly outperforms one-hot-degree encoding. More specifically, Residual Gated GCN and \(k\)-GNN demonstrate an encouraging improvement of up to \(10\%\) on the Reddit Binary dataset. Similarly, each method gains up to \(4\%\) on the Github StarGazers dataset. ESGEA also consistently outperforms on the Twitch Egos and Deezer Egos datasets. These results clearly illustrate the effectiveness of the proposed approach in applications where node features are unavailable.
### _Node Classification_
In the node classification setting, we evaluate the proposed method in a setting where a certain percentage of node features is corrupted. We apply the proposed approach to enrich the corrupted node features and then trained different GNNs to evaluate the results. In the following sections, we describe the datasets, experimental setup, and results in detail.
**Datasets:** We consider two social network datasets, _Facebook Page-page_ and _LastFM Asia_, for the node classification task. We refer the reader to [27] for further details on these datasets.
**Backbone GNNs:** In our evaluations, we consider four baseline methods: a two-layered MLP, GCN [20], GAT [40], Unified Message Passing (UniMP) [35], and AirGNN [25].
**Setup:** We examine a three-layered architecture for every method, implemented in PyTorch Geometric. The number of hidden channels was set to 16, the learning rate to \(0.01\), \(k\) and \(L\) to \(3\), and the weight decay to \(1e^{-4}\). Each model was trained for \(200\) epochs with \(50\) epochs as an early-stopping criterion. Throughout the experiments, we evaluate all models in the \(0\%,10\%,20\%,50\%,80\%\), and \(90\%\) noise settings, where the number represents the percentage of nodes whose original node features were replaced with random features sampled from a Gaussian distribution, as done in [25].
**Results:** Tables I and II present the classification results
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline & \multicolumn{6}{c|}{**Corruption ratio**} \\ \cline{2-7}
**Model** & **0\%** & **10\%** & **20\%** & **50\%** & **80\%** & **90\%** \\ \hline MLP & 71.76 & 59.50 & 49.43 & 31.13 & 21.62 & 20.24 \\ ESGEA & 71.85 & 60.00 & 50.39 & 30.32 & 20.98 & 20.25 \\ \hline GCN & 86.11 & 84.28 & 82.36 & 74.94 & 67.50 & 66.03 \\ ESGEA & 86.50 & 84.55 & 86.47 & 75.43 & 68.04 & 66.89 \\ \hline GAT & 85.52 & 79.37 & 78.03 & 72.20 & 67.65 & 70.91 \\ ESGEA & 85.72 & 79.52 & 85.72 & 75.42 & 73.21 & 75.24 \\ \hline AirGNN & 86.45 & 85.76 & 85.69 & 82.31 & 79.37 & 77.17 \\ ESGEA & 86.39 & 85.70 & 85.70 & 83.35 & 78.50 & 77.36 \\ \hline UniMP & 87.12 & 85.69 & 82.98 & 77.47 & 72.07 & 71.70 \\ ESGEA & 87.28 & 84.58 & 87.20 & 80.61 & 78.03 & 78.54 \\ \hline \end{tabular}
\end{table}
Table II: Comparison of node classification accuracy on the test set of the LastFM Asia dataset across 10 seeded runs.
Figure 5: Comparison of mean AUC scores (10 runs with different random seeds) against _one-hot-degree-encoding_ (\(deg\)) as node features on five different GNN models in the graph classification setting.
Figure 6: ESGEA with UniMP performance on both the Facebook and LastFM Asia datasets. Here we vary the percentage of corruption injected into the original node features. The results are compared with the standard one-hot-degree encoding (denoted as deg).
of our evaluation on both datasets. The proposed framework achieves either comparable or superior performance in every experiment. Specifically, the proposed method with the UniMP and GAT models yields a performance gain of up to \(7\%\) on the Facebook and LastFM Asia datasets. In Figure 6, we visualize the performance of ESGEA in conjunction with UniMP on both the Facebook and LastFM Asia datasets. We observe that as the corruption increases, ESGEA is less affected than degree encoding and maintains stronger performance.
### _Parameters Sensitivity and Runtime_
Graph neural networks that employ subgraph representations are a novel and sophisticated category of expressive learning techniques designed to represent graphs as an amalgamation of subgraphs [13]. The efficacy of these methods is typically contingent upon the depth of the subgraphs as well as the number of GNN layers utilized, both of which are hyper-parameters. As the number of layers in MPNNs increases, a common phenomenon known as oversmoothing occurs, whereby the node features converge into indistinguishable vectors. This is evidenced in Figure 1, which shows the performance of GCN on the Facebook dataset. Conversely, while a deeper subgraph results in an enlarged receptive field, it also exacerbates the oversmoothing effect, as node features are smoothed too quickly. Moreover, it increases the running times of the methods severalfold. It is therefore imperative to make informed selections for these hyper-parameters in order to facilitate the training of the model. To delve deeper into the interplay of these parameters, we conducted an empirical analysis on both the node and graph classification tasks.
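The oversmoothing effect itself is easy to reproduce; the toy sketch below (our own illustration, not tied to any dataset above) applies repeated mean-neighborhood aggregation and shows the node features collapsing toward a common vector as depth grows.

```python
import numpy as np

# A is a small path-like graph's adjacency matrix with self-loops;
# P = D^{-1} A is the mean aggregator applied by one smoothing layer.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)        # row-normalized propagation
X = np.random.default_rng(0).normal(size=(4, 8))
for layer in range(1, 11):
    X = P @ X                                # one round of smoothing
    spread = np.linalg.norm(X - X.mean(axis=0), axis=1).mean()
    print(layer, round(spread, 4))           # shrinks toward 0 with depth
```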
**Graph Classification:** We use the Reddit threads dataset to analyze \(k\)-GNN with varying numbers of layers and subgraph depths, as illustrated in Figure 7 (a). Our objective was to determine how these parameters affect performance. We observe that a smaller or larger number of GNN layers reduces the model's performance, whereas a layer count of \(3\) or \(4\) yields optimal results. We also assessed the execution time of subgraph embedding and model training on the Deezer egos and Reddit threads datasets, as displayed in Figure 7 (b). These datasets have small diameters, averaging \(3.4\) and \(4.5\), respectively. As a consequence of their low diameters, the computation time for subgraph embedding is not extensive and can be completed within \(50\) seconds. Similarly, model training times fall within the range of \(200-300\) seconds. These findings demonstrate that computing embeddings using subgraphs and NetLSD descriptors scales to large graphs without sacrificing the method's efficiency.
**Node Classification:** To assess the effect of the hyper-parameters and analyze the running times, we consider both the LastFM Asia and Facebook social networks in a node classification setting. As illustrated in Figure 8 (a), our results indicate that increasing both the number of layers and the depth of the subgraphs can lead to suboptimal performance. Our analysis suggests that a favorable combination of hyper-parameters involves three GNN layers with depth \(k=3\). In Figure 8 (b), we present the running time of the NetLSD descriptor for computing subgraphs of varying depth on the LastFM Asia and Facebook datasets. We also compare the training times of GCN on both datasets. Notably, as the depth of the subgraphs increases, their size grows, which consequently extends the running time of the descriptor, as depicted in Figure 8 (b). Nonetheless, we observe that the subgraph embeddings for \(k=3\) were computed within a reasonable \(2\) and \(45\) minutes, respectively.
## VI Conclusion
In this paper, we proposed Ego-centric Spectral subGraph Embedding Augmentation (ESGEA), a novel framework for extracting node features from network topology, which is especially important in settings where networks have missing or unreliable node feature information. Preceding message passing, our framework leverages each node's neighborhood topology through spectral graph embeddings computed on extracted ego-centric \(k\)-order subgraphs, generating informative and expressive node features that are then merged into the feature matrix using a suitable aggregation approach. Notably, our framework is flexible and compatible with any GNN-based architecture, and exhibits strong performance. Our detailed evaluations on seven datasets and eight baselines, encompassing both node and graph classification settings, illustrate the efficacy and potential of the proposed approach.
Figure 8: Node classification parameter sensitivity and runtime analysis: (a) Performance comparison of the graph attention network on the LastFM Asia dataset. (b) Running times of subgraph embeddings and training times of GCN on the LastFM Asia and Facebook datasets.
Figure 7: Graph classification parameter sensitivity and runtime analysis: (a) Performance comparison of \(k\)-GNN on the Reddit threads dataset. (b) Running times of subgraph embeddings and training times of GCN on the Reddit threads and Deezer egos datasets.
## Acknowledgment
This material is based upon work supported by the National Science Foundation Grant Nos. 2239881, 2325416, and 2325417.
|
2304.03689 | EPINN-NSE: Enhanced Physics-Informed Neural Networks for Solving
Navier-Stokes Equations | Fluid mechanics is a fundamental field in engineering and science. Solving
the Navier-Stokes equation (NSE) is critical for understanding the behavior of
fluids. However, the NSE is a complex partial differential equation that is
difficult to solve, and classical numerical methods can be computationally
expensive. In this paper, we present an innovative approach for solving the NSE
using Physics Informed Neural Networks (PINN) and several novel techniques that
improve their performance. The first model is based on an assumption that
involves approximating the velocity component by employing the derivative of a
stream function. This assumption serves to simplify the system and guarantees
that the velocity adheres to the divergence-free equation. We also developed a
second more flexible model that approximates the solution without any
assumptions. The proposed models can effectively solve two-dimensional NSE.
Moreover, we successfully applied the second model to solve the
three-dimensional NSE. The results show that the models can efficiently and
accurately solve the NSE in three dimensions. These approaches offer several
advantages, including high trainability, flexibility, and efficiency. | Ayoub Farkane, Mounir Ghogho, Mustapha Oudani, Mohamed Boutayeb | 2023-04-07T15:15:51Z | http://arxiv.org/abs/2304.03689v1 | # Epinn-NSE: Enhanced Physics-Informed Neural Networks for Solving Navier-Stokes Equations1
###### Abstract
Fluid mechanics is a fundamental field in engineering and science. Solving the Navier-Stokes equation (NSE) is critical for understanding the behavior of fluids. However, the NSE is a complex partial differential equation that is difficult to solve, and classical numerical methods can be computationally expensive. In this paper, we present an innovative approach for solving the NSE using Physics Informed Neural Networks (PINN) and several novel techniques that improve their performance. The first model is based on an assumption that involves approximating the velocity component by employing the derivative of a stream function. This assumption serves to simplify the system and guarantees that the velocity adheres to the divergence-free equation. We also developed a second more flexible model that approximates the solution without any assumptions. The proposed models can effectively solve two-dimensional NSE. Moreover, we successfully applied the second model to solve the three-dimensional NSE. The results show that the models can efficiently and accurately solve the NSE in three dimensions. These approaches offer several advantages, including high trainability, flexibility, and efficiency.
MSC codes: 35Q35, 65M99, 68T05
## 1 Introduction
Fluid flow has a wide range of applications in mechanical and chemical engineering, biological systems, and astrophysics. Essentially, all moving bodies are connected to fluid flow, such as vehicles, airplanes, trucks, trains, birds, dolphins, and so on.
Navier-Stokes equations are Partial Differential Equations (PDEs) that are used to describe the flow of fluids. As with many PDEs, the NSE has no analytical solution, and even in three dimensions it remains one of the Millennium Prize problems. Classical numerical methods, such as the finite difference method and the finite element method (FEM), can produce an approximate solution for the NSE. However, deep learning techniques have recently become popular for solving PDEs. One such method is the physics informed neural network (PINN) [22, 19], which uses a deep neural network (DNN) based on optimization problems or residual loss functions to solve a PDE. Other deep learning techniques, such as the deep Galerkin method (DGM) [25], have also been proposed in the literature for solving PDEs. The DGM is particularly useful for solving high-dimensional free boundary PDEs such as the high-dimensional Hamilton-Jacobi-Bellman equation. However, DGM's numerical performance for other types of PDEs (elliptic, hyperbolic, and partial-integral differential equations, etc.) remains to be investigated. Various assumptions on the PDE operator are required to guarantee the convergence of this neural network. Another approach to approximating PDEs, using a variational form with DNNs, is the deep Ritz method (DRM) [27], which reformulates the PDE as an optimization problem with a variational formulation. The DRM is naturally adaptive, less sensitive to the problem dimensionality, and works well with stochastic gradient descent. Nonetheless, there are a variety
of drawbacks to the DRM, such as non-convexity and the issue of local minima and saddle points. Moreover, the DRM offers no consistent conclusion about the convergence rate, and imposing essential boundary conditions is more complicated than in the previous approaches. Additionally, there are some concerns with the network structure and activation function.
The work [5] solves a high-dimensional semi-linear PDE under a condition on the diffusion generator of the equation. To accomplish this, the method used the solution of a backward stochastic differential equation (BSDE) established in [16] and a time discretization scheme proposed in [2]. The approach connected the PDE and a stochastic differential equation by using the deep BSDE. However, the numerical results showed that the method did not converge when using a standard network of fully connected layers. Other works are based on the connection between PDEs and stochastic processes through the application of the Feynman-Kac formula [10]. The paper [1] solves a Kolmogorov PDE using deep learning. The Feynman-Kac formula may be extended to the full class of high-dimensional linear parabolic equations.
Chebyshev neural networks are suggested in [14] to solve two-dimensional (2D) linear PDEs. The approach involves using a single hidden layer feedforward neural network and approximating the solution with the Kronecker product of two Chebyshev polynomials. However, this approach is still far from being suitable for solving nonlinear PDEs. In some relevant papers [20, 21], the nonlinear terms of PDEs were locally linearized in the temporal domain. However, this approach is only suitable for discrete-time domains and may reduce the accuracy of predictions in highly nonlinear systems. In another approach called MAgNet, proposed in [3], a graph neural network was used to forecast a spatially continuous solution of a PDE given a spatial position query. However, MAgNet uses a first-order explicit time-stepping scheme that can be unstable.
It is widely acknowledged that solving nonlinear-dynamic PDEs like the NSE is a challenging task, especially in high dimensions. The interesting paper [22] presented a solution to the 2D NSE using the PINN. In this study, we propose an improved model-based neural network for solving the NSE and compare it to the PINN [22] approach. We also conduct several experiments involving changes in data sizes and hyperparameters. Finally, we apply these enhanced models to solve the three-dimensional (3D) NSE using a test solution.
## 2 Governing Equations
### 2D NSE for Incompressible Flows
The NSE can be used to model various fluid flows [6] such as Couette-Poiseuille flows, Beltrami flows, pipe and cylinder flow, and oscillating plates. Consider the general form of the 2D NSE for an incompressible fluid:
\[\frac{\partial u}{\partial t}+\beta\left(u\frac{\partial u}{ \partial x}+v\frac{\partial u}{\partial y}\right) =-\frac{\partial p}{\partial x}+\nu\left(\frac{\partial^{2}u}{ \partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right) \tag{1}\] \[\frac{\partial v}{\partial t}+\beta\left(u\frac{\partial v}{ \partial x}+v\frac{\partial v}{\partial y}\right) =-\frac{\partial p}{\partial y}+\nu\left(\frac{\partial^{2}v}{ \partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\right), \tag{2}\]
where \(u\) is the x-component of velocity, \(v\) is the y-component of velocity, \(t\) is time, \(p\) is the flow pressure, \(\nu\) is the fluid viscosity, and \(\beta\) is a fixed constant.
The solution of 2D NSE must satisfy the incompressibility equation, which requires the fluid velocity to be a divergence-free function:
\[\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}=0 \tag{1}\]
The previous equation describes the conservation of mass for the fluid.
When dealing with fluid dynamics, it is necessary to define both the initial and boundary conditions in order to fully specify a boundary value problem. The type of boundary condition used depends on the particular fluid phenomenon being modeled. In some cases, the boundary conditions may be Dirichlet boundary conditions, which specify the values of the solution at the boundaries. In other cases, they may be Neumann boundary conditions, which specify the derivatives of the solution at the boundaries. It is also possible to use mixed boundary conditions, which combine elements of Dirichlet and Neumann boundary conditions. The selection of the appropriate boundary condition depends on the specific problem under consideration.
### 3d Nse
The problems (1)-(3) can be extended mathematically to three dimensions using the following form:
\[\frac{\partial u}{\partial t}+\beta\left(u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}+w\frac{\partial u}{\partial z}\right) =-\frac{\partial p}{\partial x}+\nu\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}+\frac{\partial^{2}u}{\partial z^{2}}\right) \tag{2}\] \[\frac{\partial v}{\partial t}+\beta\left(u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}+w\frac{\partial v}{\partial z}\right) =-\frac{\partial p}{\partial y}+\nu\left(\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}+\frac{\partial^{2}v}{\partial z^{2}}\right) \tag{3}\] \[\frac{\partial w}{\partial t}+\beta\left(u\frac{\partial w}{\partial x}+v\frac{\partial w}{\partial y}+w\frac{\partial w}{\partial z}\right) =-\frac{\partial p}{\partial z}+\nu\left(\frac{\partial^{2}w}{\partial x^{2}}+\frac{\partial^{2}w}{\partial y^{2}}+\frac{\partial^{2}w}{\partial z^{2}}\right) \tag{4}\]
where the velocity components are \(u\), \(v\), and \(w\). The other parameters are defined as previously. Equation (4) introduces the conservation of momentum equation for the \(w\) velocity component. As previously stated, the solution can be sought within the divergence-free set:
\[\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+\frac{\partial w} {\partial z}=0 \tag{5}\]
Solving this type of PDE can be challenging, but recent advancements in deep learning techniques have demonstrated potential in addressing these problems. One such approach is the PINN approach, which is elaborated on in the next section.
## 3 PINN model
The PINN is a deep learning algorithm that has been designed to solve PDEs. It combines the flexibility and expressiveness of neural networks with the ability to incorporate physical laws and constraints into the model.
### Methodology
The PINN approach consists of two main parts. The first part involves generating a candidate solution, referred to as "\(u\)", at predefined points using a multi-layer perceptron (MLP). The second part consists of calculating the terms associated with the PDE for the candidate solution and constructing a loss function that is optimized using a gradient descent algorithm. We will examine in greater detail how the method works using the example below. Consider the following
one-dimensional PDE:
\[\frac{\partial u}{\partial t}+\frac{\partial^{2}u}{\partial x^{2}} =0,\ \ \forall x\in\Omega,\,\forall t\in[0,T]. \tag{1}\] \[u= 0\ \ \text{in}\ \Gamma_{\Omega},\,\forall t\in[0,T]\] (2) \[u(t=0) =u_{0},\ \ \forall x\in\Omega \tag{3}\]
In order to apply the PINN to solve the equation shown above, the neural network takes in \(x\) and \(t\) as inputs and produces a candidate \(u\) as the output. The solution is found through minimizing the following loss function:
\[\text{MSE}_{u}=\text{MSE}_{0}+\text{MSE}_{b}+\text{MSE}_{f}, \tag{4}\]
where:
\[\text{MSE}_{0} =\frac{1}{N_{0}}\sum_{i=1}^{N_{0}}\left|u\left(0,x_{0}^{i}\right)-u_{0}^{i}\right|^{2} \tag{5}\] \[\text{MSE}_{b} =\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}\left|u\left(t_{b}^{i},\cdot\right)-u_{b}^{i}\right|^{2}\] (6) \[\text{MSE}_{f} =\frac{1}{N_{f}}\sum_{i=1}^{N_{f}}\left|f\left(t_{f}^{i},x_{f}^{i}\right)\right|^{2} \tag{7}\]
The set of data points that represent the initial conditions of the problem are given by \(\left\{x_{0}^{i},u_{0}^{i}\right\}_{i=1}^{N_{0}}\), where \(N_{0}\) is the number of data points; the collocation points on the boundary are represented by the set \(\left\{t_{b}^{i}\right\}_{i=1}^{N_{b}}\), where \(N_{b}\) is the number of points; and the set \(\left\{t_{f}^{i},x_{f}^{i}\right\}_{i=1}^{N_{f}}\) denotes the collocation points on \(f(t,x)\).
The mean squared error \(\text{MSE}_{u}\) is a measurement that calculates the average of the squared differences between the predicted and actual values. In the current context, \(\text{MSE}_{0}\) represents the MSE metric specifically associated with the initial conditions of the problem. Similarly, \(\text{MSE}_{b}\) is the MSE metric associated with the boundary conditions, and \(\text{MSE}_{f}\) is a loss function that ensures the solution satisfies the equation within the domain. The function \(f\) is exactly:
\[f(t,x)=\frac{\partial u}{\partial t}+\frac{\partial^{2}u}{\partial x^{2}} \tag{8}\]
The combination of these three loss functions plays a crucial role in obtaining an accurate solution for the problem at hand. This approach can be extended to different types of PDEs, with some exceptions for more complex problems where modifications to the loss function and neural network architectures may be necessary.
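To make the residual term concrete, the following is a minimal PyTorch sketch of \(f(t,x)=\frac{\partial u}{\partial t}+\frac{\partial^{2}u}{\partial x^{2}}\) of equation (8) and the corresponding \(\text{MSE}_{f}\), computed with automatic differentiation; the network size and the uniform sampling of collocation points are illustrative assumptions, not the exact setup of any particular study.

```python
import torch

# A small MLP u(t, x) standing in for the PINN candidate solution.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

def pde_residual(t: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """f(t, x) = u_t + u_xx, via torch.autograd."""
    t.requires_grad_(True); x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_t + u_xx

t_f, x_f = torch.rand(1000, 1), torch.rand(1000, 1)  # collocation points
mse_f = pde_residual(t_f, x_f).pow(2).mean()          # the MSE_f term
```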
### PINN to Solve 2D NSE
A notable study [19] employed the PINN approach to solving the 2D NSE. The authors used an assumption to simplify the equation of incompressibility (3) by using a stream function \(\varphi\) to approximate the \(u\) and \(v\) components:
\[u=\frac{\partial\varphi}{\partial y},\ \ \ v=-\frac{\partial\varphi}{\partial x} \tag{9}\]
The stream function \(\varphi\) is calculated as the neural network's output, such as pressure \(p\). Consider the two functions \(f_{1}\) and \(f_{2}\) such that:
\[f_{1}(t,x,y) = \frac{\partial u}{\partial t}+\beta\left(u\frac{\partial u}{\partial x}+v\frac{\partial u}{\partial y}\right)+\frac{\partial p}{\partial x}-\nu\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right) \tag{10}\] \[f_{2}(t,x,y) = \frac{\partial v}{\partial t}+\beta\left(u\frac{\partial v}{\partial x}+v\frac{\partial v}{\partial y}\right)+\frac{\partial p}{\partial y}-\nu\left(\frac{\partial^{2}v}{\partial x^{2}}+\frac{\partial^{2}v}{\partial y^{2}}\right) \tag{11}\]
Using the previously defined functions (10) and (11), the model is trained with the following loss functions:
\[\text{MSE} = \frac{1}{N}\sum_{i=1}^{N}\left(\left|u\left(t^{i},x^{i},y^{i} \right)-u^{i}\right|^{2}+\left|v\left(t^{i},x^{i},y^{i}\right)-v^{i}\right|^{ 2}\right) \tag{12}\] \[\quad+\frac{1}{N}\sum_{i=1}^{N}\left(\left|f_{1}\left(t^{i},x^{ i},y^{i}\right)\right|^{2}+\left|f_{2}\left(t^{i},x^{i},y^{i}\right)\right|^{2} \right), \tag{13}\]
where the \(\left\{t^{i},x^{i},y^{i},u^{i},v^{i}\right\}\) represents accurate data obtained from a PDE simulation or real-world measurements.
The model is trained on data from experiments involving the flow of an incompressible fluid around a circular cylinder. The data is generated using a numerical solver called Nektar++, which discretizes the solution domain into triangular elements and approximates the solution as a linear combination of tenth-order hierarchical, semi-orthogonal Jacobi polynomial expansions. The method described in this section was used in [22] to solve the 2D NSE. While the PINN approach has proven advantageous in addressing this complex problem, current research is focused on refining the model and improving its effectiveness. The next section provides a comprehensive overview of the enhanced models, which aim to improve the methodology used to solve the NSE.
Figure 1: Method with the assumption for finding solutions to the 2D NSE using the PINN. The domain is represented by variables \(x\) and \(y\), while \(t\) represents time. The stream function is represented by \(\varphi\), and its partial derivatives with respect to \(x\) and \(y\) are represented by \(\varphi_{x}\) and \(\varphi_{y}\), respectively. “Velocity” involves extracting a candidate solution velocity using the assumption. “MLP” is a multi-layer perceptron. “\(L_{1}\), \(L_{2}\)” are the derivative operators associated with the equation.
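To illustrate the construction in Figure 1, the sketch below recovers \(u=\partial\varphi/\partial y\) and \(v=-\partial\varphi/\partial x\) by automatic differentiation from a network that outputs \((\varphi,p)\); the tiny network and the random batch are placeholders for illustration, not the trained model.

```python
import torch

# Placeholder network: inputs (t, x, y), outputs (phi, p).
net = torch.nn.Sequential(torch.nn.Linear(3, 20), torch.nn.Tanh(), torch.nn.Linear(20, 2))

def velocity_from_streamfunction(net, t, x, y):
    """Recover u = dphi/dy and v = -dphi/dx from a network outputting (phi, p)."""
    x.requires_grad_(True); y.requires_grad_(True)
    phi, p = net(torch.cat([t, x, y], dim=1)).split(1, dim=1)
    ones = torch.ones_like(phi)
    u = torch.autograd.grad(phi, y, ones, create_graph=True)[0]
    v = -torch.autograd.grad(phi, x, ones, create_graph=True)[0]
    return u, v, p

t, x, y = (torch.rand(8, 1) for _ in range(3))
u, v, p = velocity_from_streamfunction(net, t, x, y)
```

By construction, any \((u,v)\) obtained this way satisfies the divergence-free equation, which is exactly the point of assumption (9).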
## 4 EPINN-NSE Models
The PINN method is based on deep learning techniques and has shown promise in solving various PDEs. However, further research is necessary to improve its application to more complex problems. There are numerous papers in the literature that focus on improving PINN and its various applications. One of the significant challenges that PINN faces is the issue of training time. While the training duration is not a critical concern for simple cases, it can be a significant issue for complex equations, with training potentially taking days rather than hours. In this context, [24] proposed a technique called discretely trained PINNs (DTPINNs) to accelerate the approach by discretizing the PDE and replacing the exact spatial derivatives with high-order numerical discretizations using the finite difference method. Nonetheless, this approach still has a long way to go before it can be applied to problems that involve continuous time.
The PINN approach lacks a theoretical underpinning that ensures the convergence of the model and may be prone to failing to find the appropriate solution. For instance, the work [13] highlighted some instances where PINN fails to find an accurate solution for certain PDE cases, such as the reaction-diffusion and convection problems. The paper discussed various situations where these networks may fail, such as when the network fails to accurately capture the underlying physics of the system being simulated or when the network is not properly trained. Furthermore, the framework in [13] also provides potential solutions to these issues, such as incorporating more physics information into the network or using different training methods.
Solving the 2D NSE presents additional challenges for PINN. For instance, the model introduced in [22] required an extended training duration (more than 8 hours). The assumption (3.9) is based on the derivative relation between the stream function \(\varphi\) and the velocity components \(u\) and \(v\), respectively. Because the pressure is learned without direct supervision, the probability that the model produces inaccurate pressure solutions is higher. Another issue related to neural network learning is that the PINN approach uses full
Figure 2: Model of PINN without using any assumption for solving the 2D NSE.
batch learning, which may increase the probability of getting stuck on saddle points or local minima. Using full batch learning for training the model may present challenges when working with a large dataset, which can negatively impact the results.
This paper proposes improvements to the previously introduced PINN model for solving PDEs. Specifically, we enhance the model described in Section 3.2 by incorporating training on pressure and using advanced deep learning techniques. This involves introducing a pressure loss function to equations (3.12) and (3.13):
\[\frac{1}{N}\sum_{i=1}^{N}\left(\left|p\left(t^{i},x^{i},y^{i}\right)-p^{i} \right|^{2}\right), \tag{4.1}\]
where \(p\left(t^{i},x^{i},y^{i}\right)\) represents the pressure candidate and \(p^{i}\) the pressure obtained from data. The aim of incorporating training on pressure is to speed up learning, increase its accuracy, and improve the pressure approximation.
In addition to those improvements, we develop a novel model that directly approximates the velocity components and pressure without using the assumption (3.9). Removing this assumption, which is based on the derivatives of the stream function, can improve the training time and the overall results. The new model employs a neural network that takes space and time variables as inputs and produces solutions for velocity and pressure. The governing equations are computed using automatic differentiation to determine the terms of the loss function that the network will minimize.
Furthermore, our models incorporate technical improvements in the neural network architecture and optimized hyperparameters. Several studies [11, 9, 17] have highlighted the advantages of using mini-batch gradient descent. To take advantage of these benefits, we use mini-batch learning in our approach. This enables us to handle large datasets and make use of parallel processing, leading to faster convergence and lower memory requirements. We employed the Adam optimizer [12], which is less sensitive to the initialization of the model parameters. Compared to other optimization algorithms, Adam is less likely to get stuck in local minima or saddle points; it adjusts the learning rate for each parameter based on its past gradients, resulting in faster convergence and improved optimization. To train the models effectively and retain the most effective one, we evaluate the validation loss at each epoch, compare it with previous values, and save the model that performs best. Moreover, we integrated an early stopping criterion, or "patience", which terminates the training process when the model fails to exhibit significant improvement after a predetermined number of iterations.
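A schematic of this training procedure is sketched below on a tiny synthetic regression problem that stands in for the NSE losses; the specific dataset, network, and loss are illustrative, while the mini-batching, Adam optimizer, best-model checkpointing, and patience-based early stopping mirror the description above.

```python
import copy
import torch

torch.manual_seed(0)
X, Y = torch.rand(1000, 3), torch.rand(1000, 1)   # toy stand-in data
train = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X[:800], Y[:800]), batch_size=64, shuffle=True)
val = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(X[800:], Y[800:]), batch_size=64)
model = torch.nn.Sequential(torch.nn.Linear(3, 20), torch.nn.Tanh(), torch.nn.Linear(20, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

best_val, best_state, patience, wait = float("inf"), None, 100, 0
for epoch in range(2000):
    for xb, yb in train:                            # mini-batch learning
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    with torch.no_grad():                           # validation loss per epoch
        v = sum(loss_fn(model(xb), yb).item() for xb, yb in val) / len(val)
    if v < best_val:                                # keep the best model so far
        best_val, best_state, wait = v, copy.deepcopy(model.state_dict()), 0
    else:
        wait += 1
        if wait >= patience:                        # early stopping ("patience")
            break
model.load_state_dict(best_state)
```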
### Solving the 2D NSE
For this case, we aim to employ both models to solve the problem. The first model based on assumption (3.9) is explained in Figure 1 with adding the techniques introduced previously and the loss function associated with pressure (4.1). In the second approach (Figure 2), the neural network generates a candidate solution \(u\), \(v\), and \(p\). This solution is then evaluated using a loss function. Removing the assumption (3.9) requires the addition of the loss function linked to the incompressibility equation (2.7). The neural network is trained to find the solution and learn the functions \(f_{1}(t,x,y)\) and \(f_{2}(t,x,y)\) by minimizing the following loss
function:
\[\text{MSE} = \text{MSE}_{e}+\text{MSE}_{f}+\text{MSE}_{d} \tag{10}\] \[\text{MSE}_{e} = \frac{1}{N}\sum_{i=1}^{N}\left(\left|u\left(t^{i},x^{i},y^{i}\right)-u^{i}\right|^{2}+\left|v\left(t^{i},x^{i},y^{i}\right)-v^{i}\right|^{2}\right)\] (11) \[\qquad+\frac{1}{N}\sum_{i=1}^{N}\left(\left|p\left(t^{i},x^{i},y^{i}\right)-p^{i}\right|^{2}\right)\] \[\text{MSE}_{f} = \frac{1}{N}\sum_{i=1}^{N}\left(\left|f_{1}\left(t^{i},x^{i},y^{i}\right)\right|^{2}+\left|f_{2}\left(t^{i},x^{i},y^{i}\right)\right|^{2}\right)\] (12) \[\text{MSE}_{d} = \frac{1}{N}\sum_{i=1}^{N}\left(\left|u_{x}\left(t^{i},x^{i},y^{i}\right)+v_{y}\left(t^{i},x^{i},y^{i}\right)\right|^{2}\right) \tag{13}\]
where \(N\) is the number of collocation points denoted as \(\left\{t^{i},\ x^{i},\ y^{i}\right\}\) and \(u(t^{i},x^{i},y^{i})\), \(v(t^{i},x^{i},y^{i})\) and \(p(t^{i},x^{i},y^{i})\) are the candidate solutions being evaluated. \(u^{i}\), \(v^{i}\), and \(p^{i}\) are the real solutions obtained from a PDE solver or real-world observations.
The loss function specified in equation (11) measures the difference between the exact and approximate solutions. Additionally, the loss function defined in equation (12) ensures that the solution adheres to the governing equations, while the loss function in equation (13) enforces the divergence-free constraint.
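As an illustration of the extra term required by the assumption-free model, the sketch below evaluates the divergence residual \(\text{MSE}_{d}\) of equation (13) with automatic differentiation; the small network standing in for \(I_{2}\) is a placeholder.

```python
import torch

# Placeholder for the I2 model: inputs (t, x, y), outputs (u, v, p).
net = torch.nn.Sequential(torch.nn.Linear(3, 20), torch.nn.Tanh(), torch.nn.Linear(20, 3))

def divergence_loss(net, t, x, y):
    """MSE_d of equation (13): mean |u_x + v_y|^2 for the assumption-free model."""
    x.requires_grad_(True); y.requires_grad_(True)
    u, v, p = net(torch.cat([t, x, y], dim=1)).split(1, dim=1)
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    v_y = torch.autograd.grad(v, y, torch.ones_like(v), create_graph=True)[0]
    return (u_x + v_y).pow(2).mean()

t, x, y = (torch.rand(16, 1) for _ in range(3))
mse_d = divergence_loss(net, t, x, y)
```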
### Solving the 3D NSE
The 3D NSE is one of the most difficult PDEs to solve. Only a few studies have used deep learning techniques to tackle this equation, particularly in a 3D setting. Some works [15, 23] combined numerical methods, such as the FEM and the finite volume method (FVM), with DNNs. Other works [7, 26, 18] addressed specific cases of the NSE in fluid mechanics problems using PINNs. In this section, we adapt the \(I_{2}\) model to approximate the solution to the 3D problem.
Consider the 3D NSE:
\[\frac{\partial u}{\partial t}+\beta\left(u\frac{\partial u}{ \partial x}+v\frac{\partial u}{\partial y}+w\frac{\partial u}{\partial z} \right)+\frac{\partial p}{\partial x}-\nu\left(\frac{\partial^{2}u}{ \partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}+\frac{\partial^{2}u}{ \partial z^{2}}\right) = h(t,x,y,z) \tag{13}\] \[\frac{\partial v}{\partial t}+\beta\left(u\frac{\partial v}{ \partial x}+v\frac{\partial v}{\partial y}+w\frac{\partial v}{\partial z} \right)+\frac{\partial p}{\partial y}-\nu\left(\frac{\partial^{2}v}{\partial x ^{2}}+\frac{\partial^{2}v}{\partial y^{2}}+\frac{\partial^{2}v}{\partial z^{2} }\right) = g(t,x,y,z)\] (14) \[\frac{\partial w}{\partial t}+\beta\left(u\frac{\partial w}{ \partial x}+v\frac{\partial w}{\partial y}+w\frac{\partial w}{\partial z} \right)+\frac{\partial p}{\partial z}-\nu\left(\frac{\partial^{2}w}{\partial x ^{2}}+\frac{\partial^{2}w}{\partial y^{2}}+\frac{\partial^{2}w}{\partial z^{2} }\right) = k(t,x,y,z),\]
where \((t,x,y,z)\in[0,T]\times\Omega\), such that \(\Omega\subset\mathbb{R}^{3}\) (we assume \(\Omega=[0,1]^{3}\), \(T=20\)). The functions \(h\), \(g\), and \(k\) represent the force components applied to the system under consideration, while the other variables are as defined in Section 2. The solution sought satisfies:
\[\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}+ \frac{\partial w}{\partial z} = I(t,x,y,z) \tag{15}\]
The homogeneous Dirichlet boundary condition is considered for this problem:
\[u(t,x,y,z)=v(t,x,y,z)=w(t,x,y,z)=0,\ \ \forall(x,y,z)\in\Gamma_{\Omega},\forall t \in[0,T], \tag{16}\]
where \(\Gamma_{\Omega}\) is the boundary of domain \(\Omega\).
For the 3D NSE, as illustrated in Figure 3, the neural network inputs consist of the spatial variables \(x,y,z\) and the temporal variable \(t\). The neural network outputs comprise the three velocity components, represented by \(u\), \(v\), and \(w\), as well as the pressure \(p\). Additionally, the loss function is constructed following the same approach as in the 2D case. In this case, the loss functions are represented as:
\[\text{MSE} =\text{MSE}_{e}+\text{MSE}_{f}+\text{MSE}_{d} \tag{4.11}\] \[\text{MSE}_{e} =\frac{1}{N}\sum_{i=1}^{N}\left(\left|u\left(t^{i},x^{i},y^{i},z^ {i}\right)-u^{i}\right|^{2}+\left|v\left(t^{i},x^{i},y^{i},z^{i}\right)-v^{i} \right|^{2}\right.\] (4.12) \[\qquad+\left.\left|w\left(t^{i},x^{i},y^{i},z^{i}\right)-w^{i} \right|^{2}\right)+\frac{1}{N}\sum_{i=1}^{N}\left(\left|p\left(t^{i},x^{i},y^ {i},z^{i}\right)-p^{i}\right|^{2}\right)\] (4.13) \[\text{MSE}_{f} =\frac{1}{N}\sum_{i=1}^{N}\left(\left|f_{1}\left(t^{i},x^{i},y^{ i},z^{i}\right)\right|^{2}+\left|f_{2}\left(t^{i},x^{i},y^{i},z^{i}\right) \right|^{2}+\left|f_{3}\left(t^{i},x^{i},y^{i},z^{i}\right)\right|^{2}\right)\] (4.14) \[\text{MSE}_{d} =\frac{1}{N}\sum_{i=1}^{N}\left(\left|u_{x}(t^{i},x^{i},y^{i},z^ {i})+v_{y}(t^{i},x^{i},y^{i},z^{i})+w_{z}(t^{i},x^{i},y^{i},z^{i})\right.\] \[\qquad\left.-I(t^{i},x^{i},y^{i},z^{i})\right|^{2}\right)\]
The neural network generates outputs identified as \(u\left(t^{i},x^{i},y^{i},z^{i}\right)\), \(v\left(t^{i},x^{i},y^{i},z^{i}\right)\), \(w\left(t^{i},x^{i},y^{i},z^{i}\right)\), and \(p\left(t^{i},x^{i},y^{i},z^{i}\right)\). On the other hand, \(u^{i}\), \(v^{i}\), \(w^{i}\), and \(p^{i}\) represent the true solution obtained from the data. Additionally, \(f_{1}\), \(f_{2}\), and \(f_{3}\) are functions defined as follows:
\[f_{1}(t,x,y,z) =\frac{\partial u}{\partial t}+\beta\left(u\frac{\partial u}{ \partial x}+v\frac{\partial u}{\partial y}+w\frac{\partial u}{\partial z} \right)+\frac{\partial p}{\partial x}-\nu\left(\frac{\partial^{2}u}{\partial x ^{2}}+\frac{\partial^{2}u}{\partial y^{2}}+\frac{\partial^{2}u}{\partial z^{2}}\right) \tag{4.15}\] \[\qquad-h(t,x,y,z)\] (4.16) \[f_{2}(t,x,y,z) =\frac{\partial v}{\partial t}+\beta\left(u\frac{\partial v}{ \partial x}+v\frac{\partial v}{\partial y}+w\frac{\partial v}{\partial z} \right)+\frac{\partial p}{\partial y}-\nu\left(\frac{\partial^{2}v}{\partial x ^{2}}+\frac{\partial^{2}v}{\partial y^{2}}+\frac{\partial^{2}v}{\partial z^{2}}\right)\] (4.17) \[\qquad-g(t,x,y,z)\] \[f_{3}(t,x,y,z) =\frac{\partial w}{\partial t}+\beta\left(u\frac{\partial w}{ \partial x}+v\frac{\partial w}{\partial y}+w\frac{\partial w}{\partial z} \right)+\frac{\partial p}{\partial z}-\nu\left(\frac{\partial^{2}w}{\partial x ^{2}}+\frac{\partial^{2}w}{\partial y^{2}}+\frac{\partial^{2}w}{\partial z^{2}}\right)\] \[\qquad-k(t,x,y,z)\]
The dataset used to train and evaluate the model's performance was generated
using a test solution, which is given by:
\[u :=(t,x,y,z)\mapsto\mathrm{e}^{-t}\sin(\pi x)\sin(\pi y)\sin(\pi z) \tag{4.18}\] \[v :=(t,x,y,z)\mapsto\mathrm{e}^{-t}\left(x^{2}-x\right)\left(y^{2}-y \right)\left(z^{2}-z\right)\] (4.19) \[w :=(t,x,y,z)\mapsto\mathrm{e}^{-t}\sin(\pi x)\sin(\pi y)\left(z^{2} -z\right)\] (4.20) \[p :=(t,x,y,z)\mapsto\mathrm{e}^{-t}xyz \tag{4.21}\]
## 5 Experimental Results and Analysis
In this section, we assess the performance of the improved models in solving the NSE in both 2D and 3D spaces.
### 2D Problem
#### 5.1.1 Dataset
In this study, the data used to train and validate the model is obtained from a computational fluid dynamics simulation of incompressible fluid flow around a circular cylinder. The simulation was performed using the Nektar++ [4] PDE solver, as described in the paper [22]. The simulation was carried out in a rectangular domain \(\Omega=[-15,25]\times[-8,8]\), with a constant velocity applied to the left boundary, zero pressure assumed at the right boundary, and periodic conditions applied to the top and bottom boundaries. The simulation is characterized by the Reynolds number, defined from the free stream velocity, cylinder diameter, and kinematic viscosity as \(R=\frac{U_{\infty}L}{\nu}\); with \(U_{\infty}=1\), \(L=1\), and \(\nu=0.01\), this gives \(R=100\). There are 5000 points in the data set, each sampled at 200 time instants, and it includes values for the velocity components \(u\) and \(v\), as well as the pressure \(p\). The data is presented as a collection of \(\left\{t^{i},x^{i},y^{i},u^{i},v^{i},p^{i}\right\}\) values.
Figure 3: Approach to solve 3D NSE using the \(I_{2}\) model. \(n\) represents the number of neurons per layer and \(k\) represents the number of hidden layers.
#### 5.1.2 Implementations Details
We conducted several experiments2 to evaluate the model's performance. These experiments involved analyzing various factors that influence the model's ability by examining the loss functions, the validation loss, the relative errors of the predicted solutions, and the training duration required to learn and predict solutions. The experiments operate on subsets of data extracted from the global dataset.
Footnote 2: The implementations were executed using the Tesla T4 GPU.
We consider the following notation:
* \(I_{0}\): represents the initial implementation of PINN established in [22].
* \(I_{1}\): represents the implementation associated with the developed model based on assumption (3.9).
* \(I_{2}\): represents the implementation of the model without using any assumption.
Initially, we compared the training performance of one of our proposed models with the approach described in [22].
Table 1 presents a comparison between the initial model \(I_{0}\), introduced in paper [22], and one of the models suggested in this paper, \(I_{2}\). The model \(I_{2}\) is trained using PyTorch, while the other model uses TensorFlow 1.x. Both models are trained on the same GPU. For training, \(I_{0}\) randomly selected 5000 points from the global dataset. To compare with \(I_{2}\), we choose 8334 points from the global data and take 60% (\(\simeq 5000\)) of this data for training and 20% (\(\simeq 1667\)) for validation. As Table 1 shows, \(I_{2}\) outperforms \(I_{0}\) using just 2000 epochs, with a lower training loss and a shorter training time, whereas \(I_{0}\) required 200,000 iterations.
Using a full batch for training model \(I_{0}\) can create challenges when trying to conduct additional experiments. While more data can be added to allow for longer training times, this also raises the risk of the optimization problem getting trapped at a saddle point or local minimum. To address this, we decided to implement the same model using mini-batch learning, validation loss, and early stopping criteria to alleviate these concerns. Additionally, we further improved the model, which we refer to as model \(I_{1}\), by including training on pressure.
Table 2 illustrates the common parameters between the two models \(I_{1}\) and \(I_{2}\). These parameters include the neural network's hyperparameters and the optimization parameters. It should be noted that both models were built from scratch using the
\begin{table}
\begin{tabular}{l c c} \hline & \(I_{0}\) & \(I_{2}\) \\ \hline Software library & TF1.x & Pytorch \\ \hline Dataset & 1000000 & 8334 \\ \hline Training data & 5000 & 5000 \\ \hline Validation data & - & 1667 \\ \hline Iterations-Epochs & 200000 & **2000** \\ \hline Batch learning & full-batch & **mini-batch** \\ \hline Batch size & 5000 & 64 \\ \hline Training loss & 2.825\(\times 10^{-1}\) & **1.2\(\times 10^{-2}\)** \\ \hline Validation loss & - & **6.03e-5** \\ \hline Training duration & \(>\)8h & **1h40min** \\ \hline GPU & Yes & Yes \\ \hline \end{tabular}
\end{table}
Table 1: The initial training comparison of \(I_{0}\) and \(I_{2}\).
PyTorch software library.
In order to evaluate the precision of the models, we calculate the Relative Error (RE) between the predicted solution and the actual solution. RE is calculated using the following formula:
\[RE_{u}=\frac{||predicted\ value(u)-true\ value(u)||}{||true\ value(u)||} \tag{5.1}\]
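A direct implementation of this metric might read as follows (a sketch; the function name is ours):

```python
import numpy as np

def relative_error(pred: np.ndarray, true: np.ndarray) -> float:
    """RE = ||pred - true|| / ||true||, as in equation (5.1)."""
    return float(np.linalg.norm(pred - true) / np.linalg.norm(true))
```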
To conduct additional experiments with larger datasets, we increased the batch size for both implementations, as shown in Table 4, to prevent excessive training times. We also conducted further experiments, reported in Tables 5 and 6, to evaluate the impact of major hyperparameters, such as the number of neurons and layers, on the models. For training and evaluating the models, we used a batch size of 256 and an
\begin{table}
\begin{tabular}{l c} \hline
**Parameters** & **Details** \\ \hline Global data size & 5000*200 \\ \hline Hidden layer & 8 \\ \hline Neurons per layer & 20 \\ \hline Activation function & Tanh \\ \hline Optimizer & Adam \\ \hline Initial learning rate & 0.001 \\ \hline Early stopping criteria & 100 \\ \hline Training data size & 60\% \\ \hline Validation data size & 20\% \\ \hline Test data size & 20\% \\ \hline GPU & Yes \\ \hline \end{tabular}
\end{table}
Table 2: Parameters shared in both models \(I_{1}\) and \(I_{2}\).
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline Experiment & \multicolumn{2}{c|}{\(E_{1}\)} & \multicolumn{2}{c|}{\(E_{2}\)} & \multicolumn{2}{c}{\(E_{3}\)} \\ \hline Model & \(I_{1}\) & \(I_{2}\) & \(I_{1}\) & \(I_{2}\) & \(I_{1}\) & \(I_{2}\) \\ \hline Data size & \multicolumn{2}{c|}{9000} & \multicolumn{2}{c|}{10000} & \multicolumn{2}{c}{20000} \\ \hline Early stopping criteria & \multicolumn{2}{c|}{} & \multicolumn{2}{c}{100} & \multicolumn{2}{c}{} \\ \hline Batch size & \multicolumn{2}{c|}{} & \multicolumn{2}{c}{256} \\ \hline Training Loss & 1.62e-3 & 1.65e-3 & 1.06e-3 & 1.53e-3 & 6.98e-4 & 1.38e-3 \\ \hline Validation loss & 1.66e-3 & 1.86e-3 & 1.33e-3 & 1.78e-3 & 6.81e-4 & 1.18e-3 \\ \hline \(RE_{u}\) & 2.19e-2 & 2.26e-2 & 2.03e-2 & 2.44e-2 & 1.44e-2 & 1.84e-2 \\ \hline \(RE_{v}\) & 7.07e-2 & 6.14e-2 & 5.59e-2 & 6.45e-2 & 3.99e-2 & 5.12e-2 \\ \hline \(RE_{p}\) & 7.30e-2 & 8.15e-2 & 7.30e-2 & 8.26e-2 & 4.81e-2 & 5.87e-2 \\ \hline Training duration & 1h15min & 28min & 1h45min & 32min & 2h16min & 46min \\ \hline \end{tabular}
\end{table}
Table 3: Numerical results of experiments \(E_{1},E_{2}\) and \(E_{3}\) for solving the 2D NSE using models \(I_{1}\)and \(I_{2}\). Both models are executed with the same early stopping criteria and batch size.
early stopping criterion of 100 epochs as patience, along with the selection of 20,000 collocation points. Based on the numerical results presented in Tables 1, 3, 4, 5, and 6, it is evident that the suggested models \(I_{1}\) and \(I_{2}\) outperform the other models in solving the 2D NSE. While the \(I_{1}\) model has been considerably enhanced by integrating deep learning techniques, its training process still requires a considerable amount of time. In contrast, the \(I_{2}\) model demonstrates its capability to compete with the \(I_{1}\) model and achieve good results, while requiring a training time that is at most half that of \(I_{1}\). Furthermore, the results indicate that modifying the architecture and hyperparameters can have a discernible impact on the final results. In fact, as the number of neurons and hidden layers increases, the predicted solution associated with model \(I_{2}\) achieves a higher level of convergence.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline Experiment & \multicolumn{2}{c|}{\(E_{7}\)} & \multicolumn{2}{c|}{\(E_{8}\)} & \multicolumn{2}{c}{\(E_{9}\)} \\ \hline Model & \(I_{1}\) & \(I_{2}\) & \(I_{1}\) & \(I_{2}\) & \(I_{1}\) & \(I_{2}\) \\ \hline Hidden layers & \multicolumn{5}{c}{10} \\ \hline Neurons per & \multicolumn{5}{c|}{10} \\ layer & \multicolumn{5}{c|}{10} \\ \hline Training loss & 3.05e-3 & 5.21e-3 & 9.60e-4 & 9.04e-4 & 2.96e-4 & 2.67e-4 \\ \hline Validation loss & 2.93e-3 & 5.00e-3 & 8.95e-4 & 9.00e-4 & 3.19e-4 & 2.62e-4 \\ \hline \(RE_{u}\) & 2.82e-2 & 4.36e-2 & 1.65e-2 & 1.56e-2 & 8.91e-3 & 7.11e-3 \\ \hline \(RE_{v}\) & 9.72e-2 & 1.14e-1 & 4.54e-2 & 4.37e-2 & 2.88e-2 & 2.34e-2 \\ \hline \(RE_{p}\) & 1.19e-1 & 1.41e-1 & 5.60e-2 & 4.84e-2 & 3.08e-2 & 2.78e-2 \\ \hline Training duration & 4h40min & 1h58min & 4h40min & 1h58min & 2h28min & 1h20min \\ \hline \end{tabular}
\end{table}
Table 5: Experiments with a fixed number of hidden layers and a varying number of neurons per layer.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline Experiment & \multicolumn{2}{c|}{\(E_{4}\)} & \multicolumn{2}{c|}{\(E_{5}\)} & \multicolumn{2}{c}{\(E_{6}\)} \\ \hline Model & \(I_{1}\) & \(I_{2}\) & \(I_{1}\) & \(I_{2}\) & \(I_{1}\) & \(I_{2}\) \\ \hline Data size & \multicolumn{5}{c|}{30000} & \multicolumn{5}{c|}{40000} & \multicolumn{5}{c}{60000} \\ \hline Early stopping criteria & \multicolumn{5}{c}{100} \\ \hline Batch size & \multicolumn{5}{c}{500} \\ \hline Training Loss & 7.26e-4 & 7.11e-4 & 4.02e-4 & 7.33e-4 & 6.62e-4 & 7.00e-4 \\ \hline Validation loss & 6.85e-4 & 6.82e-4 & 3.69e-4 & 6.41e-4 & 5.44e-4 & 6.09e-4 \\ \hline \(RE_{u}\) & 1.37e-2 & 1.30e-2 & 9.20e-3 & 1.30e-2 & 1.19e-2 & 1.30e-2 \\ \hline \(RE_{v}\) & 3.93e-2 & 3.82e-2 & 2.85e-2 & 3.61e-2 & 3.66e-2 & 3.48e-2 \\ \hline \(RE_{p}\) & 4.53e-2 & 4.30e-2 & 4.29e-2 & 3.99e-2 & 4.23e-2 & 4.10e-2 \\ \hline Training duration & 2h30min & 1h12min & 3h30min & 1h40min & 2h55min & 1h30min \\ \hline \end{tabular}
\end{table}
Table 4: Numerical outcomes obtained from experiments \(E_{4},E_{5}\), and \(E_{6}\) after augmenting the data size and elevating the batch size.
In the next section, we solve the non-stationary NSE in a 3D domain using model \(I_{2}\). We selected this model due to its efficacy in addressing the complexity of the problem at hand.
### 3D Problem
#### 5.2.1 Dataset
In order to construct a model for the 3D problem, a dataset comprising the variables \(\left\{x,y,z,t,u,v,w,p\right\}\) is required. In this study, a total of 7318 collocation points were generated in the spatial domain using Gmsh [8], a popular and freely available mesh generator. The time interval was discretized into 200 instants. The solution associated with the points was calculated using equations (4.18)-(4.21).
Afterward, the resulting dataset was rearranged into the form \(\left\{x^{i},y^{i},z^{i},t^{i},u^{i},v^{i},w^{i},p^{i}\right\}\) for all \(1\leq i\leq(7318\times 200)\), which allowed for more in-depth analysis and interpretation of the model's results.
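This generation step can be sketched as follows; random points in \([0,1]^{3}\) stand in here for the 7318 Gmsh-generated mesh nodes, while everything else follows equations (4.18)-(4.21) and the \([0,20]\) time interval stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(7318, 3))   # stand-in for the Gmsh mesh
times = np.linspace(0.0, 20.0, 200)              # 200 time instants in [0, 20]

def exact_solution(t, x, y, z):
    """Test solution (4.18)-(4.21)."""
    u = np.exp(-t) * np.sin(np.pi * x) * np.sin(np.pi * y) * np.sin(np.pi * z)
    v = np.exp(-t) * (x**2 - x) * (y**2 - y) * (z**2 - z)
    w = np.exp(-t) * np.sin(np.pi * x) * np.sin(np.pi * y) * (z**2 - z)
    p = np.exp(-t) * x * y * z
    return u, v, w, p

# Cartesian product of time instants and spatial points, then one record per pair.
T, I = np.meshgrid(times, np.arange(len(points)), indexing="ij")
x, y, z = (points[I.ravel(), i] for i in range(3))
t = T.ravel()
u, v, w, p = exact_solution(t, x, y, z)
dataset = np.column_stack([x, y, z, t, u, v, w, p])   # (7318*200, 8) records
```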
#### 5.2.2 Numerical Results
Due to the complexity of the problem, a substantial dataset was utilized for this experiment3.
Footnote 3: The Tesla A40 GPU was employed to run the implementations.
Figure 4 displays the training and validation losses with respect to the data size. Based on the conducted experiments, we successfully ran the model using 35% of the entire dataset, which comprises over one million examples. Notably, the model's training can be completed in just 3 hours and 30 minutes using 500,000 examples, demonstrating its efficiency. All experiments were conducted using the same parameters listed in Table 2, except that the global data size in this case is \(7318\times 200\).
The results presented in Table 7 were obtained using the number of epochs as a stopping criterion, with 2000 epochs chosen for training the model. This number was considered large enough to give the model sufficient time to optimize and to reveal its performance capabilities. Examining the results shown in Figure 5, we conclude that the relative errors exhibit a decreasing trend as the data size increases. This aligns with the expected behavior, as larger datasets provide more information and thereby enable more precise approximations.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline Experiment & \multicolumn{2}{c|}{\(E_{10}\)} & \multicolumn{2}{c|}{\(E_{11}\)} & \multicolumn{2}{c}{\(E_{12}\)} \\ \hline Model & \(I_{1}\) & \(I_{2}\) & \(I_{1}\) & \(I_{2}\) & \(I_{1}\) & \(I_{2}\) \\ \hline Hidden layers & \multicolumn{2}{c|}{12} & \multicolumn{2}{c|}{15} & \multicolumn{2}{c}{18} \\ \hline Neurons per & \multicolumn{2}{c|}{20} \\ layer & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} \\ \hline Training loss & 1.12e-3 & 1.49e-3 & 1.16e-3 & 8.86e-4 & 1.27e-3 & 9.40e-4 \\ \hline Validation loss & 1.07e-3 & 1.38e-3 & 1.10e-3 & 8.50e-4 & 1.11e-3 & 8.69e-4 \\ \hline \(RE_{u}\) & 1.75e-2 & 2.05e-2 & 1.86e-2 & 1.54e-2 & 1.76e-2 & 1.68e-2 \\ \hline \(RE_{v}\) & 5.11e-2 & 5.42e-2 & 5.21e-2 & 4.25e-2 & 5.59e-2 & 4.23e-2 \\ \hline \(RE_{p}\) & 6.47e-2 & 5.85e-2 & 6.18e-2 & 5.08e-2 & 5.90e-2 & 4.96e-2 \\ \hline Training & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} \\ duration & \multicolumn{2}{c|}{2h20min} & \multicolumn{2}{c|}{1h} & \multicolumn{2}{c|}{3h15min} & \multicolumn{2}{c|}{1h43min} & \multicolumn{2}{c|}{4h10min} & \multicolumn{2}{c|}{2h} \\ \hline \end{tabular}
\end{table}
Table 6: Outcomes of fixing the number of neurons per layer and changing the number of layers.
## 6 Conclusions and Perspectives
The NSE represent a highly intricate challenge to solve, given their dynamic and nonlinear nature as PDEs. Recent research has shown that physics-informed neural networks (PINNs) offer a promising solution to this challenge. Our study has demonstrated the effectiveness of improved models based on PINN for solving the NSE in the 2D and 3D cases. We have shown that these models offer significant advantages over the original PINN approach, including enhanced computational efficiency, more stable convergence, and reduced relative errors. The approach employed in this paper involves utilizing a test solution for the three-dimensional problem to train and evaluate the efficacy of models aimed at solving complex PDEs. This methodology serves as a viable alternative to conducting real experiments or employing PDE solvers, which may prove more costly. These findings suggest that further refinement of this method could open up new opportunities for its application to real-world data from experiments in three dimensions. Overall,
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline Experiment & \(E_{1}\) & \(E_{2}\) & \(E_{3}\) & \(E_{4}\) & \(E_{5}\) & \(E_{6}\) \\ \hline Data size & 40000 & 100000 & 200000 & 300000 & 400000 & 500000 \\ \hline Loss function & 2.13e-5 & 7.23e-6 & 3.46e-6 & 2.56e-5 & 1.49e-5 & 5.37e-6 \\ \hline Validation loss & 1.99e-5 & 5.84e-6 & 3.23e-6 & 2.75e-6 & 1.98e-6 & 2.31e-6 \\ \hline \(RE_{u}\) & 2.13e-2 & 1.20e-2 & 1.06e-2 & 1.13e-2 & 6.97e-3 & 5.69e-3 \\ \hline \(RE_{v}\) & 3.89e-1 & 2.50e-1 & 4.56e-1 & 1.94e-1 & 9.65e-2 & 8.19e-2 \\ \hline \(RE_{w}\) & 6.23e-2 & 2.25e-2 & 2.42e-2 & 2.48e-2 & 2.37e-2 & 1.35e-2 \\ \hline \(RE_{p}\) & 1.58e-2 & 8.47e-3 & 4.24e-3 & 4.98e-3 & 3.99e-3 & 2.91e-3 \\ \hline Training duration & 1h5min & 2h36min & 5h27min & 8h & 11h & 13h41min \\ \hline \end{tabular}
\end{table}
Table 7: The outcomes obtained through the implementation of the \(I_{2}\) model for solving the 3D NSE, wherein the number of epochs was employed as a stopping criterion.
Figure 4: Effects of varying dataset size on training and validation loss.
our work highlights the potential of PINN as a powerful tool for tackling complex physical modeling problems and paves the way for exciting future developments in this area.
In addition to the promising results presented in this study, there are several exciting perspectives for future research. One possibility is to explore the potential of PINN in solving more complex problems, such as those involving multiphase flows or turbulence. Another avenue for investigation is to extend the methodology developed in this paper to other types of PDEs, including those with non-linear boundary conditions or time-dependent coefficients. Moreover, the use of PINN could be extended to other areas of physics and engineering, such as solid mechanics, electromagnetics, and quantum mechanics, to name a few.
Another perspective to consider is the potential for combining PINN with other machine learning techniques, such as deep learning or reinforcement learning, to further enhance their capabilities. The integration of PINN with other methods may allow for more efficient and accurate solutions to complex physical problems, while also providing insights into the underlying physical phenomena.
Finally, it is essential to recognize the potential impact of solving PDEs in various fields of science and engineering. The ability to accurately model and predict complex physical phenomena has significant implications for the design of new materials, the optimization of manufacturing processes, and the development of new technologies. As such, the continued development and refinement of PDE solvers has the potential to transform how we approach modeling and simulation in various domains, leading to exciting discoveries and advancements.
Figure 5: Relative errors of velocity and pressure components in various experiments of 3D NSE solved using the \(I_{2}\) model. |
2308.13029 | Neural networks in feedback for flow analysis, sensor placement and
control | This work presents a novel methodology for analysis and control of nonlinear
fluid systems using neural networks. The approach is demonstrated on four
different study cases being the Lorenz system, a modified version of the
Kuramoto-Sivashinsky equation, a streamwise-periodic 2D channel flow, and a
confined cylinder flow. Neural networks are trained as models to capture the
complex system dynamics and estimate equilibrium points through a Newton
method, enabled by backpropagation. These neural network surrogate models
(NNSMs) are leveraged to train a second neural network, which is designed to
act as a stabilizing closed-loop controller. The training process employs a
recurrent approach, whereby the NNSM and the neural network controller (NNC)
are chained in closed loop along a finite time horizon. By cycling through
phases of combined random open-loop actuation and closed-loop control, an
iterative training process is introduced to overcome the lack of data near
equilibrium points. This approach improves the accuracy of the models in the
most critical region for achieving stabilization. Through the use of L1
regularization within loss functions, the NNSMs can also guide optimal sensor
placement, reducing the number of sensors from an initial candidate set. The
datasets produced during the iterative training process are also leveraged for
conducting a linear stability analysis through a modified dynamic mode
decomposition approach. The results demonstrate the effectiveness of
computationally inexpensive neural networks in modeling, controlling, and
enabling stability analysis of nonlinear systems, providing insights into the
system behaviour and offering potential for stabilization of complex fluid
systems. | Tarcísio Déda, William Wolf, Scott Dawson | 2023-08-24T18:52:48Z | http://arxiv.org/abs/2308.13029v1 | # Neural networks in feedback for flow analysis, sensor placement and control
###### Abstract
This work presents a novel methodology for analysis and control of nonlinear fluid systems using neural networks. The approach is demonstrated on four different study cases being the Lorenz system, a modified version of the Kuramoto-Sivashinsky equation, a streamwise-periodic 2D channel flow, and a confined cylinder flow. Neural networks are trained as models to capture the complex system dynamics and estimate equilibrium points through a Newton method, enabled by backpropagation. These neural network surrogate models (NNSMs) are leveraged to train a second neural network, which is designed to act as a stabilizing closed-loop controller. The training process employs a recurrent approach, whereby the NNSM and the neural network controller (NNC) are chained in closed loop along a finite time horizon. By cycling through phases of combined random open-loop actuation and closed-loop control, an iterative training process is introduced to overcome the lack of data near equilibrium points. This approach improves the accuracy of the models in the most critical region for achieving stabilization. Through the use of L1 regularization within loss functions, the NNSMs can also guide optimal sensor placement, reducing the number of sensors from an initial candidate set. The datasets produced during the iterative training process are also leveraged for conducting a linear stability analysis through a modified dynamic mode decomposition approach. The results demonstrate the effectiveness of computationally inexpensive neural networks in modeling, controlling, and enabling stability analysis of nonlinear systems, providing insights into the system behaviour and offering potential for stabilization of complex fluid systems.
## 1 Introduction
Neural networks (NNs) are powerful tools for data-driven mathematical modelling, finding a range of applications in the field of fluid mechanics such as flow analysis, optimization and control (Gad-el Hak, 1996; Lee _et al._, 1997; Kutz, 2017; Brunton _et al._, 2020). Recent advances in experimental data sampling, the availability of data from increasingly complex simulations, and the widespread availability of libraries that facilitate the design and training of NNs have encouraged their rapid development and application for the purposes of data compression, model order reduction, flow analysis, turbulence modelling, and feature extraction, all leveraging the power of NNs as general nonlinear function approximators. In the field of turbulence modeling, deep neural networks have been used to improve the prediction capabilities of RANS closure
models (Ling _et al._, 2016; Maulik _et al._, 2019). In the area of flow estimation and reconstruction, Milano & Koumoutsakos (2002) presented one of the first uses of neural networks in fluid mechanics for reconstructing near-wall turbulent flow fields. More recently, for example, Fukami _et al._ (2019) employed convolutional neural networks (CNNs) to accurately reconstruct turbulent flows from coarse flowfield images. In the context of reduced order modeling, Lui & Wolf (2019) combined deep neural networks and proper orthogonal decomposition (POD) to build accurate models of unsteady flows involving transient features, rewriting the Navier-Stokes equations as a set of nonlinear ordinary differential equations. More recently, Miotto & Wolf (2023) employed vision transformers and CNNs to extract quantities of interest such as aerodynamic coefficients and skin friction from flowfield visualizations. These authors demonstrated that the neural network models based on input images were effective in interpolating and extrapolating between flow regimes, offering a promising alternative for sensors in experimental campaigns and for building surrogate models of complex unsteady flows.
In the field of flow control, different techniques based on training NNs are found in the literature to either build a model for enabling control design (Morton _et al._, 2018; Bieker _et al._, 2020) or to directly infer a suitable control law for a given task. Training of NN-based models for control design was conducted by Bieker _et al._ (2020), who applied limited sensor data supported by delay coordinates, and Morton _et al._ (2018), in an approach that modelled the dynamics of a cylinder flow with models trained to learn state mappings to span a Koopman invariant subspace. Regarding model-free approaches, reinforcement learning (RL) can be highlighted as a promising tool for automatic training of control strategies with applications to drag reduction, for example. In their work, Rabault _et al._ (2019), Ren _et al._ (2021) and Castellanos Garcia de Blas _et al._ (2022) developed RL strategies to train controllers able to reduce drag in a confined cylinder flow through a set of measurements from velocity probes. Their sensor/actuation setups are employed to test the techniques proposed in the current work. Similarly, Li & Zhang (2022) considered RL strategies to suppress vortex shedding utilising stability analyses and unstable equilibrium computation of the underlying system to obtain improved results. Approaches based on RL have also been successfully applied to complex turbulent flows in both experimental (Fan _et al._, 2020) and computational (Guastoni _et al._, 2023) setups, being able, in some cases, to stabilise the flows (Sonoda _et al._, 2023). Comprehensive reviews of RL for flow control can be found in Vignon _et al._ (2023) and Viquerat _et al._ (2022).
Models based on sparse identification of nonlinear dynamics (SINDy) (Brunton _et al._, 2016) have been combined with order reduction techniques, such as POD, as an alternative to black-box NN models, being able to find simpler and more intuitive models based on a library of interpretable functions. The approach can be applied for building nonlinear models with control inputs, which can be leveraged for closed-loop control applications (Kaiser _et al._, 2018). On the topic of flow optimization through open-loop control, genetic algorithms can be a candidate tool for parameter search. In such cases, studies have been performed to find the best active control setup to improve a fitness function that aims to reduce drag in a bluff body (Zigunov _et al._, 2022), to maximise the thrust vectoring angle in a supersonic jet (Zigunov _et al._, 2023), and to find the best open-loop rotation values in order to reduce drag and symmetrise the wake of the flow past a cluster of cylinders (Raibaudo _et al._, 2020).
Traditional feedback control design approaches often rely on well established control theory, whose literature provides techniques that can ensure optimality, robustness, adaptability or the achievement of specific performance characteristics such as settling (convergence) time. Such linear control methods have been applied for the control of a variety of flow problems (Sipp & Schmid, 2016), such as cavities (Rowley & Williams, 2006; Barbagallo _et al._, 2009), bluff body wakes (Illingworth, 2016; Flinois & Morgans, 2016), and unsteady aerodynamic systems (Brunton _et al._, 2014; Herrmann _et al._, 2022; Sedky _et al._, 2023). However, the broad application of such approaches in fluid mechanics is limited by the fact that they typically rely on linear
approximations of the dynamics near desired equilibrium points. Furthermore, building data-driven linear models of flows is also hindered by the fact that signals sampled from fluid flows can be governed by nonlinear phenomena such as chaos, limit cycles, multiple equilibria, switching, and hysteresis. In the case of unstable flows (more specifically globally unstable flows), datasets containing measurements at (approximately) linear regions near equilibrium can be totally absent. Alternatively, nonlinear control techniques, which include sliding-mode control (Shi _et al._, 2019; Baek _et al._, 2019), feedback linearization (Alyoussef & Kaya, 2019), backstepping control (Zulu & John, 2016), extremum seeking control (Deda & Wolf, 2022) and switched control (Egidio _et al._, 2022), can be applied to the task of controlling nonlinear systems, although stabilization relative to a natural equilibrium point may still require a suitable model with some level of accuracy near equilibrium. Differentiable nonlinear models built from data can be leveraged for model predictive control (MPC), which employs real-time optimization of a cost function within a finite horizon. Although MPC still presents similar limitations near the linear regions for some flows, such as difficulties in stabilizing the system, it has been successfully applied to different flow control problems (Bieker _et al._, 2020; Arbabi _et al._, 2018; Deda _et al._, 2023; Morton _et al._, 2018). For example, near stabilization of the flow past a cluster of cylinders is achieved by Maceda _et al._ (2021).
The determination of optimal or advantageous locations to place sensors to enable full state reconstruction has been explored through a variety of methods, including via gappy proper orthogonal decomposition (Willcox, 2006), a pivoted QR decomposition of an identified set of basis functions (Manohar _et al._, 2018), data-driven dynamical models with Kalman filters (Sashittal & Bodony, 2021; Graff _et al._, 2023), and shallow decoder neural networks (Williams _et al._, 2022). As well as state reconstruction, the problem of sensor placement arises in flow control problems, where the placement of both sensors and actuators are important, and can be optimised and analysed using linear control theory (Chen & Rowley, 2011; Jin _et al._, 2022; Freire _et al._, 2020). The sensor placement methodology employed in the present work differs from these previous methods, and utilises a modified neural network loss function that incorporates the L1 norm of node (and thus physical location) weights. The use of L1-norm-based loss functions has been applied to find sparse solutions in a range of flow analysis problems, such as to find a reduced set of DMD modes to represent flow dynamics (Jovanovic _et al._, 2014), and to find spatially-localised resolvent modes (Skene _et al._, 2022; Lopez-Doriga _et al._, 2023).
In the present work, we present a novel approach that enables nonlinear control design by utilizing neural network architectures to model both the system and control law. To achieve stabilization of complex systems exhibiting nonlinear dynamics, a neural network controller (NNC) is trained in closed loop with a neural network surrogate model (NNSM), which is built from data obtained from a real plant (Deda _et al._, 2023). The methodology is proposed as a candidate solution to overcome the aforementioned lack of measurements near the linear region of unstable nonlinear systems. The training framework also allows for optimal sensor placement by automatic selection from a set of candidate probes. Here, through penalization of model inputs, a sparse configuration is sought that excludes the less relevant sensors. Furthermore, a byproduct consisting of datasets that contain measurements from approximately linear regions of the state space is leveraged for the data-driven estimation of equilibrium points--which are used as control setpoints--and for conducting a data-driven linear stability analysis. This novelty allows for approaching both problems without the need to modify the solver and enables the construction of linearised flow models involving complex geometries, as well as other nonlinear systems. Moreover, it potentially enables equilibrium estimation and stability analysis through data gathered from experimental applications. The capabilities of the proposed methodology are demonstrated through analysis and control of four different plants, namely the Lorenz system, a modified version of the Kuramoto-Sivashinsky equation, a streamwise-periodic 2D channel flow, and a confined cylinder flow.
The remainder of this work is organized as follows: Section 2 describes the tools and techniques introduced and employed throughout the work, while Section 3 describes in detail the proposed iterative approach for training the NNs. In Section 4, we describe the different systems/plants to which the techniques are applied. Section 5 then presents the results for each choice of plant. The benefits and limitations of the approaches presented are discussed in Section 6. Finally, Appendix A describes the hyperparameters for building and training the neural networks for each case studied.
## 2 Methodology
### Neural network surrogate models
In this work, NNSMs are employed to represent the dynamics of complex systems, such as fluid flows. Given a fixed time step \(\Delta t\), we are interested in discrete systems of the form
\[\mathbf{x}_{k+1}=F(\mathbf{x}_{k},\mathbf{u}_{k})\, \tag{1}\]
where \(\mathbf{x}\) is the vector containing the system states, \(\mathbf{u}\) is the vector of control inputs and the subscripts represent the discrete time steps such that time \(t=k\Delta t\). The states vector \(\mathbf{x}\) consists of a set of sensor measurements from which the dynamics can be inferred using data-driven methods. This set of measurements consists of a subset of the full set of states of the true plant.
From a dataset containing measurements of both \(\mathbf{x}\) and \(\mathbf{u}\) from the plant, for multiple time steps, a NNSM is trained to approximate \(F(\mathbf{x},\mathbf{u})\). A schematic of the neural network, with its inputs and outputs, is presented in figure 1. Here, the symbol \(\tilde{F}\) is used to denote the function represented by the NNSM, and thus a good fit would imply \(\tilde{F}\approx F\). The general approach is similar to that described in Deda _et al._ (2023). The network is composed of fully connected hidden layers which employ a rectified linear unit (ReLU) activation function, whereas a linear function is used for the output layer. The data for training and prediction are normalised so that the average for each input and output is 0 and the standard deviation is 1. The loss function used to train this NNSM is discussed in section 2.2. We also point out that the present NNs have low complexity, being limited to a maximum of two hidden layers for the cases studied in this work (with further details provided in Appendix A).
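To make the architecture concrete, the following is a minimal sketch of such a surrogate in PyTorch; the class name, layer width and two-hidden-layer depth are illustrative assumptions rather than the exact implementation used here, and normalization statistics are assumed to be computed from the training data beforehand.

```python
import torch
import torch.nn as nn

class NNSM(nn.Module):
    """Surrogate model x_{k+1} ~ F(x_k, u_k): ReLU hidden layers, linear output."""
    def __init__(self, n_states, n_inputs, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states + n_inputs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_states),  # linear output layer
        )

    def forward(self, x, u):
        # States and inputs are assumed pre-normalised to zero mean, unit variance.
        return self.net(torch.cat([x, u], dim=-1))
```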
### Sensor placement
Another idea explored and applied in the current work is related to the problem of sensor placement. There are several reasons to incorporate sensor placement into the proposed methodology. First, using a smaller number of sensors reduces the total number of training parameters in the NNSM, which in turn reduces the amount of training data required. Second, real-world flow control applications typically use limited real-time measurements, and selecting a small number of sensors that are optimal for control has practical advantages. Lastly, the incorporation of sensor placement into the NN training process allows for optimal sensor locations to be identified without relying on any prior knowledge or intuition of the system dynamics.
The approach proposed here consists of searching for a subset of relevant sensors among an initial set of candidates. To do so, we implement a layer of trainable parameters that are used to weight the NNSM inputs. These weights are penalised using an L1 regularization. The main objective is the production of a sparse layer excluding unnecessary measurements. Hence, after training, the weights below a given threshold (in absolute values) are truncated to 0. For the initialization, each weight value is set to 1.
To avoid these weights becoming small at the cost of growing hidden-layer weights, an L2 regularization is applied to the hidden layers. This ensures that the input weights can only decrease if they are irrelevant for estimating the output, which is still the full set of states at a future time instant. Figure 1 illustrates such a configuration. The loss function
\[\mathcal{L}=\frac{1}{n}\mathbf{g}\cdot\mathbf{g}+r_{2}\mathbf{w}_{\text{h}}\cdot\mathbf{w}_{ \text{h}}+r_{1}\sum\lvert\mathbf{w}_{\text{s}}\rvert\, \tag{2}\]
is used for evaluation of the network convergence, where \(\mathbf{g}\) is the array of differences between the labelled data and the NNSM output, \(\mathbf{w}_{\text{h}}\) is the stacked weights array for the hidden layers, \(\mathbf{w}_{\text{s}}\) is the array of weights for the sparsity layer, \(n\) is the total number of candidate sensors, and \(r_{1}\) and \(r_{2}\) are the L1 and L2 regularization factors. In practice, \(r_{1}\) can be adjusted such that larger values tend to provide sparser configurations at the cost of possibly reducing the accuracy of the model. The parameter \(r_{2}\) can be increased to avoid the hidden layer weights to grow indefinitely larger to compensate for the sparsity layer weights becoming small.
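As a sketch of how the sparsity layer and the loss of equation (2) could be realised in practice (reusing the hypothetical NNSM module above; the names, the regularization factors and the truncation threshold `tau` are assumptions):

```python
import torch
import torch.nn as nn

class SparseInputNNSM(nn.Module):
    """NNSM preceded by a trainable diagonal input layer used for sensor selection."""
    def __init__(self, n_states, n_inputs, hidden=64):
        super().__init__()
        self.w_s = nn.Parameter(torch.ones(n_states))  # sparsity weights, initialised to 1
        self.core = NNSM(n_states, n_inputs, hidden)

    def forward(self, x, u):
        return self.core(x * self.w_s, u)

def nnsm_loss(model, x, u, x_next, r1=1e-3, r2=1e-4):
    g = model(x, u) - x_next                    # prediction error against labelled data
    # L2 over the core network parameters (biases included here for brevity)
    w_h = torch.cat([p.flatten() for p in model.core.parameters()])
    return g.pow(2).mean() + r2 * w_h.pow(2).sum() + r1 * model.w_s.abs().sum()

# After training, negligible sparsity weights are truncated to zero:
# model.w_s.data[model.w_s.detach().abs() < tau] = 0.0
```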
In practice, the final model can be viewed as a function of a reduced vector \(\mathbf{x}_{r}\) containing \(n_{r}\leq n\) values, corresponding to the nonzero components of the sparsity weights, \(\mathbf{w}_{\text{s}}\). Although this is not done in the current work, pruning could also be conducted to reduce the complexity of the neural network by excluding unnecessary connections (Zahn _et al._, 2022). Regardless of pruning, it is now possible to evaluate the NNSM as a function of the following form
\[\mathbf{x}_{k+1}=\tilde{F}_{r}\left(\mathbf{x}_{rk},\mathbf{u}_{k}\right)\, \tag{3}\]
so the full state vector is reconstructed from the reduced set of measurements. Similarly, we can define a function that predicts the components of the state corresponding to identified sensor locations,
\[\mathbf{x}_{rk+1}=\tilde{F}_{rr}\left(\mathbf{x}_{rk},\mathbf{u}_{k}\right). \tag{4}\]
In short, \(\tilde{F}_{r}\) and \(\tilde{F}_{rr}\) are evaluated with the reduced set of states, but the former outputs the full set of states while the latter outputs the reduced set. Both \(\tilde{F}_{r}\) and \(\tilde{F}_{rr}\) are used for different goals in the present work, as will become clear in the following sections.
### Estimation of equilibrium state
To tackle equilibrium estimation, a linearization of the NNSM is obtained through backpropagation. This linearization approach was first introduced by Deda _et al._ (2023) and, here, an improved technique is proposed. In the first step, the reduced states matrix \(\mathbf{A}_{\mathbf{rr}\left(n_{r}\times n_{r}\right)}\) is obtained
Figure 1: Schematic of the NNSM where the red (blue) nodes are related to weights penalised by the L2 (L1) regularization. A ReLU function is employed in the nodes with a white mark, while a linear function is used for the other nodes.
for the time invariant system of the form
\[\mathbf{d_{rk+1}} =\mathbf{A_{rr}}\mathbf{d_{rk}}+\mathbf{B_{r}}\mathbf{u}_{k}\, \tag{5}\] \[\mathbf{d_{rk}} =\mathbf{x_{rk}}-\mathbf{x_{r}}^{*}\, \tag{6}\]
where \(\mathbf{A_{rr}}(n_{r}\times n_{r})\) and \(\mathbf{B_{r}}(n_{r}\times m)\) are constant matrices obtained from the Jacobian of \(\tilde{F}_{rr}\) evaluated at a given equilibrium operating condition \(\mathbf{x_{r}}^{*}\) and \(\mathbf{u}^{*}\) as follows:
\[\nabla\tilde{F}_{rr}=\begin{bmatrix}\mathbf{A_{rr}}&\mathbf{B_{r}}\end{bmatrix}=\begin{bmatrix}\dfrac{\partial\tilde{F}_{rr}}{\partial\mathbf{x}_{r}}&\dfrac{\partial\tilde{F}_{rr}}{\partial\mathbf{u}}\end{bmatrix}=\begin{bmatrix}\dfrac{\partial\tilde{f}_{1}}{\partial x_{r,1}}&\dots&\dfrac{\partial\tilde{f}_{1}}{\partial x_{r,n_{r}}}&\dfrac{\partial\tilde{f}_{1}}{\partial u_{1}}&\dots&\dfrac{\partial\tilde{f}_{1}}{\partial u_{m}}\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ \dfrac{\partial\tilde{f}_{n_{r}}}{\partial x_{r,1}}&\dots&\dfrac{\partial\tilde{f}_{n_{r}}}{\partial x_{r,n_{r}}}&\dfrac{\partial\tilde{f}_{n_{r}}}{\partial u_{1}}&\dots&\dfrac{\partial\tilde{f}_{n_{r}}}{\partial u_{m}}\end{bmatrix}. \tag{7}\]
Here, \(\tilde{f}_{i}\) is the \(i\)-th element in the output of \(\tilde{F}_{rr}\).
In this work, \(\mathbf{u}^{*}=0\) is considered for calculation of a natural equilibrium point of the system, i.e., the equilibrium point related to the uncontrolled system. To compute \(\nabla\tilde{F}_{rr}\), an initial guess \(\mathbf{x}_{r,0}^{*}\) is used. The first estimate for the matrix \(\mathbf{A_{rr}}\) is obtained by taking the first \(n_{r}\) columns of \(\nabla\tilde{F}_{rr}\), while the remaining columns compose \(\mathbf{B_{r}}\) (see equation 7). The Newton method is employed to search for an equilibrium point of \(\tilde{F}_{rr}\).
Since \(\mathbf{u}^{*}=0\), the system of the form
\[\mathbf{x_{rk+1}}=\mathbf{A_{rr}}\mathbf{x_{rk}}+\mathbf{b_{r}} \tag{8}\]
is considered, where \(\mathbf{b_{r}}\) is an invariant vector. In this context, \(\mathbf{b_{r}}\) is introduced to add a bias that shifts equilibrium from the origin. For a given equilibrium point \(\mathbf{x_{r}}^{*}\),
\[\mathbf{x_{r}}^{*}=\mathbf{A_{rr}}\mathbf{x_{r}}^{*}+\mathbf{b_{r}}\, \tag{9}\]
which gives
\[\mathbf{x_{r}}^{*}=(\mathbf{I}-\mathbf{A_{rr}})^{-1}\mathbf{b_{r}}. \tag{10}\]
The first guess \(\mathbf{b}_{r,0}\) is obtained by evaluating
\[\mathbf{b}_{r,0}=\tilde{F}_{rr}(\mathbf{x}_{r,0}^{*},\mathbf{0})-\mathbf{A_{rr}}\,\mathbf{x}_{r,0}^{*}\, \tag{11}\]
which comes from the linear approximation of \(\tilde{F}_{rr}\). With \(\mathbf{b_{r}}\), \(\mathbf{x_{r}}^{*}\) can be updated by evaluating equation 10, which can subsequently be used to update \(\mathbf{b_{r}}\) via equation 11. The iterative process can be repeated to obtain an estimate of the equilibrium point. If desired, the equilibrium point can be mapped to the full set of sensors by evaluating
\[\mathbf{x}^{*}=\tilde{F}_{r}(\mathbf{x_{r}}^{*},\mathbf{u}^{*}). \tag{12}\]
The complete procedure is synthesised in Algorithm 1. Rather than iterating for a fixed number of steps, one could alternatively employ a stopping criterion based on the size of the update, or on the estimated error. It is important to mention that this approach is better suited when the spectrum of \(\mathbf{A_{rr}}\) does not contain eigenvalues at 1+0j (integrator poles). When such poles are present, there is a continuous space of possible equilibrium points, any of which may be found by applying the Newton method. An alternative method for refining the equilibrium estimate, which is based on dynamic mode decomposition with control (DMDc) and which we observe can give improved results, will be described in Section 2.5.
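A compact sketch of this iteration (a stand-in for Algorithm 1, which is not reproduced here) could read as follows; `f_rr` is any differentiable callable such as the reduced NNSM, and the fixed iteration count is an assumed hyperparameter.

```python
import torch

def estimate_equilibrium(f_rr, x_r0, u_star, n_iter=20):
    """Iterate equations (10)-(11) to estimate an equilibrium of the surrogate.

    f_rr   : callable (x_r, u) -> x_r at the next time step
    x_r0   : initial guess for the equilibrium state
    u_star : equilibrium input (zeros for the natural equilibrium)
    """
    x = x_r0.clone()
    eye = torch.eye(x.numel())
    for _ in range(n_iter):
        # State Jacobian A_rr of the surrogate, obtained through backpropagation
        A_rr = torch.autograd.functional.jacobian(lambda xr: f_rr(xr, u_star), x)
        b_r = f_rr(x, u_star) - A_rr @ x          # equation (11)
        x = torch.linalg.solve(eye - A_rr, b_r)   # equation (10)
    return x
```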
### Neural network controller
Neural network controllers (NNC) can be employed for the task of controlling complex nonlinear systems (Deda _et al._, 2023). A nonlinear control law of the form
\[\mathbf{u}_{k}=K(\mathbf{x_{rk}}) \tag{13}\]
is implemented as a machine learning model, where the reduced set of states \(\mathbf{x}_{rk}\) are real-time measurements used to evaluate the control signal. A recurrent network is built for the training task as shown in figure 2. The model is trained such that a set of initial conditions is brought closer to the equilibrium point \(\mathbf{x}_{r}^{*}\) along iterations within a fixed discrete horizon \(n_{h}\).
The training data consists of a set of measurements \(\mathbf{X}_{0}=[\mathbf{x}_{r0,1}~{}\mathbf{x}_{r0,2}~{}\dots~{}\mathbf{x}_{r0,p}]\) containing \(p\) samples of the \(n_{r}\)-state system at an initial time \(k=0\). Note that the subscript \(0\) refers to the initial condition in the context of the finite-time closed-loop recurrent network, not being related to the simulation initial condition. The following loss function is targeted for minimization:
\[\mathcal{L}=\frac{1}{p\,n_{h}}\left(\frac{\mathbf{w}_{\theta}}{n}\cdot\sum_{j=1}^{p}\sum_{i=1}^{n_{h}}\mathbf{\phi}_{i,j}\circ\mathbf{\phi}_{i,j}+\frac{\mathbf{w}_{u}}{m}\cdot\sum_{j=1}^{p}\sum_{i=0}^{n_{h}-1}\mathbf{u}_{i,j}\circ\mathbf{u}_{i,j}\right)\,, \tag{14}\]
\[\mathbf{\phi}_{i,j}=\tilde{\mathbf{x}}_{r\,i,j}-\mathbf{x}_{r}^{*}\, \tag{15}\]
where \(p\) is the size of the training dataset. Here, the weight vector \(\mathbf{w}_{\mathbf{\theta}}\) has all elements equal to \(1/n_{r}\), which averages the result of the sum using an equal weighting of the states. Similarly, \(\mathbf{w}_{\mathbf{u}}\) has all elements equal to \(w_{\mathbf{u}}\), a scalar hyperparameter used to penalise the control inputs.
The hidden NNC layers are implemented with ReLU activation, except for the output layer that has a sigmoid activation, which is normalised to limit the control effort within the desired range. Once the training process is finished, the NNC block is used independently of the training loop to evaluate the control law in real time through equation 13. In the present work, all of the NNCs are implemented with a single hidden layer containing \(8\) nodes, which means they are computationally inexpensive as required for real-time applications. Appendix A further describes the hyperparameters employed.
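A minimal sketch of the controller and of one evaluation of the loss in equation (14) is given below; here `nnsm` is assumed to be the reduced surrogate \(\tilde{F}_{rr}\) with its weights frozen so that gradients only update the NNC, and `u_max`, `n_h` and `w_u` are assumed hyperparameters. The symmetric rescaling of the sigmoid output is one possible way of limiting the control effort.

```python
import torch
import torch.nn as nn

class NNC(nn.Module):
    """Control law u = K(x_r): one ReLU hidden layer, scaled sigmoid output."""
    def __init__(self, n_r, m, u_max, hidden=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_r, hidden), nn.ReLU(),
                                 nn.Linear(hidden, m), nn.Sigmoid())
        self.u_max = u_max

    def forward(self, x_r):
        # Map the sigmoid range (0, 1) to the actuation range (-u_max, u_max).
        return self.u_max * (2.0 * self.net(x_r) - 1.0)

def nnc_loss(nnsm, nnc, X0, x_r_star, n_h, w_u=1e-2):
    """Roll the closed loop of figure 2 forward n_h steps from a batch X0."""
    x, loss = X0, 0.0
    for _ in range(n_h):
        u = nnc(x)
        x = nnsm(x, u)   # NNSM parameters are frozen; only the NNC is optimised
        loss = loss + ((x - x_r_star) ** 2).mean() + w_u * (u ** 2).mean()
    return loss / n_h
```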
### Linear stability analysis
Linear stability theory can be employed to elucidate mechanisms underlying the generation and amplification of flow instabilities. Here, we aim to employ the proposed neural network models to conduct linear stability analyses of unsteady flows, thereby illuminating physical mechanisms associated with unstable equilibrium points. In particular, with such an approach, it will be possible to compute the frequencies and growth rates of the most unstable modes and their associated eigenfunctions.
In an unsteady flow, a Reynolds decomposition allows the splitting of the flow states \(\mathbf{x}(\mathbf{x}_{\mathbf{c}},t)\) in a base flow \(\mathbf{x}^{*}(\mathbf{x}_{\mathbf{c}})\), here obtained at the equilibrium point, plus a time-dependent fluctuation component \(\mathbf{x}^{\prime}(\mathbf{x}_{\mathbf{c}},t)\). In the present notation, \(\mathbf{x}_{\mathbf{c}}\) represents the spatial coordinates. If the fluctuations are sufficiently small, the equations can be linearised about the base flow and the Navier-Stokes equations can then be written in the following linear form
\[\frac{\partial\mathbf{x}^{\prime}}{\partial t}=\mathcal{A}\mathbf{x}^{\prime}\, \tag{16}\]
where the matrix \(\mathbf{\mathcal{A}}=\mathbf{\mathcal{A}}(\mathbf{x}^{*})\) is a linear operator. For a system that is homogeneous in the streamwise (\(x_{c}\)) and spanwise (\(z_{c}\)) directions, the evolution of linear disturbances by equation 16 can be investigated by a direct analysis of the operator \(\mathbf{\mathcal{A}}\) with the transformation
\[\mathbf{x}^{\prime}(x_{c},y_{c},z_{c},t)=\int_{-\infty}^{\infty}\int_{-\infty}^{ \infty}\int_{-\infty}^{\infty}\hat{\mathbf{x}}(\alpha,y_{c},\beta,\omega)e^{ \mathrm{i}(\alpha x_{c}+\beta z_{c}-\omega t)}\;d\alpha\;d\beta\;d\omega\;, \tag{17}\]
where both the streamwise and spanwise wavenumbers \(\alpha,\beta\in\mathbb{R}\), and the frequency \(\omega\in\mathbb{C}\).
In discrete form, and under the transformation of equation 17, equation 16 can be written as
\[-\mathrm{i}\omega\hat{\mathbf{v}}=\mathbf{\mathcal{A}}\hat{\mathbf{v}}\;. \tag{18}\]
In this case, the linear operator becomes \(\mathbf{\mathcal{A}}=\mathbf{\mathcal{A}}(\mathbf{x}^{*},\alpha,\beta)\), and it can be analysed for each separate wavenumber pair \((\alpha,\beta)\) as an eigenvalue problem
\[\mathbf{V}\mathbf{\Lambda}=\mathbf{\mathcal{A}}\mathbf{V}\;. \tag{19}\]
In this equation, the columns \(\hat{\mathbf{v}}_{j}\) of \(\mathbf{V}\) are the eigenvectors of \(\mathbf{\mathcal{A}}\), and the eigenvalues \(\lambda_{j}=-\mathrm{i}\omega_{j}\) are the corresponding diagonal entries of \(\mathbf{\Lambda}\). Here the frequency and growth rate are the real and imaginary parts of \(\omega_{j}\), respectively.
There are several ways in which the identified NNSMs and NNCs can be leveraged to conduct stability analyses of the underlying dynamical systems. Most directly, we could utilise the matrix \(\mathbf{A_{rr}}\) arising from the linearization of the NNSM obtained through backpropagation (equation 7). Rather than linearising the global nonlinear model, we can alternatively identify a linear model directly from data taken near the equilibrium point, which is the method used in the present work. This approach amounts to performing a linear regression inspired by DMDc (Proctor _et al._, 2016). The idea comes from the fact that the controlled system can be slightly perturbed, producing a rich set of data near the equilibrium point, where the dynamics of the system tends to be approximately linear.
Given a dataset \(\mathbf{X}=[\mathbf{d}_{0}\dots\mathbf{d}_{p-1}]\) containing \(p\) consecutive full state measurements, a dataset \(\mathbf{U}=[\mathbf{u}_{0}\dots\mathbf{u}_{p-1}]\) containing the control inputs history, and a dataset \(\mathbf{X}^{\prime}=[\mathbf{d}_{1}\dots\mathbf{d}_{p}]\) containing
Figure 2: Schematic of the NNC training where the initial dataset is propagated through recurrent evaluations of the NNSM and NNC. Only the NNC weights and biases are updated during training in order to bring the states \(\mathbf{x}_{r\,i,j}\), \(1\leqslant i\leqslant n_{h}\), closer to \(\mathbf{x}_{r}^{*}\).
full state measurements with a unit shift, matrices \(\mathbf{A}\) and \(\mathbf{B}\) can be inferred as follows:
\[\mathbf{G} =\mathbf{X}^{\prime}\mathbf{\Omega}^{\dagger}\, \tag{20}\] \[\mathbf{G} \approx\begin{bmatrix}\mathbf{A}&\mathbf{B}\end{bmatrix}\,\] (21) \[\mathbf{\Omega} =\begin{bmatrix}\mathbf{X}\\ \mathbf{U}\end{bmatrix}. \tag{22}\]
As seen, matrix \(\mathbf{\Omega}\) is built from \(\mathbf{X}\) and \(\mathbf{U}\). By calculating its Moore-Penrose inverse \(\mathbf{\Omega}^{\dagger}\), it is possible to compute \(\mathbf{G}\), from which approximations for \(\mathbf{A}\) and \(\mathbf{B}\) can be extracted.
Note that \(\mathbf{X}\) is composed of the deviation of the states from their estimated equilibrium values, as shown in equation 6. Admitting that the estimation of this equilibrium \(\mathbf{x}^{*}\) is not exact, the proposed linear regression can be contaminated. To account for such imperfect estimation, a modification to this method is proposed, where a matrix \(\mathbf{O}=[1\ldots 1]_{(1\times p)}\) containing ones is employed. This allows for the modified procedure:
\[\mathbf{H} =\mathbf{X}^{\prime}\mathbf{\Psi}^{\dagger}\, \tag{23}\] \[\mathbf{H} \approx\begin{bmatrix}\mathbf{A}&\mathbf{B}&\mathbf{c}\end{bmatrix}\,\] (24) \[\mathbf{\Psi} =\begin{bmatrix}\mathbf{X}\\ \mathbf{U}\\ \mathbf{O}\end{bmatrix}. \tag{25}\]
For systems with no integrator poles, this modification allows for the estimation of a system of the form
\[\mathbf{d}_{k+1}=\mathbf{A}\mathbf{d}_{k}+\mathbf{B}\mathbf{u}_{k}+\mathbf{c}\, \tag{26}\]
where \(\mathbf{c}\) is a constant vector that displaces the equilibrium point of the new states \(\mathbf{d}\) from \(0\). Beyond providing a better estimation of \(\mathbf{A}\) to conduct stability analysis, the addition of the constant \(\mathbf{c}\) allows for the correction of the previous estimate of \(\mathbf{x}^{*}\). Although this is not done in the current work, it could also be used, for example, to compensate control loop measurements to improve NNC results. Since the NNC is trained to control the NNSM and, furthermore, \(\mathbf{x}^{*}\) is an equilibrium point for the NNSM, \(\mathbf{c}\) could be used to adapt the NNC loop for the real plant.
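As a sketch, the regression of equations (23)-(25) reduces to a few lines of NumPy (the function and variable names are ours):

```python
import numpy as np

def modified_dmdc(X, U, Xp):
    """Fit d_{k+1} = A d_k + B u_k + c from snapshot matrices (equations 23-25).

    X  : (n, p) deviations d_0 ... d_{p-1}
    U  : (m, p) control inputs u_0 ... u_{p-1}
    Xp : (n, p) deviations shifted by one step, d_1 ... d_p
    """
    p = X.shape[1]
    Psi = np.vstack([X, U, np.ones((1, p))])  # stack states, inputs and a row of ones
    H = Xp @ np.linalg.pinv(Psi)              # H = X' Psi^+   (equation 23)
    n, m = X.shape[0], U.shape[0]
    return H[:, :n], H[:, n:n + m], H[:, -1]  # A, B, c
```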
To reduce computational costs, two different sets of matrices are used based on the sparse configuration obtained as described in section 2.2:
* The first is obtained by using the reduced states dataset \(\mathbf{X}_{\mathbf{r}}=[\mathbf{d}_{\mathbf{r}0}\ldots\mathbf{d}_{\mathbf{r}p-1}]\). Replacing \(\mathbf{X}\) by \(\mathbf{X}_{\mathbf{r}}\), the algorithm produces a matrix \(\mathbf{A}_{\mathbf{r}(n\times n_{r})}\) which maps the reduced set of sensors to the full set at the next timestep, as well as \(\mathbf{B}\) and \(\mathbf{c}\).
* The second set of matrices is obtained by also using \(\mathbf{X}_{\mathbf{r}}\) with the further substitution of \(\mathbf{X}^{\prime}\) by a reduced matrix \(\mathbf{X}_{r}^{\prime}\). This produces the same matrices \(\mathbf{A}_{\mathbf{rr}(n_{r}\times n_{r})}\) and \(\mathbf{B}_{\mathbf{r}(n_{r}\times m)}\) shown in section 2.3--with potentially better accuracy--as well as the reduced bias vector \(\mathbf{c}_{\mathbf{r}}\).
A stability analysis can be conducted by finding the eigenvectors \(\mathbf{V}_{ri}\) and eigenvalues \(\lambda_{i}\) of the square matrix \(\mathbf{A_{rr}}\), where \(i=1,2,\ldots,n_{r}\). Since each \(\mathbf{V}_{ri}\) is an eigenvector of \(\mathbf{A_{rr}}\), applying the one-step map reproduces the same mode up to the scaling \(\lambda_{i}\), which allows the mapping
\[\mathbf{V}_{i(n\times 1)}=\mathbf{A}_{\mathbf{r}(n\times n_{r})}\,\mathbf{V}_{ri(n_{r}\times 1)} \tag{27}\]
to estimate the eigenvectors across all sensor states. This mapping is employed in this work for visualization of modes projected over the full set of candidate sensors.
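Combining the two sets of matrices, the analysis amounts to one eigendecomposition and one matrix product. A sketch follows, assuming `A_rr` and `A_r` come from the regression above and `dt` is the sampling period; the full-sensor modes are recovered up to the scaling \(\lambda_{i}\), which is immaterial when plotting absolute values.

```python
import numpy as np

def stability_analysis(A_rr, A_r, dt):
    """Poles and full-sensor mode shapes from the reduced linear model."""
    lam, V_r = np.linalg.eig(A_rr)  # discrete-time eigenvalues and reduced eigenvectors
    V = A_r @ V_r                   # equation (27): modes over all n candidate sensors
    s = np.log(lam) / dt            # continuous poles: Re(s) growth rate, Im(s) frequency
    return s, V
```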
## 3 Iterative training
A key aspect of the proposed methodology is a training procedure that ensures that both the NNSM and NNC are trained with sufficient quantities of data, particularly
in the regions of state space where model accuracy is important for control effectiveness. To train and improve the neural network models employed in this work, an iterative training algorithm is proposed. Four different modes of data sampling are considered for training the NNSM and the NNC. Each mode produces data that are compiled for training both networks. They are:
* _Uncontrolled sweep_ mode: the system is perturbed by an open-loop control signal with no closed-loop control;
* _Controlled sweep_ mode: the system is controlled in closed loop at the same time that it is perturbed by an open-loop signal;
* _Release_ mode: the plant runs with no control and no open-loop perturbation, allowing for better visualization of the next mode and to test whether the system can be successfully controlled from an initially uncontrolled state;
* _Control_ mode: the system is controlled in closed loop with no open-loop perturbation.
All open-loop perturbations applied in this work are random staircase signals with constant step length.
The _uncontrolled sweep_ is the first mode employed so as to produce data with control input history. A number of samples is gathered from the perturbed plant. These first samples are used to compute the means and standard deviations used for data normalization of the neural networks' inputs and outputs. This data is also used for training the NNSM, estimating its equilibrium point \(\mathbf{x}_{r}^{*}\) and training the NNC, all for the first time. The NNSM sparsity layer has its negligible weights truncated according to a threshold (further described in Appendix A), which amounts to identifying and updating the relevant sensor locations.
The second mode to be run is the _release_ mode, which allows the system to achieve its natural (uncontrolled) behaviour, e.g., a limit cycle or a chaotic attractor. The _control_ mode is applied after the system completely or partially returns to its natural operation. When the NNC is turned on, the dynamics of the controlled system drives the plant states to work under a new behaviour, which can tend to a new limit cycle, a new chaotic attractor or even to an equilibrium point, which may or may not be desirably close to the estimated equilibrium, \(\mathbf{x}^{*}\).
Data is gathered in each of these three steps, building a database that grows along the iterative process. After they are run, the process is repeated for a number of iterations, but instead of doing an _uncontrolled sweep_, the _controlled sweep_ mode is applied in every iteration beyond the first one. The idea is to leverage the fact that the controlled system should have a tendency to get closer to the equilibrium point, and by perturbing the controlled system appropriately, a larger set of data is obtained around this equilibrium. When a new iteration begins, the amplitudes of the open-loop perturbations are reduced by a constant factor \(0<\alpha\leq 1\), so that data is gathered closer to the actual plant equilibrium. At each iteration, the NNSM and the NNC are retrained with new data collected from the full database and the equilibrium point is updated using the retrained NNSM. Algorithm 2 synthesises the steps described, and a schematic showing the phase portrait for the Lorenz system is also presented in figure 3. Each iteration in this figure depicts 1000 measurements sampled during the _sweep_ modes (_uncontrolled_ for iteration 1, _controlled_ for subsequent iterations). As the iterations advance, the closed-loop control tends to bring states closer to the origin, which is an equilibrium point for the Lorenz system. This enhances the estimate of the equilibrium point, and it can be seen in the smaller limits of the axes, as shown for the higher iterations. Also, beginning in iteration 7, since the perturbations are reduced between iterations, they become unable to shoot the controlled states out of the main stable orbit around the now-stabilised equilibrium.
```
 1: for i = 1 to n do
 2:   if i == 1 then
 3:     Run uncontrolled sweep mode to gather data
 4:     Compute parameters for normalization
 5:   else
 6:     Reduce the amplitude of open-loop perturbations by a factor α
 7:     Run controlled sweep mode to gather data around the estimated equilibrium
 8:   end if
 9:   Update the database with the new data
10:   Train NNSM with the updated database
11:   Truncate negligible weights in the sparsity layer
12:   Estimate equilibrium from NNSM
13:   Train NNC with the updated database
14:   Run release mode to allow the system to reach its natural behaviour
15:   Gather data in release mode
16:   Run control mode to drive the system to a new behaviour and test control effectiveness
17:   Gather data in control mode
18: end for
```
**Algorithm 2** Data Sampling Algorithm
Figure 3: Phase portraits of the Lorenz system for data sampled during the _sweep_ modes at different iterations. Note that for iterations 7 to 9 the axes limits are considerably smaller. The red dot indicates the origin, which is an equilibrium point for the Lorenz system.
For the stability analysis, only the data produced by the last _sweep_ mode is used, since it may contain more accurate data near the equilibrium point, hopefully governed by approximately linear dynamics. One of the advantages of using such data is enabling DMD-based techniques to identify structures corresponding to eigenvectors of the equilibrium-linearised system, even for systems exhibiting strong nonlinearities.
## 4 Study cases
This section outlines the four systems that will be used to test and validate the proposed methodology.
### Lorenz equations
The first study case for this work is the Lorenz system, whose dynamics are described by the following set of nonlinear ordinary differential equations
\[\dot{x} = \sigma(y-x)\, \tag{1}\] \[\dot{y} = \rho x-y-xz+u\,\] (2) \[\dot{z} = -\beta z+xy\, \tag{3}\]
where \(\sigma=10\), \(\beta=8/3\) and \(\rho=28\) are chosen for the present analysis. With this set of parameters, the uncontrolled system (\(u(t)\)=0) behaves as a chaotic attractor. The goal is to apply the techniques proposed in this work to find an equilibrium point; to stabilise the system, bringing the states towards this point; and to compute a linear model to obtain the system poles. In this problem we do not conduct sensor placement.
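For reference, the controlled Lorenz right-hand side and a time stepper take only a few lines; the integrator and step size below are our assumptions, since they are not stated here.

```python
import numpy as np

def lorenz_rhs(x, u, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    """Controlled Lorenz equations; the input u enters the y-equation."""
    return np.array([sigma * (x[1] - x[0]),
                     rho * x[0] - x[1] - x[0] * x[2] + u,
                     -beta * x[2] + x[0] * x[1]])

def rk4_step(x, u, dt=0.01):
    """One classical Runge-Kutta step with u held constant over dt (assumed scheme)."""
    k1 = lorenz_rhs(x, u)
    k2 = lorenz_rhs(x + 0.5 * dt * k1, u)
    k3 = lorenz_rhs(x + 0.5 * dt * k2, u)
    k4 = lorenz_rhs(x + dt * k3, u)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```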
### Modified Kuramoto-Sivashinsky equation
The Kuramoto-Sivashinsky (KS) equation is given by the following nonlinear partial differential equation
\[\frac{\partial v}{\partial t}+v\frac{\partial v}{\partial x_{c}}=-\frac{1}{R} \left(P\frac{\partial^{2}v}{\partial x_{c}^{2}}+\frac{\partial^{4}v}{ \partial x_{c}^{4}}\right)\, \tag{4}\]
where \(R\) is equivalent to the Reynolds number in a fluid flow, and \(P\) represents a balance between energy production and dissipation. This system has been used to emulate, for example, combustion instabilities in flame fronts (Sivashinsky, 1977; Kuramoto, 1978) and hydrodynamic instabilities in shear flows (Fabbiane _et al._, 2014). In this context, \(x_{c}\) is the spatial coordinate and \(v(x_{c})\) is the variable of interest.
It is clear that, for any constant \(c\), \(v(x_{c})=c\) corresponds to an equilibrium point, i.e., \(\partial v/\partial t=0\). Since we are interested--among other goals--in computing an equilibrium point for this system, we propose a modified version of the KS equation given by
\[\frac{\partial v}{\partial t}+v\frac{\partial v}{\partial x_{c}}=-\frac{1}{R}\left(P\frac{\partial^{2}v}{\partial x_{c}^{2}}+\frac{\partial^{4}v}{\partial x_{c}^{4}}\right)-\frac{Q}{L}\int_{0}^{L}\left(v(x_{c})-V\right)dx_{c}+\sum_{i=1}^{m}B_{i}(x_{c})u_{i}. \tag{5}\]
The new term \(-Q/L\int_{0}^{L}(v(x_{c})-V)\,dx_{c}\) corresponds to a spatial average of the difference between \(v(x_{c})\) and a chosen equilibrium \(V\). We make use of this modification to ensure that the equilibrium point \(v(x_{c})=V\) is unique for \(u_{i}(t)=0\). A globally unstable configuration is set by choosing \(R=0.25\), \(P=0.05\), \(Q=0.0005\), \(V=0.2\) and \(L=60\) with periodic boundary conditions. The control term \(\sum_{i=1}^{m}B_{i}(x_{c})u_{i}\) follows the approach proposed by Fabbiane _et al._ (2014), where \(m\) is the number of control inputs, \(B_{i}(x_{c})\) is a window function for each actuator and \(u_{i}\) represents the control signals.
The equation is discretised in the present simulation with \(\Delta x_{c}=1\) and \(\Delta t=0.025\). Sampling for control is made every 400 timesteps, thus giving a control timestep \(\Delta t_{c}=10\). Three actuators are employed by using Gaussian window functions centered at \(x_{c}=10\), \(x_{c}=30\) and \(x_{c}=50\). Sampling of \(v\) is conducted at each of the 60 discrete grid points. Figure 4 shows a sensor/actuation schematic. In this test case, we conduct equilibrium estimation, sensor placement, control and stability analysis.
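A sketch of the right-hand side of equation (5) on the 60-point periodic grid follows; second-order central differences and the Gaussian window width are our assumptions, as the spatial scheme is not specified here.

```python
import numpy as np

x_grid = np.arange(60.0)  # dx = 1, L = 60, periodic domain
# Gaussian actuation windows centered at x_c = 10, 30 and 50 (width assumed)
B = np.stack([np.exp(-0.5 * ((x_grid - c) / 2.0) ** 2) for c in (10.0, 30.0, 50.0)])

def ks_rhs(v, u, R=0.25, P=0.05, Q=0.0005, V=0.2, dx=1.0):
    """Modified Kuramoto-Sivashinsky right-hand side (equation 5), periodic BCs."""
    vx = (np.roll(v, -1) - np.roll(v, 1)) / (2.0 * dx)
    vxx = (np.roll(v, -1) - 2.0 * v + np.roll(v, 1)) / dx ** 2
    vxxxx = (np.roll(vxx, -1) - 2.0 * vxx + np.roll(vxx, 1)) / dx ** 2
    mean_term = Q * np.mean(v - V)   # (Q/L) * integral of (v - V) over the domain
    return -v * vx - (P * vxx + vxxxx) / R - mean_term + B.T @ u
```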
### Periodic 2D channel flow
Another problem we consider is streamwise-periodic channel flow. We employ numerical simulations using an open source Nek5000 CFD code previously applied for RL control (Li & Zhang, 2022) to solve the incompressible Navier-Stokes equations for a Reynolds number \(\mathrm{Re}=8000\), based on the half channel height \(h\) and the velocity at the channel center line \(U\). The channel length is set to \(L=36h\) so that two unstable modes appear, at streamwise wavelengths L/5 and L/6. A total of \(m=8\) control inputs \(u_{1},\ldots,u_{8}\) modulate the velocities of 8 pairs of actuators. Each pair works in opposition to ensure a zero-net mass flux, being weighted by a parabolic window function as depicted in figure 5.
The distribution of candidate sensors is presented in figure 6. The chosen locations are concentrated near the wall to capture velocity fluctuations from hydrodynamic instabilities. A vertical line of 50 sensors is also employed for comparison with stability analysis computed from the solution of the Orr-Sommerfeld equation. In total, 332 candidate sensor positions are chosen, where horizontal and vertical velocities \(u(x_{c},y_{c})\) and \(v(x_{c},y_{c})\) are sampled, totalling 664 signals. For this channel problem, we are interested in equilibrium estimation, sensor placement, control, and stability analysis.
### Confined cylinder flow
Finally, the study of a 2D confined flow past a cylinder is presented. For this task, a Nek5000 setup (Li & Zhang, 2022) is employed considering a Reynolds number \(\mathrm{Re}=150\) based on the cylinder diameter \(D\) and maximum velocity \(U\) of the inflow profile. The initial configuration of sensors and actuators is the same as that proposed by Rabault _et al._ (2019), depicted in
Figure 4: Actuation and sensor setup for controlling the modified KS equation. The coloured dots represent the control input profiles \(B_{i}\) at each of the 60 positions measured initially (candidate sensors).
Figure 5: Actuation setup for controlling the 2D channel flow. Each pair of actuators works in opposition, ensuring zero-net mass flux.
figure 7, where 153 locations are chosen, totalling 306 velocity measurements for \(u\) and \(v\). No-slip wall boundary conditions are applied at the top and bottom limits of the domain.
The actuation scheme is presented in figure 8. A single control input modulates minijets in opposition, thus providing zero-net mass flux. For this problem, we are interested in finding an equilibrium point (only to estimate a control setpoint), sensor placement and flow control.
## 5 Results
### Lorenz equations
To assess the performance of the proposed techniques, results for the low order Lorenz system are presented. Starting with the problem of estimating an equilibrium point for this system, figure 9 presents the values of each state at each iteration. Two plots compare this computation using both the backpropagation-based linearization of the neural network (as described in section 2.3) and the alternative version employing the modified DMDc method proposed in section 2.5. In both cases, a convergence towards equilibrium at \(x=y=z=0\) is seen. It is clear that, for the initial iterations, better approximations are obtained using backpropagation (figure 9(a)). The fact that DMD based techniques rely on data sampled from approximately linear systems compromises their performance when the plant is strongly nonlinear. As discussed by Deda _et al._ (2023),
Figure 8: Actuation scheme applied to the cylinder flow. Blowing/suction devices in opposition are modulated by a single control input.
Figure 6: Initial candidate sensor positions for the channel flow. The dimensions are distorted for better visualization.
Figure 7: Initial candidate sensor positions for the confined cylinder flow. The box represents the spatial domain used in the simulation.
the linearization of a nonlinear model at the point of interest can provide better results when nonlinearities cannot be neglected. On the other hand, for the later iterations, the DMD based approach is able to provide more accurate results, since the closed-loop control is able to maintain states near equilibrium, i.e., operating almost as a linear system (figure 9(b)).
The stabilization of the controlled Lorenz system is depicted in figure 10, where signals are presented against time for iterations 1 and 4. For the former case, depicted in figure 10(a), it is still not possible to obtain a stabilizing controller. This might be due to poor estimation of the dynamics near the equilibrium point obtained at that iteration. On the other hand, at iteration 4, shown in figure 10(b), the trained NNC is able to fully stabilise the plant, bringing the states close to \(x=y=z=0\). As seen in figure 9(a), the equilibrium point for iteration 4 is estimated at \(x\approx 0.14\), \(y\approx-0.01\) and \(z\approx 0.53\), which is close enough to allow for the NNC to stabilise the plant (though subsequent iterations have much more accurate estimates).
Finally, a modal stability analysis is conducted, where poles are computed for the controlled Lorenz system. The modified DMDc approach is applied to data obtained from a single iteration of the _sweep_ mode. Similarly to what is observed in figure 9(b), later iterations tend to provide better approximations of equilibria. Figure 11 presents results for iterations 4 and 9. For comparison, the Lorenz system equations are linearised analytically, and the obtained poles \(s_{i}\) are discretised so that \(z_{i}=e^{s_{i}\Delta t}\). These poles are presented as "ground truth" in figure 11. As also shown in figure 3, at iteration 4, the perturbed controlled system is driven to nonlinear orbits. At iteration 9, sampled states are concentrated much closer to equilibrium, thus providing better approximations through linear regression.
### Modified Kuramoto-Sivashinsky equation
For the modified KS equation, two different training approaches are presented, which differ only in the application of the sparsity layer. One case does not include the sparsity layer in the neural network (i.e., the full set of sensors is employed), whereas the other includes a sparsity layer to reduce the number of sensors required. In this second case, the total number of
Figure 9: Equilibrium point estimation along iterations. The estimation through backpropagation is shown in (a), where more accurate estimates are made at first iterations. The fixed estimation through the DMDc variant is shown in (b), where better approximations are found in the last iterations. In this case, the red dots are hidden behind the blue ones.
sensors used to infer a model for the system dynamics, as well as to conduct closed-loop control, is reduced from 60 to 9, as represented in figure 12. In both cases, the training process is conducted along 10 iterations of _sweep_, _release_ and _control_. We observe in figure 12 that the sparse sensors are selected to be approximately evenly distributed throughout the domain, with three sensors located across each of the three Gaussian functions defining the actuators, and with one sensor at or slightly downstream of each of the actuator peaks.
Figure 13 shows the calculation of the equilibrium points along \(x_{c}\) found in each case using Newton's method and the modified DMDc. It is possible to observe in figure 13 (a) that the modified DMDc is able to enhance the accuracy of the equilibrium point when the full set of sensors is employed. However, as a trade-off for reducing the number of sensors, figure 13 (b) shows that the estimate of the equilibrium is worsened with sparse sensors. Despite this, the
Figure 11: Poles of the discrete-time Lorenz system obtained by linearization of the controlled system compared to ground truth positions. (a) At iteration 4, the approximation is still not satisfactory, which is explained by the nonlinear dynamics present in the data employed. (b) At iteration 9, the estimated poles are found considerably closer to the ground truth solution.
Figure 10: _Sweep_, _release_ and _control_ modes (_s.m._, _r.m._ and _c.m._, respectively) for iterations (a) 1 and (b) 4. The trained NNC is unable to stabilise the system in iteration 1, in contrast to the trained control system in iteration 4, where stabilization is achieved. This can be seen by comparing both control modes (_c.m._).
trained NNC is still able to stabilise the system, as exemplified in figure 14, where the waves are controlled until convergence.
For the modal stability analysis of the modified KS equation, the computed poles are shown in figure 15. The ground truth poles are obtained by linearizing equation 4.5 with a discretization performed using fourth-order centered finite difference schemes for all spatial derivatives. The poles are found by computing the eigenvalues \(s_{i}\) of the matrix built from the approximation (with periodic boundary conditions) and transformed to their discrete version \(z_{i}=e^{s_{i}\Delta t}\). The map obtained with the proposed technique is fairly accurate, with good preservation of frequency and growth rates of the eigenvalues closest to the unit circle. Since the sparse case comprises only 9 sensors, only 9 eigenvalues are found.
### Periodic 2D channel flow
The 2D channel flow case is studied with dynamic sensor placement. The final distribution of probes after 19 iterations of training (with _sweep_, _release_, and _control_ modes) is presented in figure 16. It is possible to notice the preference for horizontal velocity probes (\(u\)) rather than vertical (\(v\)) ones. The sensor placement is able to achieve a reduction from 664 to 181 signals, of which 148 and 33 correspond to \(u\) and \(v\) velocity probes, respectively. The spatial distribution of these chosen sensors shows that almost all of the near-wall \(u\) sensors are included, possibly highlighting the
Figure 12: Sparse configuration obtained in training for the case with sensor placement, where 9 sensors are kept for the KS equation.
Figure 13: Estimated equilibrium points of the KS equation found with (a) full and (b) sparse set of sensors. The true equilibrium is \(v(x_{c})=0.2\).
importance of near-wall information for estimating and controlling (with wall-based actuation) the system dynamics.
With feedback of the signals probed at the locations obtained through training, the NNC is able to completely stabilise the flow. A comparison of the uncontrolled and controlled flows is presented in figure 17, where \(v\)-velocity contours are shown. As expected, the actuation and the flow field do not converge to the exact equilibrium, since its estimate is not perfect, especially considering that, for control, we employ the approximation obtained through Newton's method (see section 2.5). The controlled flow is presented in two ways: 1) using the same contour range of the uncontrolled flow, and 2) with a more saturated contour range that allows one to notice some
Figure 14: Space-time map showing the growth (decay) of disturbances in the KS equation before (after) turning on the controller for the (a) full and (b) sparse set of sensors. The black vertical line highlights the instant when the control is turned on.
Figure 15: Poles of the discrete-time modified KS system obtained by linearization of the controlled system compared to the ground truth solution. The (a) full and (b) sparse sensor approaches are compared. Most of the ground truth poles are concentrated close to the origin, making them harder to visualise.
small residual velocity fluctuations. When employing the same scale as that of the uncontrolled case, the bias becomes nearly imperceptible. For the uncontrolled case, the snapshot represents a frame captured from an unsteady flow, where the flow structures are transported from left to right, while the controlled case is simply a steady state.
The present flow includes two unstable modes. These are computed through the eigendecomposition of the associated Orr-Sommerfeld operator, which is discretised using a pseudo-spectral method with Chebyshev polynomials utilizing the toolbox by Weideman & Reddy (2000). The wavelengths of the unstable modes obtained by the Orr-Sommerfeld equation correspond to 1/6 (mode 1) and 1/5 (mode 2) of the channel length. This is in agreement with the wavelengths of the dominant structures observed in simulations of the uncontrolled nonlinear system, as shown in figure 17(a).
By applying the modified DMDc approach to the data obtained near equilibrium, two unstable pairs of modes are found, corresponding to the two unstable modes of the true linearised system. For this case, the discrete poles \(z_{i}\) found with the modified DMDc are converted to a continuous version \(s_{i}=\ln{(z_{i})}/\Delta t\), so that one can directly infer the corresponding growth rates and frequencies. A comparison between the true and estimated unstable eigenvalues is shown in Table 1, where close agreement is observed. In these results, the imaginary part corresponds to the oscillation frequency while the real component provides the growth rate. The higher relative error found for the growth rate when compared to the frequency can be explained by the transient time scales being too slow compared to the discretization time \(\Delta t\). Since \(\Delta t\) needs to be small enough to capture the signals without significant aliasing, and the timescale of the oscillations are considerably faster than the growth rate, the discrete time horizon to capture the long-term growth becomes too large. The present approach is also able to provide the associated eigenvectors. In figure 18, the absolute values of the \(u\) and \(v\)-velocity eigenvectors related to
Figure 16: Probe locations employed with sparse sensing, where the number of probes is reduced from a total of 664 to 181, from which 148 are used for the horizontal velocity component, while 33 are used for the vertical one. The grey dots are deactivated by the sparsity layer.
Figure 17: Comparison of uncontrolled and controlled flows through visualization of \(v\)-velocity contours (see Movie 1). For the controlled case, a more saturated contour range is also shown in the bottom plot, and one can see that the actuation does not converge to zero due to an imperfect estimation of the equilibrium.
each pair of eigenvalues are plotted. Here, these eigenvectors are mapped to the vertical line of sensors for comparing with the ground truth values. Despite displaying a slight asymmetry for the proposed method, which could likely be improved by adding more sensor points, the comparisons are in good agreement.
### Confined cylinder flow
As was the case in section 5.3, this system is modelled and controlled while also incorporating a sparsity layer in the NNSM to select sensors. In doing so, the NNSM training process for the confined cylinder flow selects a total of 47 sensors from the candidate set of 306 sensors: 25 and 18 points for \(u\) and \(v\) velocity components, respectively. While the number of sensors is significantly reduced, as presented in figure 19, the NNC approach with iterative training is able to successfully stabilise this flow. To provide an overview of the training process, the lift coefficient over the cylinder is presented in figure 20 for the first 3 iterations. In the first iteration, the NNSM and the NNC are trained using only open-loop data. For this iteration, the lift fluctuations are only slightly reduced when flow control is turned on. For the following iterations, the models are trained using both open and closed-loop data, which improves the results, as more data is available near the equilibrium point, where the system is approximately linear.
While the stabilization is not completely achieved by the end of the second iteration, a reduction in the main oscillations can be observed. By the third iteration, complete stabilization is achieved through control.
Important features observed in the controlled flow are shown in figure 21, which shows the convergence history of the drag and lift coefficients, as well as of the control input, during the control stage of the third iteration. The present flow control approach is able to suppress both the drag and lift coefficient fluctuations. At the same time, the mean drag is also reduced. The complete
\begin{table}
\begin{tabular}{c c c} \hline \hline & Mode 1 & Mode 2 \\ Ground truth & 1.8601e-3 \(\pm\) 2.6403e-1\(i\) & 6.8185e-4 \(\pm\) 2.0206e-1\(i\) \\ Modified DMDc & 2.0545e-3 \(\pm\) 2.6396e-1\(i\) & 7.4956e-4 \(\pm\) 2.0214e-1\(i\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Eigenvalues for unstable modes 1 and 2 for the 2D channel flow, written as growth rate \(\pm\) frequency \(i\). Colours correspond to modes plotted in figure 18.
Figure 18: Unstable modes for the 2D channel flow. Ground truth and estimated results are compared in terms of absolute values for the eigenvectors mapped to the vertical line of sensors.
stabilization also ensures that only a minimum effort is required to keep the system operating with small oscillations and drag losses. Residual efforts are due to imperfect estimation of the equilibrium, and may also be required to compensate for occasional perturbations. Figure 22 shows \(u-\)velocity contours for the uncontrolled and controlled cases. In the former, vortex shedding develops along the wake, inside the plane channel. On the other hand, a steady solution is obtained for the controlled case.
To contextualize these observations, we now discuss prior control results for this confined cylinder configuration. The most direct comparison can be made with Li and Zhang (2022), who also attempted to stabilize the flow. They proposed an approach for the stabilization problem using reinforcement learning to train neural networks capable of minimizing the vortex shedding energy, which is translated to a reward function that takes the velocity fluctuations at selected probe locations. Such calculations could be done by either subtracting the mean flow or the equilibrium flow found through selective frequency damping (SFD) from the measured velocity probes. By using the former approach, convergence was not observed, whereas using the equilibrium states from SFD enabled stabilization. The results, however, required manual positioning of probes through a heuristic approach informed by the wavemaker region, obtained from the overlap between the leading direct and adjoint eigenmodes. Furthermore, the prior computation of the equilibrium through SFD is a dedicated step that requires additional
Figure 19: Demonstration of the application of the sparsity layer for the confined cylinder flow. The total number of measurements is reduced from 306 to 47 (25 and 18 probes for the horizontal and vertical velocity components, respectively). The grey dots are deactivated by the present L1 regularization.
Figure 20: Evolution of lift coefficient for 3 iterations of the _sweep_ (blue), _release_ (yellow), and _control_ (red) stages for the confined cylinder flow.
simulation-intrusive computations. By contrast, our approach enables automatic equilibrium (and eigenmode) computations, which do not require the use of modified numerical solvers. Li & Zhang (2022) found that successful control of the confined cylinder flow resulted in a small, nonzero mean control signal (i.e., control jet flowrate), which we also observe here. In our case, this is on the order of \(\tilde{u}\) = -6e-4, which is too small to observe on the right subplot of figure 21.
It is more difficult to make a direct comparison with other prior control studies of this system, primarily due to the different control objectives considered. The current work aims for stabilization, whereas other prior investigations considering this confined cylinder flow (Rabault _et al._, 2019; Varela _et al._, 2022) focused on drag minimization. As we holistically approach the control problem proposing methods that also provide estimates of the equilibrium, the stabilization task is directly enabled through NNC training. For this confined cylinder flow, a drag reduction is seen through stabilization, the lift and drag oscillations are suppressed, and the control input required to keep the controlled system at the unstable equilibrium approaches zero. While we find a drag reduction of 2.6% for the fully-stabilized flow, larger drag reductions (5.7% at \(\mathrm{Re}=100\) and 21.6% at \(\mathrm{Re}=200\)) are reported in Varela _et al._ (2022). This suggests that the optimal solution for drag reduction is not the flow at its equilibrium. This can be verified, for example, in the strong overshoot seen in \(C_{D}\) (Fig. 21), where the maximum drag reduction is reached before the flow is stabilized. In fact, in iteration 1, the control system produces an average drag reduction of 3.4%, larger than the 2.6% obtained for the stabilized flow at iteration 3.
Figure 21: History of drag and lift coefficients, and control effort for the third iteration in the control mode. Gray lines show uncontrolled solution.
Figure 22: Comparison of uncontrolled and controlled \(u\)-velocity fields (see Movie 2). The former is represented by a snapshot taken at the flow natural limit cycle. For the controlled case, the snapshot is captured after stabilization is achieved.
## 6 Conclusions
In the present work, neural networks are applied as surrogate models of nonlinear dynamical systems, and also as controllers for stabilizing such systems. The proposed approach is first tested with the Lorenz set of ordinary differential equations, as well as with a modified version of the Kuramoto-Sivashinsky partial differential equation. Then, flow control is demonstrated for a periodic 2D channel flow, for which a data-driven linear stability analysis is also conducted and, finally, for a more complex case of a confined cylinder flow. The trained neural networks provide adequate surrogate models that achieve all the present results with a single hidden layer, except for the confined cylinder case, which employs two layers. In the same fashion, the neural networks obtained to control the investigated systems are also single-layer models with only a few neurons. Despite the simple individual neural network architectures, the present framework is able to obtain effective controllers through a recurrent training strategy. Results also support the efficacy of neural networks for learning complex system dynamics, as well as for obtaining important flow features, such as equilibrium points and leading eigenmodes of the linearised system about these fixed points. Although these data-driven models are trained as black boxes, relevant information can be extracted by using backpropagation.
In this work, several different nonlinear unstable systems are investigated, and their dynamics do not naturally converge to equilibrium points. This behaviour hinders the sampling of data that represents the plant dynamics close to such points, where linear approximations should be able to inform the main eigenvalues and linear modes. As a solution to this problem, we propose the application of NNC with an iterative training of the models. As well as achieving stabilization, this new method proves to be an efficient approach for equilibrium computation, sensor placement, and modal stability analysis. As demonstrated by the results, the iterative approach is key for bringing the system closer to a progressively better estimation of the equilibrium point. In all the cases presented, the first iteration is unable to provide accurate results, especially for equilibrium computation and modal analysis. Even for the control task, stabilization is not achieved, for example, in the cylinder flow case. The improvement achieved with the iterative process makes the presented NN approach an efficient and accurate candidate for flow stabilization and data-driven flow analysis, with the possibility of enabling limited sensor allocation.
The first plant studied with the proposed methodology is the Lorenz system. With the chosen numerical parameters, its dynamics result in a chaotic attractor featuring 3 different equilibrium points. While not explicitly shown, it was found that the initial guess for the equilibrium point has an important role in the iterative process: if initial guesses are too far from an equilibrium of interest, the algorithm may converge to another point, which might not be desirable. In the case presented in the results section, the initial choice at \(x=y=z=0.1\) converges near the origin after a few iterations. Results from this first test indicate that the iterative training of the NNSM and NNC successfully overcomes the lack of data sampled near the equilibrium. As the control strategy improves and the perturbations are reduced in amplitude, datasets containing more measurements near equilibrium are built, improving the quality of the linear approximations and of the control design aimed at stabilization. Linear regression through DMD-based algorithms is also enabled for a considerably broader range of plants, taking advantage of data produced by controlled nonlinear systems that are brought closer to linear operation through feedback control. The modification to the DMDc method used in this work also helps extend the analyses to systems whose equilibrium points are unknown or poorly estimated, allowing for a model fit that also improves the estimate of the equilibrium bias.
The present methodology is next applied to control a more complex problem, a modified version of the Kuramoto-Sivashinsky equation. This modified equation is proposed in order to force a single equilibrium point instead of a continuous space of possibilities. This test case allows the study of the effectiveness of sensor placement through L1 regularization of the inputs. Again,
equilibrium estimation and feedback control obtain satisfactory results, even with a reduction in the number of sensors from 60 to 9. Although this reduction slightly worsens the estimation of equilibrium and, therefore, adds further errors to control convergence, tuning the L1 regularization through the choice of a single hyperparameter leads to a configuration with an intermediate number of sensors at reduced accuracy cost. Even so, the application of Newton's method with linearization of the NNSM through backpropagation is able to provide an equilibrium point estimation that allows for stabilization and, therefore, to obtain the approximately linear dataset subsequently used to identify the dominant eigenmodes of the linearized system.
The NNC methodology is then applied to the Navier-Stokes equations. As a first case, stabilization of a 2D streamwise-periodic channel flow is sought, where the capability of conducting a linear stability analysis is also explored for a flow configuration with two unstable modes. The L1 regularization is applied in this case and, from a total of 664 measured signals, training is able to reduce sensing to 181 probes. Stabilization is achieved with 8 pairs of actuators in opposition through NNC, which enables sampling of data close to equilibrium, i.e., of signals described by approximately linear dynamics. The data is used for performing a linear regression through a modified DMDc. The unstable eigenvalues and eigenmodes computed by this technique show good agreement with those obtained directly from the solution of the linear Orr-Sommerfeld operator. The different magnitudes of the growth rate and frequency of the unstable modes result in different levels of accuracy for their estimation, with the growth rates being close to 0. In this sense, the frequencies present lower relative errors than the growth rates due to the large discrete time intervals associated with the latter for the chosen time step. The natural growth rate corresponds to a very small component of the measured signal compared to the effect of the uncontrolled frequency.
Finally, control of a confined cylinder flow is also presented. The setup chosen with actuators in opposition is the same proposed in different studies found in literature (Rabault _et al._, 2019; Li & Zhang, 2022). Here, we assess the ability of the neural network model to stabilise a complex system through the iterative training framework. Even for this more complex case, the NNC is built from a single layer containing only 8 nodes. Results show that flow control suppresses the lift and drag oscillations from vortex shedding and also reduces the mean drag by bringing the flow to its equilibrium point. Not only is the system successfully controlled, but additionally the low complexity of the NNC suggests that there is considerable room for applying the technique to considerably more complex fluid systems without prohibitive costs. For this confined cylinder configuration, prior work (Li & Zhang, 2022) found that RL control performance could be improved through equilibrium computation and linear stability analysis in the vicinity of this equilibrium. Here, we achieve similarly successful control results without needing to perform these auxiliary computations explicitly, but where the results of such computations are available as a byproduct of our modeling and control methodology.
In each of the examples featuring partial differential equations across a spatial domain, we have utilized a sparsity layer in the NNSM to reduce the number of sensor measurements (and thus model inputs). While the resulting sensor locations are chosen to optimise a given cost function, they are not necessarily unique solutions, in the sense that alternative sensor locations may be obtained for different realizations (with different random input signals and randomly initialized NN weights) of the training procedures. This is clear from observing that the identified sensors do not follow the same symmetry properties as the geometries of the problem. That being said, the chosen sensor locations do reveal some aspects of the underlying physics. For the modified KS equation, the sensors are distributed throughout the domain, in accordance with the translation invariance of the problem (aside from the actuator locations). For the 2D channel flow, the sensor locations indicate the importance of near-wall streamwise velocity measurements, with a sparser set of measurements away from the wall. The asymmetry in the chosen wall-normal velocity probe locations is perhaps due to the fact that the instabilities in this system
consist of mode shapes where the vertical velocity component extends across the whole domain, so taking measurements at both the upper and lower walls may be unnecessary. In the confined cylinder flow, we find that sensors both close to the cylinder, and downstream in the wake are chosen, indicating the importance of measurements in both regions for suppressing the natural vortex shedding behavior. While beyond the scope of the present work, it could be interesting to compare the identified sensor locations with those obtained from alternative methods (Manohar et al., 2018; Sashittal and Bodony, 2021; Williams et al., 2022; Graff et al., 2023).
The modeling and control framework utilised here is similar in principle to that used in classical linear control theory, with separate models for the plant and controller connected in feedback. While nonlinear control is substantially more complex, here we show that relatively simple neural networks can be used to develop effective nonlinear models for both the plant and controller, when trained with a method that promotes the generation of large quantities of near-equilibrium data. Future work could also utilise linearization of the plant and/or controller models for linear control design, with better authority over the plant behavior through well studied control design techniques that can ensure robustness, optimality or specific pole placement.
While it was demonstrated that the proposed methodology could be used to identify modal linear amplification mechanisms, future work could extend these methods to study nonmodal amplification, such as transient growth and resolvent analysis. Data-driven implementations of such analyses have been implemented previously (Herrmann et al., 2021), though for non-normal systems it may be more difficult to obtain accurate results without adjoint operators/data, which may require further modifications to the methodology presented here.
We lastly emphasise that the methods and results are obtained from non-intrusive methods in the sense that only the nonlinear system (with appropriate control inputs) has been used. This means that the methodology could in principle be applied (for both control and flow physics analysis) in an experimental setting. This, as well as the application to systems presenting higher degree of complexity, remains the subject of future work.
## Acknowledgments
The authors acknowledge the financial support received from Fundacao de Amparo a Pesquisa do Estado de Sao Paulo, FAPESP, under Grants No. 2013/08293-7 and 2021/06448-0. The first author is supported by FAPESP PhD scholarships No. 2019/19179-7 and 2022/00469-8, which are also acknowledged. STMD acknowledges support from NSF Award 2238770. The computational resources used in this work were provided by CENAPAD-SP (Project 551), and by LNCC via the SDumont cluster (Project SimTurb).
## Declaration of Interests
The authors report no conflict of interest.
## Appendix A Hyperparameters
In this appendix, the chosen values for the hyperparameters in each case are presented. They are organised in table 2. Regarding the iterative training, the "number of iterations" corresponds to how many times the proposed training steps are conducted. It is expected that at each iteration the perturbed plant controlled by the NNC tends to produce more data near equilibrium. The "number of steps in _sweep_", "number of steps in _release_" and "number of steps in _control_" correspond to the number of measurements sampled in each mode at each iteration.
The open loop control signal is parameterised by a "control input saturation value", which defines the possible uniform interval of random values it can assume. The staircase signal has
a step width given by the "number of time steps for each stair step (open loop signal)". Also, the "maximum amplitude decay at each iteration" parameter makes sure the open-loop signal is reduced in amplitude at each training iteration.
From each mode (_sweep_, _release_ and _control_), data is sampled for training, but the dataset size is limited to a "maximum number of points sampled for training NNSM". This ensures that the training cost does not increase beyond a certain point. A "NNSM learning rate" is defined for the ADAM optimization algorithm, and the model is updated through a "number of epochs at each training iteration for the NNSM". The "number of neurons at NNSM hidden layers" hyperparameter defines the number of layers, which is given by the number of elements in the array of numbers, and the number of neurons in each layer, which is the value at each position. Note that, except for the cylinder case in table 2, all NNSM models are trained with a single layer. For assessment of possible overfitting, an "NNSM training set ratio" as a fraction of sampled data is also chosen, which allows for the comparison of losses between a training set and a testing set of samples. The "hidden layers L2 regularization" parameter is the \(r_{2}\) variable defined in this work. "Use sparsity layer" is a Boolean that determines if sparse sensor placement is used. If so, we can set the "sparsity layer L1 regularization" parameter (\(r_{1}\)) and the "sparsity layer truncation tolerance" that determines the absolute value below which the input layer weights are truncated to zero. The input layer is organised according to the "number of control inputs" and the "number of states" measured.
Similarly, a "maximum number of points sampled for training NNC" is also defined to avoid excessive complexity, as well as the "NNC learning rate" for ADAM and the "number of epochs at each training step for NNC". The "training finite horizon length" in number of steps (\(n_{h}\)) is also presented. For the NNC, an "NNC training set ratio" as a fraction of sampled data is also chosen. The "number of neurons at NNC hidden layers" is always the same in all cases, which consists of a single layer containing 8 neurons. "NNC control saturation" stands for the limits of the control effort that the NNC can provide. It is implemented as an output layer consisting of weighted sigmoid functions. The "control input weight in loss function" corresponds to \(\mathbf{w_{u}}\).
Finally, regarding equilibrium computation, the "initial guess for equilibrium point" is presented for each case, corresponding to the first guess used in the first training iteration. At each of these iterations, the estimation of equilibrium is updated along a "Number of steps for the Newton method".
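As an illustration of how these settings fit together, the sketch below organises them into a single Python configuration dictionary. This is not the authors' code, and all names and values are hypothetical placeholders rather than the entries of table 2.

```python
# Hypothetical configuration sketch; names and values are illustrative only.
config = {
    # iterative training
    "num_iterations": 3,
    "steps_sweep": 2000, "steps_release": 500, "steps_control": 2000,
    # open-loop staircase signal
    "control_saturation": 0.5,            # uniform interval for random steps
    "steps_per_stair": 50,
    "amplitude_decay_per_iter": 0.5,
    # NNSM
    "nnsm_max_train_points": 20000, "nnsm_lr": 1e-3, "nnsm_epochs": 200,
    "nnsm_hidden_layers": [40],           # one hidden layer of 40 neurons
    "nnsm_train_ratio": 0.8, "hidden_l2": 1e-6,         # r2
    "use_sparsity_layer": True, "sparsity_l1": 1e-4,    # r1
    "sparsity_truncation_tol": 1e-3,
    # NNC
    "nnc_max_train_points": 20000, "nnc_lr": 1e-3, "nnc_epochs": 200,
    "nnc_hidden_layers": [8], "horizon_length": 50,     # n_h
    "control_input_weight": 0.1,                        # w_u
    # equilibrium computation
    "equilibrium_initial_guess": [0.1, 0.1, 0.1],
    "newton_steps": 10,
}
```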
|
2302.12185 | Scaling Up Computer Vision Neural Networks Using Fast Fourier Transform | Deep Learning-based Computer Vision field has recently been trying to explore
larger kernels for convolution to effectively scale up Convolutional Neural
Networks. Simultaneously, new paradigm of models such as Vision Transformers
find it difficult to scale up to larger higher resolution images due to their
quadratic complexity in terms of input sequence. In this report, Fast Fourier
Transform is utilised in various ways to provide some solutions to these
issues. | Siddharth Agrawal | 2023-02-02T19:19:10Z | http://arxiv.org/abs/2302.12185v1 | # Scaling Up Computer Vision Neural Networks using Fast Fourier Transform +
###### Abstract
Deep Learning-based Computer Vision field has recently been trying to explore larger kernels for convolution to effectively scale up Convolutional Neural Networks. Simultaneously, new paradigm of models such as Vision Transformers find it difficult to scale up to larger higher resolution images due to their quadratic complexity in terms of input sequence. In this report, Fast Fourier Transform is utilised in various ways to provide some solutions to these issues.
Fast Fourier Transform, Convolutional Neural Networks, Vision Transformers
## 1 Introduction
While the Fourier Transform has seen many applications in data compression and signal processing, its applications in Deep Neural Networks are limited. They are often only used for Medical Imaging based applications. Here, I discuss three approaches to scaling up Neural Networks for computer vision using the Fast Fourier Transform (FFT).
Note: Python libraries were used for FFT as they provide extremely efficient CUDA-based approaches for FFT on the GPU. The report only contains part of the code; the entire code-base is very large and includes the dataloaders, hyperparameter configurations, scripts to test fps, datasets, training and validation engines, and the model implementations themselves. The entire codebase can be found at [https://github.com/siddagra/fourier-project](https://github.com/siddagra/fourier-project)
## 2 Fourier Vision Transformers
### Introduction to Vision Transformers
Vision Transformers (ViT) [1] extended the use of transformers to the domain of computer vision. Typically, the quadratic \(O(n^{2})\) complexity in terms of input causes transformers like the one proposed in the original work [1] for Natural Language Processing (NLP) to perform poorly in high dimensional tasks such as computer vision, where input size is large. ViT proposed to scale transformers up to images by dividing the image into 16x16 patches and computing patch embeddings of these.
Given an image tensor \(I\in R^{W\times H\times C}\), patch embeddings are computed via a \(16\times 16\) convolution with \(n\) kernels. The convolution stride is set to the patch size. The final output is flattened and transposed along the last two dimensions, resulting in the embeddings tensor \(P\in R^{\frac{W}{16}\cdot\frac{H}{16}\times n}\). Lastly, learned embeddings of each patch's position within the image are added to each patch embedding.
The process is shown in the code below:
class PatchEmbeddings(nn.Module):
    def __init__(self, config):
        super().__init__()
        img_size = config.img_size
        patch_size = config.patch_size
        grid_size = (img_size[0] // patch_size[0],
                     img_size[1] // patch_size[1])
        num_patches = grid_size[0] * grid_size[1]
        # learnable classification token prepended to the patch sequence
        self.cls_token = nn.Parameter(torch.randn(1, 1, config.embed_dim))
        # 16x16 convolution with stride = patch size computes the patch embeddings
        self.proj = nn.Conv2d(config.in_chans, config.embed_dim,
                              kernel_size=patch_size, stride=patch_size,
                              bias=True)
        self.norm = nn.LayerNorm(config.embed_dim)
        self.positional_embeddings = nn.Parameter(
            torch.zeros(1, num_patches + 1, config.embed_dim))

    def forward(self, x):
        B, C, H, W = x.shape
        x = self.proj(x)                      # B, embed_dim, H/16, W/16
        cls_token = self.cls_token.repeat(B, 1, 1)
        x = x.flatten(2).transpose(1, 2)      # B, num_patches, embed_dim
        x = torch.cat([cls_token, x], dim=1)
        x = x + self.positional_embeddings
        x = self.norm(x)
        return x                              # B, P + 1, embed_dim
The typical Vision Transformer would now use the multi-headed self-attention mechanism. Which can be summarised as the following operation:
\[Attention(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V\] \[head_{i}=Attention(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V})\] \[MultiHead(Q,K,V)=Concat(head_{1},...,head_{h})W^{O}\]
where \(W_{i}^{K}\), \(W_{i}^{Q}\), \(W_{i}^{V}\) and \(W^{O}\) are matrices of learnable parameters, dictating the projection of the input.
Thus, in layman's terms, by the above formulation, the self-attention can look at the inputs to its layer and decide how much importance or weightage to give all other inputs when pondering over a specific token (image patch embeddings in this case), via the alignment between \(Q\) and \(K\), followed by multiplication with \(V\) after normalisation and \(softmax\). The multi-headedness allows the transformer to compute multiple such attention maps, and thus promotes thinking from different perspectives. Often, multiple different types of feature extraction methods are required, and both global and local receptive fields are needed to achieve good results. Multiple heads promote this.
Specifically, the self-attention part comes from the fact that all: query (\(Q\)), key (\(K\)), and value (\(V\)) are equal to the input of the layer.
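For concreteness, a minimal PyTorch sketch of the multi-headed self-attention described above is given below (not the exact ViT implementation; it follows the equations with \(Q=K=V=x\), with one fused projection per \(Q\)/\(K\)/\(V\) across heads).

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.q_proj = nn.Linear(embed_dim, embed_dim)    # W^Q for all heads
        self.k_proj = nn.Linear(embed_dim, embed_dim)    # W^K for all heads
        self.v_proj = nn.Linear(embed_dim, embed_dim)    # W^V for all heads
        self.out_proj = nn.Linear(embed_dim, embed_dim)  # W^O

    def forward(self, x):                                # x: (B, N, embed_dim)
        B, N, _ = x.shape
        def split(t):                                    # -> (B, heads, N, d_k)
            return t.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        q, k, v = split(self.q_proj(x)), split(self.k_proj(x)), split(self.v_proj(x))
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)  # concatenate heads
        return self.out_proj(out)
```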
Layer Norm [2] (Fig. 1) is normalisation applied before computations involving learnable parameters, as normalisation has been shown to make learning easier and converge faster. This happens as it reduces the internal covariate shift associated with the \(ReLU\) or \(GeLU\) activation functions, and also brings all features to a similar scale [3]. The normalisation is computed using two learnable parameters, \(\gamma\) and \(\beta\), together with a small constant \(\epsilon\), and is summarised as follows:
\[y=\frac{x-\mathrm{E}[x]}{\sqrt{\mathrm{Var}[x]+\epsilon}}*\gamma+\beta\]
Thus, it is analogous to \(\gamma\) controlling the variance and \(\beta\) controlling the mean of the input features' distributions. \(\epsilon\) is a small constant added to the variance for numerical stability, preventing division by zero.
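As a quick sanity check of the formula, the snippet below compares a manual implementation against PyTorch's built-in layer norm (with \(\gamma=1\) and \(\beta=0\)); this is an illustrative sketch, not code from the project's codebase.

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 5)
eps = 1e-5
manual = (x - x.mean(-1, keepdim=True)) / torch.sqrt(
    x.var(-1, unbiased=False, keepdim=True) + eps)
builtin = F.layer_norm(x, normalized_shape=(5,), eps=eps)
assert torch.allclose(manual, builtin, atol=1e-6)
```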
A multilayer perceptron (MLP) (Fig. 1) is simply a recursive formulation wherein the input to the layer is linearly projected, and then an activation function is applied. This is repeated for some \(n\) number of layers set by the developer. Specifically, a single layer of MLP can be mathematically denoted as follows:
\[y=act(W^{T}*x+b)\]
where \(act\) is some non-linear activation function such as \(ReLU\), \(GeLU\), \(Sigmoid\), etc.
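A single transformer MLP of the recursive form above can be sketched as follows (dimensions are illustrative):

```python
import torch.nn as nn

def mlp(embed_dim, hidden_dim):
    # two layers of y = act(W^T x + b), with GELU as the non-linearity
    return nn.Sequential(
        nn.Linear(embed_dim, hidden_dim),
        nn.GELU(),
        nn.Linear(hidden_dim, embed_dim),
    )
```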
This transformer block is stacked multiple times, achieving a deep neural network with hierarchical representations. At the end, the output of a special CLS token is fed through an MLP to get the image classification.
### Proposed Approach: Fourier Image Transformers
Due to the reliance of Vision Transformers on self-attention, it has a space and time complexity of \(O(n^{2})\) making it difficult to scale up to large images and higher resolutions.
Thus, I borrow key insights from FNet [4] which uses Fast Fourier Transform (FFT) to mix input signals in word embeddings. FNet was originally used in the domain of NLP, where it achieved then state-of-the-art results in long range arena benchmarks [5]. I try to extend this to the computer vision domain similar to [1] via patch embeddings. FNet replaces the quadratic complexity self-attention layer, with an FFT operation to mix all tokens (embeddings) in \(O(nlogn)\).
The discrete Fourier Transform (DFT) is defined by the formula:
\[X_{k}=\sum_{n=0}^{N-1}x_{n}e^{-\frac{2\pi i}{N}nk},\quad 0\leq k\leq N-1\]
and can be computed in \(O(nlogn)\) using Fast Fourier Transform.
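The snippet below verifies the DFT formula against numpy's \(O(n\log n)\) FFT by building the \(O(n^{2})\) DFT matrix explicitly; it is a sanity check, not part of the model code.

```python
import numpy as np

def dft(x):
    n = len(x)
    k = np.arange(n)
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)  # DFT matrix
    return W @ x

x = np.random.randn(64)
assert np.allclose(dft(x), np.fft.fft(x))
```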
Similar to ViT, patch embeddings are generated for the image and flattened. However, now they are fed through a Fourier transform layer instead of a multi-headed self-attention layer.
Specifically, the layer applies a 2D DFT: a 1D DFT along the dimension of the sequence of all the patch embeddings (\(\mathcal{F}_{seq}\)), and another 1D DFT along the hidden/embedding dimension (\(\mathcal{F}_{h}\)). Empirically, it was found that extracting real part of the FFT at the end of the entire operation yielded best results. i.e., Real part was only extracted after applying both 1D FFTs: \(\mathcal{F}_{h}\) and \(\mathcal{F}_{seq}\).
The process can be summarised as follows:
\[y=\mathcal{R}(\mathcal{F}_{h}(\mathcal{F}_{seq}(x)))\]
Figure 1: Overview of ViT. Images are first divided into 16x16 patches and the resulting tensor is flattened along its last two dimensions. Patch embeddings are computed for each patch and positional embeddings are added. The resulting tensor is fed into a vanilla transformer. Norm represents Layer Norm and MLP is a Multi Layer Perception. + denotes skip connections used to propagate the identity function.
The rest of the model stays largely the same. Layer Norm is used after each Fourier and Feed Forward layer, and the identity function is propagated through the network via skip connections. A feed-forward layer plays a role similar to \(W^{O}\) in the vanilla transformer, linearly projecting its input.
These blocks are stacked multiple times to get a deeper model.
The model provides an efficient method of mixing embeddings with no additional parameter requirements in the mixing layer, and a much more scalable time complexity of \(O(n\log n)\). Due to the duality of the Fourier transform, each transformer block can be considered to alternately apply the Fourier transform and the inverse Fourier transform, switching between the spatial and the frequency domain, and each feed forward layer can be considered as a convolution (when in the frequency domain) and a multiplication (when in the spatial domain). The negative sign introduced during the inverse Fourier transform can be inverted by the learnable feed forward layer, if it learns and finds it beneficial to do so.
# Fourier Vision Transformer Block

class FFTLayer(nn.Module):
    def __init__(self):
        super().__init__()

    @torch.cuda.amp.autocast(enabled=False)  # FFT requires full precision
    def forward(self, x):
        # 1D FFT along the hidden dim, then along the sequence dim; keep real part
        return torch.fft.fft(torch.fft.fft(x, dim=-1), dim=-2).real

class FiTBlock(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.fft = FFTLayer()
        self.layerNorm1 = nn.LayerNorm(config.embed_dim, eps=1e-12)
        self.ff = nn.Linear(config.embed_dim, config.dim_feedforward)
        self.dense = nn.Linear(config.dim_feedforward, config.embed_dim)
        self.layerNorm2 = nn.LayerNorm(config.embed_dim, eps=1e-12)
        self.dropout = nn.Dropout(config.dropout_rate)
        self.activation = nn.GELU()

    def forward(self, x):
        fftOut = self.fft(x)
        x = self.layerNorm1(fftOut + x)   # skip connection around the FFT mixing
        x = self.ff(x)
        x = self.activation(x)
        x = self.dense(x)
        x = self.dropout(x)
        x = self.layerNorm2(x + fftOut)   # second skip connection
        return x

Figure 2: Overview of proposed architecture. Image and patch embeddings are computed as in ViT. However, the model now uses Fast Fourier Transform to mix the embeddings.
Note: the PyTorch library was used for more efficient computation of FFT on CUDA GPUs. The code blocks are only a snippet of the entire much larger codebase used for this project. The entire code can be found at [https://github.com/siddagra/fourier-project](https://github.com/siddagra/fourier-project)
A much smaller patch embedding size can be used in this methodology, while still having a fast, efficient, and lightweight model. A smaller patch embedding size allows the model to extract more robust localised features for classification. Similar to ViT, a special CLS token is introduced which is passed through a linear projection and a \(GeLU\) activation function to predict the class of the image.
Categorical-cross entropy loss is used for the multi-class classification:
\[-\sum_{c=1}^{M}y_{i,c}\log(p_{i,c})\]
where \(p_{i,c}\) is the predicted probability of image \(i\) belonging to class \(c\) and \(y_{i,c}\) is a binary indicator (0 or 1) of the ground truth label: whether the image is actually of class \(c\).
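The loss can be written out directly, as in the sketch below, which checks the sum against the library implementation (logits and labels are hypothetical):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)             # B = 4 images, M = 10 classes
labels = torch.tensor([3, 1, 7, 0])     # ground-truth class indices
p = logits.softmax(dim=-1)              # predicted probabilities p_{i,c}
manual = -(F.one_hot(labels, 10) * p.log()).sum(-1).mean()
assert torch.allclose(manual, F.cross_entropy(logits, labels))
```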
Results:
Patch size for ViT was set to 4x4 and patch size for FiT was set to 2x2 for the benchmark on CIFAR-10 dataset.
The results do not show much of a difference, likely due to the simple dataset of 10 classes and low-resolution 32x32 images. The major advantage of such scaling schemes only becomes apparent when testing on higher-resolution images. However, training on such larger images and datasets would take a few days of compute.

Still, FiT achieves a higher accuracy while using far fewer parameters and offering a faster inference speed. The parametric efficiency is attained because, unlike self-attention, token mixing in FiT requires no learnable parameters. In addition, FFT scales better to longer sequences due to its \(O(n\log n)\) complexity.
We can also see that FiT scales up in inference time much better than ViT:
\begin{table}
\begin{tabular}{c c c c} \hline Model & Accuracy & Inference Time (ms) & Params \\ \hline ViT & 93.5 & 5.67 & 86M \\ FiT & 94.3 & 3.6 & 38M \\ \hline \end{tabular}
\end{table}
Table 1: Comparison of Vision Transformer (ViT) and Fourier Image Transformer (FiT) on the CIFAR-10 Image Classification Benchmark (32x32 images and 10 classes)
Figure 3: Comparison of ViT (orange line) and FiT (blue line) inference times on larger images
## 3 Scaling Up Convolutions
State-of-the-art Convolutional Neural Networks (CNNs) have been attempting to scale up the kernel size of convolutions in order to increase the performance of the networks. They have found that depth-wise separable convolutions perform better at larger kernel sizes. This is where the convolution is only applied in the spatial dimensions and not the channel dimensions. However, as kernels get larger, the efficiency of computing with the model reduces drastically due to the quadratic complexity. For each convolution kernel of size \(m\times m\) and an image of size \(n\times n\), a total of \(m^{2}\times n^{2}\) operations are required to compute depth-wise separable convolutions. Large neural networks use several large kernels in order to perform well on vision tasks. Thus, a solution is needed if further scalability is required. Convolution using the Fourier Transform presents one such possible solution.
Multiplication in the frequency domain is equivalent to convolutions in the spatial domain.
Proof:
\[\mathcal{F}[f*g](\xi) =\int_{-\infty}^{\infty}g(z)\int_{-\infty}^{\infty}f(x-z)e^{-2\pi ix\xi}dxdz\] \[=\int_{-\infty}^{\infty}g(z)\int_{-\infty}^{\infty}f(y)e^{-2\pi i(y+z)\xi}dydz\] \[=\int_{-\infty}^{\infty}g(z)e^{-2\pi iz\xi}dz\int_{-\infty}^{\infty}f(y)e^{-2\pi iy\xi}dy\] \[=\mathcal{F}[f](\xi)\cdot\mathcal{F}[g](\xi)\]
Via the Fast Fourier Transform, convolution can be computed between a kernel of size \(m\times m\) and an image of size \(n\times n\) with \(O(n^{2}log(n))\) time complexity. With this, the run-time complexity of the convolution no longer depends on the kernel size, but solely on the image resolution, allowing us to scale up the convolution kernel to arbitrarily large sizes.
However, in 2D, element-wise multiplication in the Fourier domain actually results in circular convolutions and not linear convolutions. The circular convolution is periodic and repeats with length \(n\) (image size), whereas a linear convolution would result in an output of size (\(n+(m-1)\)), where \(m\) is the size of the kernel. This causes the image signal to be squeezed in size, causing aliasing artifacts. The \((m-1)\) values wrap around due to the periodicity of the circular convolution and interfere with the actual image signal. To convert this into a linear convolution, we can simply first pad the image by \((m-1)\), to allow the image size to be \(n+(m-1)\) itself, i.e., the period of the circular convolution. These can then be circularly shifted back to their original positions, and the padded values can be cropped out from the result of the convolution operation.

We first pad the input image and the kernel to the same dimensions to obtain a linear convolution:

padded_image = torch.nn.functional.pad(
    self.image, (0, self.kernel_size - 1, 0, self.kernel_size - 1), value=0.0)
padded_kernel = torch.nn.functional.pad(
    self.kernel, (0, self.image_size - 1, 0, self.image_size - 1), value=0.0)

Next, we compute the Fourier transform of the image and kernel using FFT:

# computed over the last two dimensions to keep the operation depth-wise separable
image_ft = torch.fft.rfftn(padded_image, dim=(-1, -2))
kernel_ft = torch.fft.rfftn(padded_kernel, dim=(-1, -2))
Convolutional Neural Networks utilise cross-correlation and not convolution (despite its name). Cross-correlation does not flip the kernel unlike convolution. Definition of cross-correlation:
\[(f*g)(x)=\int_{-\infty}^{\infty}f(x+z)g(z)dz=h(x)\]
\begin{table}
\begin{tabular}{l l l l} \hline Image Size & Patch Size & FiT & ViT \\ \hline
32x32 & 4x4 & 3.6ms & 5.6ms \\
224x224 & 4x4 & 26.6ms & 106.8ms \\
1080x1080 & 16x16 & 40.9ms & DNF \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of ViT and FiT inference times on larger images
While this does not matter for the model, as the kernel is learnable and may learn to flip itself if required, I want this to be directly transferable to state-of-the-art CNN models, so I implement cross-correlation instead of convolution. This can be done by taking the complex conjugate of the Fourier-transformed kernel (kernel_ft):
kernel_ft.imag *= -1 output_ft = image_ft * kernel_ft
Finally, the inverse Fourier transform of this is computed to bring the signal back into the spatial domain.

output = torch.fft.irfftn(output_ft, dim=(-2, -1))
We remove the padding done previously to attain our final cross-correlation output:
output = output[:, :, :self.image_size, :self.image_size]  # crop back to B, C, H, W
Lastly, many CNN models optionally apply a learnable bias term before returning the output:
if bias is not None:
    output += bias.view(1, -1, 1, 1)  # broadcast the per-channel bias over B, H, W
It can be seen that the value of this operation produces similar results to the spatial convolutions under both the \(\mathcal{L}_{\infty}\) and \(\mathcal{L}_{2}\) norms.
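This claim can be verified with a short self-contained check such as the one below, which compares a compact FFT-based cross-correlation against PyTorch's spatial conv2d (itself a cross-correlation); this is a sketch mirroring the steps above, not the project's exact code.

```python
import torch
import torch.nn.functional as F

def fft_xcorr2d(x, k):
    n, m = x.shape[-1], k.shape[-1]         # assume square image and kernel
    size = n + m - 1                        # period of the linear convolution
    x_ft = torch.fft.rfftn(F.pad(x, (0, m - 1, 0, m - 1)), dim=(-2, -1))
    k_ft = torch.fft.rfftn(F.pad(k, (0, n - 1, 0, n - 1)), dim=(-2, -1))
    out = torch.fft.irfftn(x_ft * k_ft.conj(), s=(size, size), dim=(-2, -1))
    return out[..., : n - m + 1, : n - m + 1]   # "valid" region

x = torch.randn(1, 1, 32, 32)
k = torch.randn(1, 1, 7, 7)
linf_error = (fft_xcorr2d(x, k) - F.conv2d(x, k)).abs().max()  # float32 roundoff
```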
The best part about this implementation is that it can be directly used with pre-trained state-of-the-art CNN models without any modifications to the architecture other than switching the convolutional function to call FFT_Convolution(). Table 3 shows the result of replacing the convolutions in RepLKNet [7] with Fourier convolutions and the speedup acquired through it. It also demonstrates that the accuracy on the ImageNet test dataset benchmark remains largely the same.
Figure 4: Comparison with native PyTorch convolution

Figure 5: Scalability of Fourier convolutions [6]
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Accuracy & Inference Time (ms) & Params \\ \hline RepLKNet-base & 83.5 & 41.2 & 79M \\ FFT-Conv-RepLKNet-base & 83.4 & 28.7 & 79M \\ \hline \hline \end{tabular}
\end{table}
Table 3: ImageNet-1k benchmark performance (224x224 image size)
### Structured State Spaces and HiPPO ODEs
A continuous-time linear state space model maps an input signal \(u(t)\) to an output \(y(t)\) through a hidden state \(x(t)\):

\[x^{\prime}(t) =\mathbf{A}x(t)+\mathbf{B}u(t)\] \[y(t) =\mathbf{C}x(t)+\mathbf{D}u(t)\]

\(\mathbf{A}\), \(\mathbf{B}\), \(\mathbf{C}\), and \(\mathbf{D}\) are learnable parameters of the Machine Learning model. State spaces are often used in simpler Machine Learning models such as the Hidden Markov Model (HMM); however, they can also be extended to recurrent neural networks such as GRUs and LSTMs.
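A minimal sketch of simulating this state space with a forward-Euler discretization is shown below (scalar input and output; dimensions are illustrative):

```python
import numpy as np

def simulate(A, B, C, D, u, dt=0.01):
    # A: (N, N), B: (N,), C: (N,), D: scalar; u: sequence of scalar inputs
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = x + dt * (A @ x + B * u_t)  # x'(t) = A x(t) + B u(t)
        ys.append(C @ x + D * u_t)      # y(t)  = C x(t) + D u(t)
    return np.array(ys)
```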
However, recent developments in this field have found benefits in formulating such state spaces as a convolutional kernel (modulated by \(\mathbf{C}\)) formed from a particular set of basis kernels \(K_{n}(t)\) (controlled by \(\mathbf{A},\mathbf{B}\)):
\[K(t)=\sum_{k=0}^{N-1}\mathbf{C}_{k}K_{k}(t)\quad K_{n}(t)=\left(e^{t\mathbf{A}}\mathbf{B} \right)_{n}\]
If A and B are not restricted at all, they suffer from vanishing and exploding gradients, where backpropagated errors accumulate, leading to either extremely large or extremely small gradients during gradient descent/backpropagation. This leads to poor convergence and performance, and to sparser learnable matrices. By constraining the matrices such that their eigenvalues remain close to 1 in magnitude, the matrices remain more stable.
Recent work [9] has found that by restricting A and B to basis functions that are known to approximate sequences well, there can be a considerable improvement in the performance of state spaces (an increase in performance from 60% to 98% on the MNIST-sequential benchmark). Introducing this structure into state spaces improves their long-term memory capacity and also gives some structure to the matrices A and B.
The original work restricted \(A\) to the class of Legendre polynomials, and also introduced \(LagT\), a Laguerre-based variant with an exponentially decaying measure, which assigns less weight to events further in the past.
Structured State Spaces (S4) restricts A and B to:
\[A_{nk}=\left\{\begin{array}{ll}(2n+1)^{1/2}(2k+1)^{1/2}&\text{ if }n>k\\ n+1&\text{ if }n=k,\\ 0&\text{ if }n<k\end{array}\right.\quad B_{n}=(2n+1)^{\frac{1}{2}}\]
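The structured matrices above can be constructed directly, as in this short transcription of the formulas (\(N\) is the state dimension):

```python
import numpy as np

def make_hippo(N):
    A = np.zeros((N, N))
    for n in range(N):
        for k in range(N):
            if n > k:
                A[n, k] = np.sqrt(2 * n + 1) * np.sqrt(2 * k + 1)
            elif n == k:
                A[n, k] = n + 1
    B = np.sqrt(2 * np.arange(N) + 1.0)  # B_n = (2n + 1)^(1/2)
    return A, B
```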
Furthermore, such matrix ODEs can be reformulated into convolutions:
Figure 6: Overview of S4D[?]
\[x^{\prime}(t) =\mathbf{A}x(t)+\mathbf{B}u(t)\] \[y(t) =\mathbf{C}x(t)\] \[K(t) =\mathbf{C}e^{t\mathbf{A}}\mathbf{B}\] \[y(t) =(K*u)(t)\]
Thus, the FFT can be used to convolve the kernel \(K\) (controlled by \(A\) and \(B\)) with the sequence.
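A sketch of this pipeline, materialising \(K(t)=\mathbf{C}e^{t\mathbf{A}}\mathbf{B}\) on a discrete grid and convolving via FFT, is given below; the negative sign on \(A\) is an assumed stable sign convention, and the discretization is illustrative rather than the exact S4 recipe.

```python
import numpy as np
from scipy.linalg import expm

def ssm_kernel(A, B, C, L, dt):
    # K[i] = C exp(-A * i * dt) B, sampled at L time steps (sign convention assumed)
    return np.stack([C @ expm(-A * dt * i) @ B for i in range(L)])

def fft_conv(K, u):
    n = len(u)
    # zero-pad to 2n so the circular convolution equals the linear one
    y = np.fft.irfft(np.fft.rfft(K, 2 * n) * np.fft.rfft(u, 2 * n), 2 * n)
    return y[:n]  # y = (K * u)(t)
```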
Furthermore, the time complexity of computing the kernel convolution can also be reduced to \(O(N+L)\) using this structured state space, as compared to \(O(N^{2}+L)\) in the conventional non-structured state spaces, where \(N\) is the number of parameters in \(K\) and \(L\) is the length of the sequence. This is possible due to the structure introduced to these matrices, which consequently produces a Diagonal Plus Low-Rank matrix for \(K\). This can be converted into 4 weighted dot products by using the Woodbury identity. I will not go into further detail, as it is out of the scope of this report. More importantly, the next section will introduce methods that are more efficient, perform better, and do not require this step.
### Approximating Large Convolutional Kernels
[10] found that S4D kernels perform well predominantly due to two major points: 1) **efficient parameterisation**, as provided by restricting the matrices \(A\) and \(B\) to specific basis functions, and 2) **decaying structure**, which provides a good inductive bias for long-sequence modelling: the magnitude of the convolution kernel decays moving away from the current time step, so that more weight is assigned to the most recent neighbours, because the spectrum of the powers of a matrix decays exponentially.
S4 produces kernels such as this:
They set out to find a different, more efficient parameterisation and found that simply resizing the kernel using bilinear interpolation to match the image/sequence size works slightly better and requires far fewer parameters. The kernel is scaled up to the sequence length/image size, and the convolution is computed using the Fast Fourier Transform with element-wise multiplication.
This can be generalized to 2D or N-D either by taking the outer product of two such kernels, or by flattening the image and modelling it as a longer 1D sequence. They match the performance of ConvNexts [11] while using far fewer parameters.
Code implementation done by me:
Figure 2: Visualization of S4 kernels on (a) Pathfinder-X and (b) Speech Command 10-class. The values in the convolution kernel exhibit a decaying behavior. We only plot the first 4096 positions for better illustration.
class GConv(hk.Module):
    def __init__(self, width, depth=96, bidirectional=True):
        super().__init__()
        self.width = width
        self.depth = depth
        self.bidirectional = bidirectional

    @hk.transparent
    def kernel(self, seq_length):
        scale_count = np.ceil(np.log(seq_length) / np.log(2)).astype(int)
        scales = 1 / 2 ** jnp.arange(scale_count)
        kernel = hk.get_parameter(
            "kernel", (self.width, self.depth), init=hki.RandomNormal())
        concat = []
        for i, scale in enumerate(scales):
            # exponentially decayed copies, resized via bilinear interpolation
            concat.append(jax.image.resize(
                kernel * scale, (self.width * 2 ** i, self.depth),
                method="bilinear"))
        kernel = jnp.concatenate(concat)
        if self.bidirectional:
            # split into forward/backward halves and flip the backward one
            k_fwd, k_bwd = ein.rearrange(kernel, "(k n) d -> k n d", k=2)
            kernel = jnp.concatenate([k_fwd, jnp.flip(k_bwd, axis=0)], axis=0)
        # truncate the kernel to the sequence length
        kernel = jnp.take(kernel, jnp.arange(seq_length), axis=0)
        return kernel

    def __call__(self, signal):
        seq_length = signal.shape[-2]
        # convolution as element-wise multiplication in the frequency domain
        k_f = jnp.fft.rfft(self.kernel(seq_length), axis=-2)
        u_f = jnp.fft.rfft(signal, axis=-2)
        y_f = k_f * u_f
        y = jnp.fft.irfft(y_f, n=seq_length, axis=-2)
        b = hk.get_parameter("bias", (self.depth,), init=jnp.zeros)
        return y + b

Note: I could not test this thoroughly due to time and computational limitations, as training takes very long. However, I did test it over a few epochs, and the loss drops below the level of random guessing, i.e., \(\log(\text{num\_classes})\). This convolution function is directly swappable into ConvNext, replacing the native Conv2D function.
In this case, the original kernel is learned through backpropagation, and the decaying structure is imposed using the scales, which exponentially decay the kernel. Through backpropagation, the kernel learns to produce values that scale well when resized using bilinear interpolation, and the exponential decay assigns more weight to the nearest neighbours of the current timestep.
|
2307.08717 | Untrained neural network embedded Fourier phase retrieval from few
measurements | Fourier phase retrieval (FPR) is a challenging task widely used in various
applications. It involves recovering an unknown signal from its Fourier
phaseless measurements. FPR with few measurements is important for reducing
time and hardware costs, but it suffers from serious ill-posedness. Recently,
untrained neural networks have offered new approaches by introducing learned
priors to alleviate the ill-posedness without requiring any external data.
However, they may not be ideal for reconstructing fine details in images and
can be computationally expensive. This paper proposes an untrained neural
network (NN) embedded algorithm based on the alternating direction method of
multipliers (ADMM) framework to solve FPR with few measurements. Specifically,
we use a generative network to represent the image to be recovered, which
confines the image to the space defined by the network structure. To improve
the ability to represent high-frequency information, total variation (TV)
regularization is imposed to facilitate the recovery of local structures in the
image. Furthermore, to reduce the computational cost mainly caused by the
parameter updates of the untrained NN, we develop an accelerated algorithm that
adaptively trades off between explicit and implicit regularization.
Experimental results indicate that the proposed algorithm outperforms existing
untrained NN-based algorithms with fewer computational resources and even
performs competitively against trained NN-based algorithms. | Liyuan Ma, Hongxia Wang, Ningyi Leng, Ziyang Yuan | 2023-07-16T16:23:50Z | http://arxiv.org/abs/2307.08717v1 | # Untrained neural network embedded Fourier phase retrieval from few measurements
###### Abstract
Fourier phase retrieval (FPR) is a challenging task widely used in various applications. It involves recovering an unknown signal from its Fourier phaseless measurements. FPR with few measurements is important for reducing time and hardware costs, but it suffers from serious ill-posedness. Recently, untrained neural networks have offered new approaches by introducing learned priors to alleviate the ill-posedness without requiring any external data. However, they may not be ideal for reconstructing fine details in images and can be computationally expensive. This paper proposes an untrained neural network (NN) embedded algorithm based on the alternating direction method of multipliers (ADMM) framework to solve FPR with few measurements. Specifically, we use a generative network to represent the image to be recovered, which confines the image to the space defined by the network structure. To improve the ability to represent high-frequency information, total variation (TV) regularization is imposed to facilitate the recovery of local structures in the image. Furthermore, to reduce the computational cost mainly caused by the parameter updates of the untrained NN, we develop an accelerated algorithm that adaptively trades off between explicit and implicit regularization. Experimental results indicate that the proposed algorithm outperforms existing untrained NN-based algorithms with fewer computational resources and even performs competitively against trained NN-based algorithms.
## 1 Introduction
In optics, most detectors can only record the magnitude or intensity of the signal, while losing the phase information [1]. Fourier phase retrieval (FPR) is
an important inverse problem that seeks to recover an unknown signal \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) from its Fourier magnitude \(\mathbf{b}\in\mathbb{R}^{m}\). The problem can be described as
\[\text{Find }\mathbf{x}\in\mathbb{R}^{n}\text{ s.t. }\left|\mathcal{F}\mathbf{x} \right|+\boldsymbol{\delta}=\mathbf{b}, \tag{1}\]
where \(\mathcal{F}\) represents the Fourier transform operator, \(\left|\cdot\right|\) denotes the element-wise absolute value operator, and \(\boldsymbol{\delta}\) is the additive noise. Note that a 2D image can also be expressed as \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) with \(n_{1}\times n_{2}\) pixels, and the corresponding Fourier magnitude as \(\mathbf{b}\in\mathbb{R}^{m}\) with \(m_{1}\times m_{2}\) elements by a lexicographical order, where \(n_{1}\cdot n_{2}=n\) and \(m_{1}\cdot m_{2}=m\). FPR arises in many applications, such as X-ray crystallography, ptychography, and diffraction imaging [2, 3, 4, 5].
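For illustration, the forward model of Eq. (1) can be sketched in a few lines; the oversampling factor and noise level below are illustrative choices, not values from the paper.

```python
import numpy as np

def fourier_magnitude(x, m1, m2, noise_std=0.0):
    b = np.abs(np.fft.fft2(x, s=(m1, m2)))            # |F x| with zero-padding
    return b + noise_std * np.random.randn(m1, m2)    # b = |F x| + delta

x = np.random.rand(128, 128)          # hypothetical 128 x 128 image
b = fourier_magnitude(x, 256, 256)    # 2x oversampled magnitudes
```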
FPR is an ill-posed inverse problem due to the non-uniqueness of its solution. Even though \(m\geq 2n-1\), there can be up to \(2^{n-2}\) solutions for Eq. (1), along with global phase shift, conjugate inversion, and spatial shift [6]. Most existing literature [7, 8, 9, 10] conducts simulation experiments under the oversampled condition of \(m\geq 2n-1\). FPR with few measurements is seriously ill-posed, as the uniqueness of the solution lacks theoretical analysis under the condition of \(m<2n-1\). Classical algorithms such as Gerchberg-Saxton (GS) [11] and hybrid input and output (HIO) [12] often perform poorly under this condition. However, there are still many scenarios [13, 14] where FPR with few measurements is common, especially when the detector's resolution is limited.
To overcome the ill-posedness of FPR, there are various regularization methods that introduce priors of the signal to FPR models. One of the common models is built as
\[\hat{\mathbf{x}}=\operatorname*{arg\,min}_{\mathbf{x}\in\mathbb{R}^{n}}\,\frac {1}{2m}\|\mathbf{b}-\left|\mathcal{F}\mathbf{x}\right|\|_{2}^{2}+\alpha R( \mathbf{x}), \tag{2}\]
where the first and second terms are data fidelity and regularization terms respectively, and the regularization parameter \(\alpha>0\) pursues the tradeoff between these two terms. Explicit regularizers, such as \(\delta_{\Omega}(\mathbf{x})\), \(\|\mathbf{x}\|_{1}\), and \(\|\mathbf{x}\|_{TV}\)[15, 16, 17], often capture certain properties of signals, such as support domain constraints, sparsity, and edge preservation. However, they are usually insufficient for representing complex information in images.
Recently, implicit regularizers based on learning have been proposed to represent more complicated priors of images. According to whether labeled data is required for training, this approach can be divided into supervised and unsupervised learning. [18] uses supervised learning with pairs of data \(\{\mathbf{b}_{i},\mathbf{x}_{i}\}\) to train an end-to-end neural network (NN) that represents an implicit prior. The Plug-and-Play (PnP) framework [19] leverages a pre-trained denoiser to implicitly represent a denoising prior. prDeep [8] adapts the Regularization by Denoising (RED) framework \(R(\mathbf{x})=\frac{1}{2}\mathbf{x}^{T}\left(\mathbf{x}-D(\mathbf{x})\right)\) to solve phase retrieval (PR) problems. RED penalizes the residual difference between the image with its denoised self and the correlations between the image with the residual, where \(D(\mathbf{x})\) denotes the pre-trained DnCNN [20] denoiser. In addition, trained generative networks [21, 22, 23] learn the distribution of images from labeled datasets to construct a generative prior. However, all of these supervised learning-based
algorithms require acquiring a large amount of data with ground truth images. This can be expensive or even impossible in many fields. Moreover, limited generalization can easily occur when the distribution of the training and test data is inconsistent.
Unsupervised learning offers new approaches to impose implicit regularization without requiring labeled data. Among them, untrained NNs [24] propose a new paradigm for training randomly initialized NNs using only a single observation, without any external data. Untrained NNs, such as Deep Image Prior (DIP) [25] and its variant Deep Decoder (DD) [26], demonstrate that their network architecture can capture statistical priors of an image from the observation itself. Net-PGD [27] imposes an untrained generative prior by constraining \(\mathbf{x}\) to the generation space represented by a DD-based network, and solves Gaussian PR using the projected gradient descent (PGD) framework. It is beneficial for Gaussian PR with few measurements compared to hand-crafted priors. However, it is not ideal for solving more challenging FPR. DeepMMSE [10] is an unsupervised learning-based algorithm which utilizes an untrained generative network with dropout to approximate the minimum mean squared error (MMSE) estimator of the image. Although it performs well in terms of FPR, its computational cost is quite high due to numerous iterations and a large number of network parameters. When \(\mathbf{x}\) is a \(128\times 128\) grayscale image, \(\mathbf{b}\) has a size of \(256\times 256\), and the network structures are set as specified in section 3, the number of network parameters in DeepMMSE is 708402, which is almost seven times that of Net-PGD, which has 108160 parameters. This makes it impractical to use when computational resources are limited. Therefore, it is necessary to explore unsupervised learning-based FPR algorithms that can achieve both fast and high-quality reconstruction.
To achieve fast and high-quality reconstruction of FPR using unsupervised learning, we adopt a DD-based network to impose a generative prior for FPR due to its latent ability to reconstruct the image from few measurements. However, unlike Net-PGD, we use the superior alternating direction method of multipliers
Figure 1: Comparison of different algorithms based on untrained NNs conducting FPR on _Cameraman_. The networks do not require any external data for training. We present heat maps of reconstructed errors, which show the difference between the reconstructed image and the ground truth. The color scheme on the right side of the heat map represents the range of error values. The computational times for DD, Net-PGD, DeepMMSE and our algorithm are 58.48s, 68.23s, 739.99s, and 36.02s, respectively.
(ADMM) [7, 28] framework. As shown in Fig. 1, the DD algorithm, which employs a DD-based network to impose a generative prior for FPR within the ADMM framework, outperforms Net-PGD in the PGD framework. In addition, multiple studies [29, 30] indicate that untrained NNs tend to fit smooth images and struggle to recover high-frequency structures. This is further supported by Fig. 1, which shows that three algorithms based on untrained NNs have large reconstruction errors on the high-frequency structures in the image. To address this problem, we combine the DD-based network with total variation (TV) regularization, which is beneficial for preserving the local structures of images [31, 32]. Based on the ADMM framework, we propose a vanilla TV-regularized FPR algorithm with DD, which is called Vanilla-TRAD.
In addition, we develop an accelerated algorithm for achieving fast reconstruction of FPR. In Vanilla-TRAD, the generative network is trained to find a solution within its range for each iteration of FPR. This results in a high computational cost. We have observed that the later iterations of the generative network are unnecessary since they yield solutions with little improvement in quality. Therefore, we introduce an acceleration strategy inspired by the hybrid steepest descent (HSD) method [33, 34]. This strategy dynamically adjusts whether to use DD in the iterations, which improves the reconstruction speed. The proposed algorithm is called Accelerated-TRAD. Fig. 1 illustrates the benefits of our proposed Accelerated-TRAD in terms of reconstruction quality and computational time. The acceleration strategy offers a new approach to adaptively trading off between explicit and implicit regularization.
The performance of Vanilla-TRAD and Accelerated-TRAD is extensively evaluated under various settings. The experimental results of FPR with different measurement lengths and noise levels show that Accelerated-TRAD outperforms existing untrained NN-based algorithms with less computational cost and performs competitively against trained NN-based algorithms. Analysis of the parameters shows that Accelerated-TRAD is not sensitive to parameter selection in the acceleration strategy.
The rest of this paper is organized as follows. In Section 2, Vanilla-TRAD and Accelerated-TRAD are introduced. In Section 3, we compare the proposed algorithms to state-of-the-art algorithms under various settings, and we analyze the selection of regularizers and parameters for the proposed algorithms. Section 4 concludes the paper. The code is available at [https://github.com/Liyuan-2000/TRAD.git](https://github.com/Liyuan-2000/TRAD.git).
## 2 The proposed methods
### Vanilla TV-regularized FPR algorithm under untrained generative prior
We consider a regularized FPR problem under untrained generative prior. According to Eq. (2), the problem can be described as
\[\begin{split}&\min_{\mathbf{x},\boldsymbol{\theta}}\;\frac{1}{2m} \|\mathbf{b}-\left|\mathcal{F}\mathbf{x}\right|\|_{2}^{2}+\alpha R(\mathbf{x} ),\\ &\mathrm{s.t.}\quad\mathbf{G}(\boldsymbol{\theta})-\mathbf{x}=0, \end{split} \tag{3}\]
where \(R(\mathbf{x})\) is an explicit regularizer that enforces desired properties on \(\mathbf{x}\), while \(\mathbf{G}(\boldsymbol{\theta})\) is an untrained generative network with parameters \(\boldsymbol{\theta}\) that restricts \(\mathbf{x}\) in \(Range(\mathbf{G})\).
An untrained generative network can implicitly capture priors of an image \(\mathbf{x}\) through its structure without external data [25, 26, 35]. In this paper, a fixed low-dimensional latent code \(\mathbf{Z}_{0}\in\mathbb{R}^{d_{0}\times c_{0}}\) is chosen as the input of network. The output is \(\mathbf{G}(\boldsymbol{\theta})\in\mathbb{R}^{d_{J}\times c_{out}}\), where \(c_{out}=1\) for a grayscale image, \(c_{out}=3\) for an RGB image, and \(d_{J}\cdot c_{out}=n\). The structure of the network is illustrated by Fig. 2. It is composed of \(J\) layers, each of which consists of \(1\times 1\) convolutions, a ReLU activation function \(\mathrm{relu}(\cdot)\), a channel normalization operator \(\mathrm{cn}(\cdot)\) and an upsampling operator \(\mathbf{U}_{j}\in\mathbb{R}^{d_{j+1}\times d_{j}}\). That is to say,
\[\mathbf{Z}_{j+1}=\mathbf{U}_{j}\mathrm{cn}\left(\mathrm{relu}\left(\mathbf{Z} _{j}\mathbf{W}_{j}\right)\right),j=0,1,\cdots,J-1, \tag{4}\]
where \(\mathbf{W}_{j}\in\mathbb{R}^{c_{j}\times c_{j+1}}\) denotes the weight matrix corresponding to the \(1\times 1\) convolutions. The output layer of the network includes the \(1\times 1\) convolutions and a sigmoid activation function \(\mathrm{sigmoid}(\cdot)\). It is expressed as:
\[\mathbf{G}(\boldsymbol{\theta})=\mathrm{sigmoid}\left(\mathbf{Z}_{J}\mathbf{ W}_{J}\right), \tag{5}\]
where \(\mathbf{W}_{J}\in\mathbb{R}^{c_{J}\times c_{out}}\). All parameters in the network can be vectorized as \(\boldsymbol{\theta}=\mathrm{vec}\left(\mathbf{W}_{0},\mathbf{W}_{1},\cdots, \mathbf{W}_{J}\right)\), where \(\mathrm{vec}(\cdot)\) denotes the vectorization of a matrix. For brevity, we utilize \(\{c_{0},c_{1},\cdots,c_{J}\}\) to represent the structure of a \(J\)-layer network. To reduce the cost of computation and storage, we adopt a network [27] similar to DD [26], where the spatial sizes satisfy \(d_{0}<d_{1}<\cdots<d_{J}\).
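To make the architecture concrete, the following is a minimal PyTorch sketch of such a DD-style generator. The class name, the default channel widths, the use of 2-D feature maps, bilinear upsampling for \(\mathbf{U}_{j}\), and batch normalization standing in for the channel normalization \(\mathrm{cn}(\cdot)\) are all illustrative assumptions on our part, not the authors' exact code.

```python
import torch
import torch.nn as nn

class DeepDecoderSketch(nn.Module):
    """Illustrative DD-style untrained generator following Eqs. (4)-(5).
    Assumptions: 2-D feature maps, bilinear upsampling as U_j, and
    BatchNorm2d standing in for the channel normalization cn(.)."""

    def __init__(self, channels=(128, 128, 128, 128), out_channels=1):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(cin, cout, kernel_size=1)  # 1x1 convolution, weight W_j
            for cin, cout in zip(channels[:-1], channels[1:]))
        self.norms = nn.ModuleList(nn.BatchNorm2d(c) for c in channels[1:])
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.out_conv = nn.Conv2d(channels[-1], out_channels, kernel_size=1)  # W_J

    def forward(self, z):
        # Eq. (4): Z_{j+1} = U_j cn(relu(Z_j W_j)), applied layer by layer.
        for conv, norm in zip(self.convs, self.norms):
            z = self.up(norm(torch.relu(conv(z))))
        # Eq. (5): output layer with sigmoid activation.
        return torch.sigmoid(self.out_conv(z))

# A fixed random latent code Z_0 (e.g., a 16x16 grid) drives the network:
# G = DeepDecoderSketch(); z0 = 0.1 * torch.randn(1, 128, 16, 16); x_hat = G(z0)
```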
As mentioned before, while untrained generative networks are useful for reducing the ill-posedness of PR with few measurements, they struggle to recover high-frequency structures since they tend to fit smooth images [29]. We overcome this limitation by choosing a proper regularizer \(R(\mathbf{x})\). For instance, \(R(\mathbf{x})=\|\mathbf{x}\|_{TV}\) encourages the preservation of local structures [31]. The TV norm \(\|\mathbf{x}\|_{TV}\) can be either anisotropic or isotropic, i.e., \(\|\nabla\mathbf{x}\|_{1}\) or \(\|\nabla\mathbf{x}\|_{2}\). Other regularization terms that protect high-frequency characteristics can also be considered for selection.
Eq. (3) is a non-convex optimization problem with multiple local minima. This makes it challenging for general gradient-based algorithms to find the global
optimal solution. To solve Eq. (3), we use the ADMM [7, 36, 37] framework, which mitigates the difficulty by dividing the problem into several subproblems that can be solved more easily. Note that the data fidelity term in Eq. (3) is not Lipschitz differentiable, making it difficult to design efficient algorithms that use gradients. To address this, we replace it with its smoothed version [38]:
\[f(\mathbf{u})=\frac{1}{2m}\|\sqrt{\mathbf{b}^{2}+\varepsilon\mathbf{1}}-\sqrt{ |\mathcal{F}\mathbf{u}|^{2}+\varepsilon\mathbf{1}}\|_{2}^{2}, \tag{6}\]
where \(\varepsilon>0\) represents the penalization parameter, and \(\mathbf{1}\in\mathbb{R}^{m}\) denotes a vector whose elements are all ones. The gradient of \(f(\mathbf{u})\) is
\[\nabla f(\mathbf{u})=\mathbf{u}-\mathcal{F}^{-1}\left(\frac{\sqrt{\mathbf{b}^ {2}+\varepsilon\mathbf{1}}}{\sqrt{|\mathcal{F}\mathbf{u}|^{2}+\varepsilon \mathbf{1}}}\odot\mathcal{F}\mathbf{u}\right), \tag{7}\]
where \(\odot\) denotes the Hadamard product and \(\mathcal{F}^{-1}\) denotes the inverse Fourier transform operator.
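As a sanity check, the smoothed fidelity term and its gradient can be written in a few lines of NumPy. The 2-D FFT convention, the projection onto the real part, and the function name below are our assumptions for real-valued images on the (zero-padded) measurement grid; the gradient mirrors Eq. (7) as printed.

```python
import numpy as np

def smoothed_loss_and_grad(u, b, eps=1e-3):
    """Smoothed fidelity f(u) of Eq. (6) and its gradient, Eq. (7).
    u: real image on the measurement grid; b: Fourier magnitudes."""
    Fu = np.fft.fft2(u)
    sb = np.sqrt(b ** 2 + eps)
    su = np.sqrt(np.abs(Fu) ** 2 + eps)
    loss = np.sum((sb - su) ** 2) / (2 * b.size)          # Eq. (6), m = b.size
    grad = u - np.real(np.fft.ifft2((sb / su) * Fu))      # Eq. (7)
    return loss, grad
```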
Now we reformulate Eq. (3) into
\[\begin{split}&\min_{\mathbf{x},\boldsymbol{\theta}}\ f\left(\mathbf{G}(\boldsymbol{\theta}) \right)+\alpha\|\mathbf{x}\|_{TV},\\ &\text{s.t.}\quad\mathbf{G}(\boldsymbol{\theta})-\mathcal{P} \mathbf{x}=0,\end{split} \tag{8}\]
where \(\mathcal{P}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) is a zero-padding operator, which appends \(m-n\) zeros after the last element of \(\mathbf{x}\). Introducing \(\mathcal{P}\) aims to execute the Fourier transform using the Fast Fourier transform (FFT) algorithm.
By means of the augmented Lagrangian multiplier method, we transform Eq. (8) into an unconstrained optimization problem:
\[\max_{\boldsymbol{\eta}}\min_{\boldsymbol{\theta},\mathbf{x}}\ L_{\rho}(\boldsymbol{\theta},\mathbf{x},\boldsymbol{\eta})=f(\mathbf{G}(\boldsymbol{\theta}))+\alpha\|\mathbf{x}\|_{TV}+\langle\boldsymbol{\eta},\mathbf{G}(\boldsymbol{\theta})-\mathcal{P}\mathbf{x}\rangle+\frac{\rho}{2}\|\mathbf{G}(\boldsymbol{\theta})-\mathcal{P}\mathbf{x}\|_{2}^{2}, \tag{9}\]
where \(\boldsymbol{\eta}\in\mathbb{R}^{m}\) is the Lagrangian multiplier associated with the equality constraint, and \(\rho>0\) is the coefficient of the quadratic penalty term.

Figure 2: Illustration of the untrained generative network used in this paper.

Following the ADMM framework, we can minimize \(L_{\rho}(\boldsymbol{\theta},\mathbf{x},\boldsymbol{\eta})\) with respect to \(\boldsymbol{\theta}\), \(\mathbf{x}\), and \(\boldsymbol{\eta}\) in an alternating way. Given initial variables \(\mathbf{x}_{0}\), \(\boldsymbol{\eta}_{0}\) and \(k=0\),
\[\mathbf{\theta}_{k+1} =\operatorname*{arg\,min}_{\mathbf{\theta}}\;\left\{f(\mathbf{G}(\bm {\theta}))+\frac{\rho}{2}\|\mathbf{G}(\mathbf{\theta})-\mathcal{P}\mathbf{x}_{k}+ \frac{\mathbf{\eta}_{k}}{\rho}\|_{2}^{2}\right\}, \tag{10a}\] \[\mathbf{x}_{k+1} =\operatorname*{arg\,min}_{\mathbf{x}}\;\left\{\alpha\|\mathbf{x }\|_{TV}+\frac{\rho}{2}\|\mathbf{G}(\mathbf{\theta}_{k+1})-\mathcal{P}\mathbf{x}+ \frac{\mathbf{\eta}_{k}}{\rho}\|_{2}^{2}\right\},\] (10b) \[\mathbf{\eta}_{k+1} =\mathbf{\eta}_{k}+\rho\left(\mathbf{G}(\mathbf{\theta}_{k+1})-\mathcal{ P}\mathbf{x}_{k+1}\right). \tag{10c}\]
The first subproblem (10a) is used to update the network parameters. We solve it in two steps. First, we calculate a vector \(\mathbf{u}\in\mathbb{R}^{m}\), which acts as a substitute for \(\mathbf{G}(\boldsymbol{\theta})\) in Eq. (10a). Since \(f\) is differentiable, we make a first-order approximation of \(f\) at \(\mathcal{P}\mathbf{x}_{k}-\frac{\boldsymbol{\eta}_{k}}{\rho}\) and then calculate \(\mathbf{u}_{k+1}\) using the Karush-Kuhn-Tucker (KKT) condition:
\[\begin{split}\mathbf{u}_{k+1}&=\operatorname*{arg\,min}_{\mathbf{u}}\left\{f(\mathbf{u})+\frac{\rho}{2}\|\mathbf{u}-\mathcal{P}\mathbf{x}_{k}+\frac{\boldsymbol{\eta}_{k}}{\rho}\|_{2}^{2}\right\}\\ &=\operatorname*{arg\,min}_{\mathbf{u}}\;f\left(\mathcal{P}\mathbf{x}_{k}-\frac{\boldsymbol{\eta}_{k}}{\rho}\right)+\left\langle\nabla f\left(\mathcal{P}\mathbf{x}_{k}-\frac{\boldsymbol{\eta}_{k}}{\rho}\right),\mathbf{u}-\mathcal{P}\mathbf{x}_{k}+\frac{\boldsymbol{\eta}_{k}}{\rho}\right\rangle\\ &\quad+\frac{\rho}{2}\|\mathbf{u}-\mathcal{P}\mathbf{x}_{k}+\frac{\boldsymbol{\eta}_{k}}{\rho}\|_{2}^{2}\\ &=\mathcal{P}\mathbf{x}_{k}-\frac{\boldsymbol{\eta}_{k}}{\rho}-\frac{1}{\rho}\nabla f\left(\mathcal{P}\mathbf{x}_{k}-\frac{\boldsymbol{\eta}_{k}}{\rho}\right).\end{split} \tag{11}\]
The second step is to project \(\mathbf{u}_{k+1}\) into \(Range(\mathbf{G})\) by searching for a suitable \(\mathbf{\theta}_{k+1}\), which serves as an internal loop at the \(k+1\)-th iteration.
\[\mathbf{\theta}_{k+1}=\operatorname*{arg\,min}_{\mathbf{\theta}}\;\|\mathbf{G}(\mathbf{ \theta})-\mathbf{u}_{k+1}\|_{2}^{2}. \tag{12}\]
The detailed process for solving Eq. (12) will be explained in subsection 2.2. For convenience, we denote \(\mathbf{v}_{k+1}=\mathbf{G}\left(\mathbf{\theta}_{k+1}\right)\) and substitute \(\mathbf{v}_{k+1}\) for \(\mathbf{G}\left(\mathbf{\theta}_{k+1}\right)\) in Eq. (10b) and Eq. (10c).
Now consider the second subproblem (10b). We make a first-order approximation of \(\|\cdot\|_{TV}\) at \(\mathcal{P}^{-1}\left(\mathbf{v}_{k+1}+\frac{\mathbf{\eta}_{k}}{\rho}\right)\) again by the gradient \(\nabla\|\cdot\|_{TV}\) and then calculate \(\mathbf{x}_{k+1}\) using the KKT condition. The result is
\[\mathbf{x}_{k+1}=\mathcal{P}^{-1}\left(\mathbf{v}_{k+1}+\frac{\mathbf{\eta}_{k}}{ \rho}\right)-\frac{\alpha}{\rho}\Delta\left[\mathcal{P}^{-1}\left(\mathbf{v}_ {k+1}+\frac{\mathbf{\eta}_{k}}{\rho}\right)\right], \tag{13}\]
where \(\Delta\) denotes the Laplace operator, due to \(\nabla\|\mathbf{x}\|_{TV}=\operatorname{div}\left(\frac{\nabla\mathbf{x}}{\|\nabla\mathbf{x}\|}\right)=\frac{1}{\|\nabla\mathbf{x}\|}\Delta\mathbf{x}\), and \(\operatorname{div}(\cdot)\) denotes the divergence.
As to the initialization, we just set \(\mathbf{x}_{0}=\mathcal{F}^{-1}(\mathbf{b})\) and \(\mathbf{\eta}_{0}=\mathbf{0}\). Thus, we have developed an algorithm to solve FPR, called the Vanilla TV-regularized FPR algorithm with DD (Vanilla-TRAD). The detailed steps are summarized in Algorithm 1.
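For readers who prefer code, the following NumPy sketch mirrors the update order of Algorithm 1 under simplifying assumptions: `fit_generator` stands for the internal loops of Eq. (12), `laplacian` for a discrete Laplace operator, `smoothed_loss_and_grad` is the helper sketched above, the real part is taken in the initialization, and the zero-padding \(\mathcal{P}\) is omitted by assuming \(m=n\). None of these helper names come from the paper's released code.

```python
import numpy as np

def vanilla_trad(b, fit_generator, laplacian, alpha=1/884, rho=1.0, K=2000):
    """Sketch of Vanilla-TRAD (Algorithm 1); assumes m == n so that P = I."""
    x = np.real(np.fft.ifft2(b))   # x_0 = F^{-1}(b)
    eta = np.zeros_like(x)         # eta_0 = 0
    for _ in range(K):
        w = x - eta / rho
        _, g = smoothed_loss_and_grad(w, b)
        u = w - g / rho                          # Eq. (11)
        v = fit_generator(u)                     # Eq. (12): project u onto Range(G)
        t = v + eta / rho
        x = t - (alpha / rho) * laplacian(t)     # Eq. (13)
        eta = eta + rho * (v - x)                # Eq. (10c)
    return x
```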
### Accelerated TV-regularized FPR algorithm under untrained generative prior
Vanilla-TRAD effectively combines TV regularization and the untrained generative prior through the ADMM framework. However, its computational cost can be high because solving Eq. (12) is required to update the network parameters for each \(k\), even though parameter reduction has already been considered in the selection of the network structure. In this subsection, we propose an accelerated algorithm to improve the efficiency of Vanilla-TRAD.
Before introducing the acceleration algorithm, we first discuss the specific implementation of Eq. (12) in Vanilla-TRAD. The loss function is \(g(\mathbf{\theta})=\|\mathbf{G}(\mathbf{\theta})-\mathbf{u}_{k+1}\|_{2}^{2}\). Given the initial network parameters \(\mathbf{\theta}_{0}\), learning rate \(\gamma_{0}\), and the number of internal loops \(l_{0}\), the parameters can be updated by minimizing \(g(\mathbf{\theta})\) using gradient descent,
\[\mathbf{\theta}_{k+1}^{(j+1)}=\mathbf{\theta}_{k+1}^{(j)}-\gamma_{k}\nabla g\left(\bm {\theta}_{k+1}^{(j)}\right),j=0,1,\cdots,l_{k}-1, \tag{14}\]
where \(\mathbf{\theta}_{k+1}^{(0)}=\mathbf{\theta}_{k}\) and \(\mathbf{\theta}_{k+1}=\mathbf{\theta}_{k+1}^{(l_{k})}\). Variants of gradient descent, such as SGD [39] and Adam [40], can also be used. The learning rate \(\gamma_{k}\) decreases as \(k\) increases, following the formula \(\gamma_{k}=\gamma_{0}\beta^{\lfloor k/\kappa_{1}\rfloor}\), where \(0<\beta<1\) denotes the decay factor and \(\kappa_{1}\) represents the epoch of decay. The number of internal loops, denoted by \(l_{k}\), is calculated as \(l_{k}=\text{round}\left(l_{0}\zeta^{\lfloor k/\kappa_{2}\rfloor}\right)\), where \(\zeta>1\) denotes the growth factor, \(\kappa_{2}\) represents the epoch of growth, and \(\text{round}(\cdot)\) is the integer-valued function. Note that the number of internal loops \(l_{k}\) gradually increases with \(k\), so the computational time for each iteration also increases with \(k\).
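In code, both schedules are one-liners; the defaults below are the values reported in Section 3, and the function name is ours.

```python
def schedules(k, gamma0=0.005, beta=0.5, kappa1=500, l0=5, zeta=1.2, kappa2=500):
    """Learning-rate decay and internal-loop growth used around Eq. (14)."""
    gamma_k = gamma0 * beta ** (k // kappa1)   # gamma_k = gamma_0 * beta^{floor(k/kappa_1)}
    l_k = round(l0 * zeta ** (k // kappa2))    # l_k = round(l_0 * zeta^{floor(k/kappa_2)})
    return gamma_k, l_k
```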
Referring to the HSD method [34], we modify the 4th line of Vanilla-TRAD (Algorithm 1) to a weighted combination of \(\mathbf{u}_{k+1}\) and \(\mathbf{G}(\boldsymbol{\theta}_{k+1})\):
\[\mathbf{v}_{k+1}=\mu_{k}\mathbf{G}\left(\mathbf{\theta}_{k+1}\right)+(1-\mu_{k}) \mathbf{u}_{k+1}. \tag{15}\]
During the early iterations, we expect the untrained generative prior to play its full role so as to obtain a relatively stable intermediate solution. Once stability is reached, the weight \(\mu_{k}\) can be reduced to 0, since it is unnecessary to project \(\mathbf{u}_{k+1}\) into \(Range(\mathbf{G})\) when it is close to the optimal solution. Thus, the weight \(\mu_{k}\) can be designed to be 1 in the early iterations and decrease to 0 later. We choose a specific formula given by
\[\mu_{k}=\exp\left\{-\left(\frac{\max\left\{0,k-\kappa_{3}\right\}}{\lambda} \right)^{2}\right\}, \tag{16}\]
where \(\kappa_{3}\) denotes the iteration step at which the weight \(\mu_{k}\) begins to decrease, and \(\lambda\) represents the rate of decrease. As shown in Fig. 3, a small \(\kappa_{3}\) indicates early weight decay, while a small \(\lambda\) indicates fast weight decay.
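The weight schedule of Eq. (16) and the combination step of Eq. (15) are equally compact in code; the defaults follow the paper's recommended \(\kappa_{3}=1000\) and \(\lambda=10\), and the function names are ours.

```python
import math

def mu(k, kappa3=1000, lam=10.0):
    """Eq. (16): mu_k = 1 for k <= kappa_3, then decays smoothly to 0."""
    return math.exp(-((max(0, k - kappa3) / lam) ** 2))

def combine(G_theta, u, k):
    """Eq. (15): v_{k+1} = mu_k * G(theta_{k+1}) + (1 - mu_k) * u_{k+1}."""
    m = mu(k)
    return m * G_theta + (1.0 - m) * u
```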
**Algorithm 2** Accelerated-TRAD

**Input:** \(\mathbf{b}\), \(\mathbf{x}_{0}=\mathcal{F}^{-1}(\mathbf{b})\), \(\boldsymbol{\eta}_{0}=\mathbf{0}\), \(\rho>0\), \(\varepsilon>0\), \(K\), \(\boldsymbol{\theta}_{0}\), \(\gamma_{0}\), \(l_{0}\), \(\beta\), \(\zeta\), \(\lambda\), \(\kappa_{1}\), \(\kappa_{2}\), \(\kappa_{3}\)

**Output:** \(\mathbf{x}_{K}\)

1: **for** \(k=0:K-1\)
2: \(\quad\mathbf{u}_{k+1}=\mathcal{P}\mathbf{x}_{k}-\frac{\boldsymbol{\eta}_{k}}{\rho}-\frac{1}{\rho}\nabla f\left(\mathcal{P}\mathbf{x}_{k}-\frac{\boldsymbol{\eta}_{k}}{\rho}\right)\)
3: \(\quad\boldsymbol{\theta}_{k+1}^{(0)}=\boldsymbol{\theta}_{k}\)
4: \(\quad\gamma_{k}=\gamma_{0}\beta^{\lfloor k/\kappa_{1}\rfloor}\)
5: \(\quad l_{k}=\mathrm{round}\left(l_{0}\zeta^{\lfloor k/\kappa_{2}\rfloor}\right)\)
6: \(\quad\)**for** \(j=0:l_{k}-1\)
7: \(\qquad\boldsymbol{\theta}_{k+1}^{(j+1)}=\boldsymbol{\theta}_{k+1}^{(j)}-\gamma_{k}\nabla g\left(\boldsymbol{\theta}_{k+1}^{(j)}\right)\)
8: \(\quad\)**end**
9: \(\quad\boldsymbol{\theta}_{k+1}=\boldsymbol{\theta}_{k+1}^{(l_{k})}\)
10: \(\quad\mu_{k}=\exp\left\{-\left(\frac{\max\{0,k-\kappa_{3}\}}{\lambda}\right)^{2}\right\}\)
11: \(\quad\mathbf{v}_{k+1}=\mu_{k}\mathbf{G}\left(\boldsymbol{\theta}_{k+1}\right)+(1-\mu_{k})\mathbf{u}_{k+1}\)
12: \(\quad\mathbf{x}_{k+1}=\mathcal{P}^{-1}\left(\mathbf{v}_{k+1}+\frac{\boldsymbol{\eta}_{k}}{\rho}\right)-\frac{\alpha}{\rho}\Delta\left[\mathcal{P}^{-1}\left(\mathbf{v}_{k+1}+\frac{\boldsymbol{\eta}_{k}}{\rho}\right)\right]\)
13: \(\quad\boldsymbol{\eta}_{k+1}=\boldsymbol{\eta}_{k}+\rho\left(\mathbf{v}_{k+1}-\mathcal{P}\mathbf{x}_{k+1}\right)\)
14: **end**
Note that Accelerated-TRAD removes the network in later iterations. This benefits both the preservation of high-frequency structures in images and the
reduction of computational cost. The proposed accelerated algorithm, referred to as Accelerated-TRAD, is shown in Algorithm 2.
**Discussions.** 1. Accelerated-TRAD is an accelerated version of Vanilla-TRAD. For the first \(\kappa_{3}\) iterations, Accelerated-TRAD coincides with Vanilla-TRAD; in the case \(\mu_{k}\equiv 1\), the two algorithms are equivalent.
2. The main complexity of the proposed algorithms lies in the internal loops implemented by \(\mathbf{G}(\boldsymbol{\theta})\). In Accelerated-TRAD, the removal of the network in some iterations can effectively reduce computational complexity. Additionally, removing the network in later iterations is more effective than doing so earlier. This is because there are more internal loops in the later iterations, resulting in a higher time cost.
3. Accelerated-TRAD introduces two additional parameters compared to Vanilla-TRAD. In practice, they can simply be chosen as \(\kappa_{3}=1000\) and \(\lambda=10\). The experiments in subsection 3.4 demonstrate that the algorithm is not sensitive to this parameter selection.
## 3 Experiment
Performance evaluation is conducted on FPR at different measurement lengths and noise levels. We compare the proposed algorithms with state-of-the-art algorithms on PR, especially learning-based methods, including prDeep [8], Net-PGD [27] and DeepMMSE [10].
* prDeep [8]: A supervised learning-based algorithm that uses the pre-trained DnCNN [20] denoiser to solve PR problems with different measurement models and noise levels. It integrates the DnCNN denoiser into the RED framework with the HIO initialization.
* Net-PGD [27]: An unsupervised learning-based algorithm that combines the projected gradient descent (PGD) framework with a DD-based network to solve Gaussian PR. It implements gradient descent in external loops to produce the estimated solution, which is then projected into the network range using internal loops.
* DeepMMSE [10]: An unsupervised learning-based algorithm that utilizes an untrained generative network with dropout to approximate the minimum mean squared error (MMSE) estimator of the image in PR. It also requires the HIO initialization, as described in [8].
The experimental data, shown in Fig. 4, consists of \(128\times 128\) grayscale images cropped or resized from standard \(256\times 256\) images, including 2 natural images, 2 remote sensing images, and 2 microscopic images. We evaluate the performance of the compared algorithms using the Peak Signal to Noise Ratio (PSNR), Structural Similarity (SSIM) of the reconstructed results, as well as the time cost of calculation. Due to the randomization factors involved in initializing
and updating network parameters, the algorithms may produce different results in different runs. To ensure a fair comparison, each algorithm is run five times. The average PSNR, SSIM and time cost of the five reconstructed results are then calculated.
The parameters in the proposed algorithms are as follows. We use the isotropic TV norm in our experiments. The initial learning rate is \(\gamma_{0}=0.005\), and the decay factor and epoch are \(\beta=0.5\) and \(\kappa_{1}=500\). The initial number of internal loops is \(l_{0}=5\), and the growth factor and epoch are \(\zeta=1.2\) and \(\kappa_{2}=500\). The parameters \(\boldsymbol{\theta}_{0}\) in the network are initialized according to He initialization [41]. The elements of the latent code \(\mathbf{Z}_{0}\) are generated from the Gaussian distribution \(\mathcal{N}(0,0.01)\). Additionally, we set \(\rho=1\), \(\varepsilon=0.001\) and \(\alpha=\frac{1}{884}\) by default. The structure of the network is designed as \(\{128,128,128,128\}\), which is a 3-layer network. The parameters in Accelerated-TRAD are set to \(\kappa_{3}=1000\) and \(\lambda=10\), which will be analyzed in subsection 3.4.
Following [10], the network inputs of DeepMMSE are generated from the Gaussian distribution \(\mathcal{N}(0,0.1)\). For Net-PGD, its network structure and internal loops are consistent with our algorithms. The learning rate of its external loops is initialized to 0.5, and the decay factor and epoch are set to 0.7 and 500, respectively. \(\sigma_{w}\) in prDeep is set to 0.1 when no noise is added to the Fourier magnitude. Both Net-PGD and the proposed algorithms are run for 2000 iterations, while DeepMMSE is run for 50000 iterations. prDeep is executed for 200 iterations four times, once for each of the denoiser networks trained at standard deviations of 50, 40, 20, and 10. All NNs adopt the Adam optimizer. All algorithms are run on NVIDIA A-100 GPUs; prDeep uses MATLAB R2021a, while the other algorithms use the PyTorch framework with Python 3.9.
Figure 4: The data used in the experiment. First column: 2 natural images. Second column: 2 remote sensing images. Third column: 2 microscopic images.
### FPR at different sampling ratios
This subsection compares algorithms for FPR at six sampling ratios ranging from 1.5 to 2.0 in steps of 0.1. The sampling ratio, denoted as \(r\), represents the ratio of the measurement length to the image length in each dimension. For 2D images with equal width and height, \(r=\sqrt{m}/\sqrt{n}\). Table 1 shows the quantitative results for the compared algorithms at sampling ratios of 1.7 and 1.9. Additionally, a visual comparison between the algorithms at the six sampling ratios can be seen in Fig. 5. For each algorithm, the images with the highest PSNR over five runs are shown.
Table 1 and Fig. 5 indicate that the reconstructed results of Accelerated-TRAD have almost the highest PSNR, SSIM and visual quality. Among all unsupervised learning-based algorithms, Accelerated-TRAD performs the best in terms of both the quality of the reconstructed results and computational time. It also performs competitively against supervised learning-based prDeep with only a minimal increase in computational time, no more than 10 seconds.
### FPR from noisy measurements
This subsection compares the robustness of algorithms for FPR. The measurement model with noise is:
\[\mathbf{b}=\left|\mathcal{F}\mathbf{x}\right|+\boldsymbol{\delta}, \tag{17}\]
where \(\boldsymbol{\delta}\sim\mathcal{N}(0,\sigma^{2})\) stands for Gaussian noise with a standard deviation of \(\sigma\).
When the sampling ratios are fixed, we compare algorithms at different noise levels. The noise level is represented by the Signal Noise Ratio (SNR), which is defined as
\[\text{SNR}=20\text{log}_{10}\frac{\text{Var}\left(\left|\mathcal{F}\mathbf{x} \right|\right)}{\sigma^{2}}, \tag{18}\]
where \(\text{Var}\left(\left|\mathcal{F}\mathbf{x}\right|\right)\) is the variance of \(\left|\mathcal{F}\mathbf{x}\right|\). A low SNR indicates a high level of noise.
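For reproducibility, noisy magnitudes following Eq. (17) and the SNR of Eq. (18) can be generated as below; the oversampled-FFT convention via `np.fft.fft2(x, s=...)` and the function name are our assumptions.

```python
import numpy as np

def noisy_measurements(x, sigma, pad_shape=(256, 256)):
    """Eq. (17): Gaussian-corrupted oversampled Fourier magnitudes."""
    b_clean = np.abs(np.fft.fft2(x, s=pad_shape))
    b = b_clean + np.random.normal(0.0, sigma, size=b_clean.shape)
    snr = 20 * np.log10(np.var(b_clean) / sigma ** 2)   # Eq. (18)
    return b, snr
```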
Table 1: The PSNR, SSIM and total time cost of the reconstructed results by different algorithms at sampling ratios of 1.7 and 1.9.

| \(r\)=1.7 | Cameraman | Stream and Bridge | Stadem | Tuson | Butterfly Nebula | Pillars of Creation | Total time (s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| prDeep [8] | 0.30/0.95 | 35.64/0.88 | 24.26/0.64 | 21.35/0.34 | **22.62/0.52** | 25.62/0.66 | **29.78** |
| Net-PGD [27] | 14.87/0.28 | 13.24/0.25 | 13.42/0.22 | 19.00/0.28 | 19.86/0.38 | 18.71/0.31 | 57.21 |
| DeepMMSE [10] | 38.53/0.97 | 20.24/0.53 | 25.22/0.75 | 24.57/**0.39** | 19.34/0.42 | 20.86/0.48 | 78.486 |
| Vanilla-TRAD | 25.92/0.89 | 22.66/0.70 | 21.76/0.63 | 21.53/0.37 | 21.59/0.48 | 24.58/0.61 | 65.24 |
| Accelerated-TRAD | **51.12/1.00** | **45.50/1.00** | **31.03/0.79** | **25.09/0.57** | 19.53/0.45 | **37.67/0.92** | 31.03 |

| \(r\)=1.9 | Cameraman | Stream and Bridge | Stadem | Tuson | Butterfly Nebula | Pillars of Creation | Total time (s) |
| --- | --- | --- | --- | --- | --- | --- | --- |

Table 2: The PSNR and SSIM of the reconstructed results by different algorithms on _Pillars of Creation_ at different noise levels. The best and second best results at each column are **boldfaced** and underlined respectively.

| | \(r\)=1.6, SNR=30 | \(r\)=1.6, SNR=20 | \(r\)=1.6, SNR=10 | \(r\)=1.8, SNR=30 | \(r\)=1.8, SNR=20 | \(r\)=1.8, SNR=10 |
| --- | --- | --- | --- | --- | --- | --- |
| prDeep [8] | 19.34/0.30 | 18.80/0.29 | 18.00/0.22 | 18.88/0.30 | 19.57/0.31 | 18.75/0.25 |
| Net-PGD [27] | 18.94/0.31 | 18.41/0.23 | 17.24/0.15 | 19.17/0.35 | 18.70/0.28 | 18.14/0.18 |
| DeepMMSE [10] | 19.29/0.29 | 19.16/0.24 | 17.65/0.16 | 20.46/0.33 | 19.28/0.25 | 18.63/0.19 |
| Vanilla-TRAD | **21.09/0.41** | 19.71/0.28 | **19.54/0.31** | **22.56/0.47** | **20.79/0.35** | 20.09/0.34 |
| Accelerated-TRAD | 20.33/0.33 | **20.68/0.36** | 19.05/0.28 | 21.54/0.36 | 20.45/0.32 | **20.49/0.35** |
Figure 5: The reconstructed results of different algorithms at various sampling ratios, along with the corresponding PSNR(dB)/SSIM for each image annotated below.
Table 2 displays the PSNR and SSIM of the reconstructed results using different algorithms at SNRs of 30, 20, and 10 for sampling ratios of 1.6 and 1.8. It is evident that the proposed algorithms are more robust than the others: the PSNR of the results reconstructed by Vanilla-TRAD or Accelerated-TRAD is at least 1 dB higher than that of the other three algorithms.
### Why combine TV with DD?
We conduct two numerical experiments to explore the reasons for combining TV with DD. (1) To analyze the effectiveness of combining explicit and implicit regularization, we compare the proposed algorithms with related algorithms that use no regularization, TV regularization only, and DD regularization only. (2) To assess the necessity of combining TV and DD specifically, we replace TV with an explicit sparse or low-rank regularizer, and DD with SIREN [42] or DIP [25], and compare these alternatives to Accelerated-TRAD. All compared algorithms are executed under the ADMM framework. The average PSNR of the six reconstructed images in the experimental data is calculated.
In the first experiment, the quantitative results of five algorithms at six sampling ratios are shown in Fig. 6. A visual comparison of the reconstructed _Cameraman_ at the sampling ratio of 1.7 can be seen in Fig. 7. Accelerated-TRAD performs the best and accurately recovers both high- and low-frequency information. While explicit TV regularization is effective for FPR at high sampling ratios, it may cause artifacts. In contrast, implicit DD regularization is good for FPR at low sampling ratios but leads to oversmoothed patterns. Vanilla-TRAD, which combines DD and TV regularization, can slightly alleviate this issue and improve the PSNR of reconstructed results. The accelerated technique used in Accelerated-TRAD not only reduces the time cost but also effectively plays the role of the two priors.
Figure 6: Quantitative results of our algorithms as well as related algorithms with no regularization, TV regularization only, and DD regularization only.

In the second experiment, the TV regularization is replaced with sparse regularization by the \(l_{1}\) norm \(\|\mathbf{x}\|_{1}\) and low-rank regularization by the nuclear norm \(\|\mathbf{x}\|_{*}\), respectively. Additionally, the DD is replaced with SIREN and DIP, respectively. The corresponding network structures are described as follows. The learning rate and number of iterations are consistent with Accelerated-TRAD.
* SIREN [42]: An implicitly defined network is utilized to represent continuous signals and their derivatives. The network is composed of multilayer perceptrons (MLPs) with periodic sinusoidal activation. It consists of four hidden layers with 128 channels, takes image coordinates as input, and outputs pixel values.
* DIP [25]: An untrained generative network uses a U-Net-like "hourglass" architecture with skip connections. It includes upsampling and downsampling layers with channel numbers and kernel sizes of {16, 32, 64, 128, 128} and {3, 3, 3, 3, 3}, respectively. Upsampling and downsampling are implemented using nearest neighbor interpolation and convolution strides. Skip connections have channel numbers and kernel sizes of {4, 4, 4, 4, 4} and {1, 1, 1, 1}, respectively. The network input follows a uniform distribution of \(\mathcal{U}(0,0.1)\).
Fig. 8 shows the quantitative results of Accelerated-TRAD and its alternatives using different regularization combinations. It is apparent that Accelerated-TRAD, which combines TV and DD, outperforms the other regularization combinations. Additionally, the number of parameters in DD is only one-tenth that in DIP, and slightly higher than that in SIREN.
Figure 7: Visual comparison of our algorithms as well as related algorithms with no regularization, TV regularization only, and DD regularization only at the sampling ratio of 1.7.

While the experimental results have shown the reasons for combining TV and DD, the underlying mechanism behind this combination has yet to be theoretically explained and is worth further research.
### The impact of parameters
This subsection examines the impact of the parameters \(\lambda\) and \(\kappa_{3}\) in Accelerated-TRAD. Firstly, we explore the influence of \(\kappa_{3}\), which represents the iteration step at which the weight \(\mu_{k}\) begins to decrease. We use Vanilla-TRAD to perform FPR on the experimental data, due to the consistency of Vanilla-TRAD and Accelerated-TRAD in the first \(\kappa_{3}\) iterations. Fig. 9 shows the PSNR of the reconstructed images during the iterations. We observe that the PSNR of almost all images stabilizes when \(k>500\). This indicates that Accelerated-TRAD is not sensitive to \(\kappa_{3}\), as long as \(\kappa_{3}>500\). Furthermore, \(\kappa_{3}\) generally does not need to change with different \(\mathbf{x}\).
Figure 8: Quantitative results of Accelerated-TRAD and its alternatives using other explicit and implicit regularization combinations. The number of parameters in DD, SIREN, and DIP are 108160, 50049, and 1037193, respectively.

Figure 9: PSNR of reconstructed results during the iterations of Vanilla-TRAD at the sampling ratio of 2.0.

Next, we consider the impact of \(\lambda\), which determines the decrease rate of \(\mu_{k}\). When \(\kappa_{3}=1000\), we apply Accelerated-TRAD to _Pillars of Creation_ at \(\lambda\) values of 10, 100, and 200, respectively, and compare with Vanilla-TRAD. The PSNR of the reconstructed images during the iterations is shown in Fig. 10. It can be observed that Accelerated-TRAD is superior to Vanilla-TRAD in terms of both reconstruction quality and computational time. In Accelerated-TRAD, the PSNR of the reconstructed results increases at different rates for different values of \(\lambda\), but eventually reaches similar levels. A small \(\lambda\) tends to result in a low computational time, due to the rapid decrease of the weight \(\mu_{k}\).
## 4 Conclusion
This paper proposes an untrained NN-based algorithm called Vanilla-TRAD that effectively combines explicit TV regularization and implicit untrained generative prior with the ADMM framework, along with its accelerated version, Accelerated-TRAD. The untrained generative prior is beneficial for FPR with few measurements, while TV regularization is beneficial for recovering high-frequency information in the image. Numerical experiments confirm the effectiveness and necessity of combining the two priors. The acceleration technique in Accelerated-TRAD reduces the computational cost and enhances the role of the two priors in improving the reconstructed quality. Extensive experiments demonstrate that Accelerated-TRAD achieves fast and high-quality reconstruction of FPR under various settings. This paper presents a new approach to FPR under limited measurements and computational resources. In the future, we plan to explore a general paradigm that combines the prior based on image gradient with the untrained generative prior.
Figure 10: Comparison of Vanilla-TRAD and Accelerated-TRAD with different \(\lambda\). The algorithms perform FPR on _Pillars of Creation_ at the sampling ratio of 2.0. The computational times for Vanilla-TRAD and Accelerated-TRAD with \(\lambda=10\), \(\lambda=100\) and \(\lambda=200\) are 70.77s, 35.37s, 41.22s and 47.67s, respectively. |
2301.13091 | Optimal Approximation Complexity of High-Dimensional Functions with
Neural Networks | We investigate properties of neural networks that use both ReLU and $x^2$ as
activation functions and build upon previous results to show that both analytic
functions and functions in Sobolev spaces can be approximated by such networks
of constant depth to arbitrary accuracy, demonstrating optimal order
approximation rates across all nonlinear approximators, including standard ReLU
networks. We then show how to leverage low local dimensionality in some
contexts to overcome the curse of dimensionality, obtaining approximation rates
that are optimal for unknown lower-dimensional subspaces. | Vincent P. H. Goverse, Jad Hamdan, Jared Tanner | 2023-01-30T17:29:19Z | http://arxiv.org/abs/2301.13091v1 | # Optimal approximation complexity of high-dimensional functions with neural networks
###### Abstract.
We investigate properties of neural networks that use both ReLU and \(x^{2}\) as activation functions and build upon previous results to show that both analytic functions and functions in Sobolev spaces can be approximated by such networks of constant depth to arbitrary accuracy, demonstrating optimal order approximation rates across all nonlinear approximators, including standard ReLU networks. We then show how to leverage low local dimensionality in some contexts to overcome the curse of dimensionality, obtaining approximation rates that are optimal for unknown lower-dimensional subspaces.
Key words and phrases: Machine Learning, Universal Approximation, bi-activation, Neural Networks. 2020 Mathematics Subject Classification: 41A10.
## 1. Introduction
The number of parameters needed to approximate smooth high-dimensional functions, \(W^{n,\infty}([0,1]^{d})\), within a prescribed \(\epsilon\) accuracy in the \(\ell_{\infty}\) norm was lower bounded by [6] to have a dependence on \(\epsilon\) that is proportional to \(\epsilon^{-d/n}\). [16] subsequently showed that a simple feed-forward neural network with the \(\text{ReLU}(x):=\max\{0,x\}\) nonlinear activation is nearly optimal in terms of the number of parameters needed, requiring only \(c(n,d)\epsilon^{-d/n}\log(1/\epsilon)\) parameters1, see [16][Theorem 1]. Subsequently, [4] reduced the number of parameters needed by a feedforward neural network to achieve \(\epsilon\) accuracy to be proportional to \(c(n,d)\epsilon^{-d/n}\log(\log(1/\epsilon))\) by using trainable rational functions as nonlinear activations, see [4][Theorem 4].
Footnote 1: The function \(c(n,d)\) depends on the smoothness, \(n\), and the dimension of \(f(x)\), but not on the desired accuracy \(\epsilon\).
Here we further adapt the proof by Yarotsky to achieve the optimal dependence of \(\epsilon^{-d/n}\) proven by [6], using a feedforward network that makes use of _two_ nonlinear activations (henceforth referred to as _bi-activation_ networks). Specifically, we allow some layers to use the ReLU nonlinear activation to localize \(f(x)\) through a partition of unity, and the quadratic activation \(x^{2}\) to allow for efficient computation of localized high degree polynomial approximations.
Specifically, following the notation of [6] and [16], we consider nonlinear approximation methods \(M_{p}(a)\) that have a continuous dependence2 on the \(p\) parameters \(a\in\mathbb{R}^{p}\) and which approximate high dimensional functions \(f(\cdot)\) within the unit ball of the Sobolev space \(W^{n,\infty}([0,1]^{d})\),
Footnote 2: The continuous dependence of \(M_{p}(a)\) on \(a\) is introduced in [6] to avoid space filling curves and can be viewed as ensuring the parameters \(a\) can be learned from a sufficiently near estimate; for details see [6].
\[||f||_{W^{n,\infty}([0,1]^{d})}=\max_{\mathbf{n}:|\mathbf{n}|\leq n}\operatorname{ess\,sup}_{\mathbf{x}\in[0,1]^{d}}|D^{\mathbf{n}}f(\mathbf{x})| \tag{1.1}\]
In this paper we consider the problem of constructing a feed-forward bi-activation network \(M_{\mathcal{C}_{F},bi-\sigma}(a)\) with as few parameters \(\mathcal{C}_{F}\) as possible such that

\[\min_{a\in\mathbb{R}^{\mathcal{C}_{F}}}\max_{x\in[0,1]^{d}}|f(x)-M_{\mathcal{C}_{F},bi-\sigma}(a)(x)|\leq\epsilon.\]

Our first result shows that bi-activation networks attain the optimal parameter count of [6] for functions in the unit ball of \(W^{n,\infty}([0,1]^{d})\).

**Theorem 1.3** (Optimal approximation order bi-activation networks).: _For any \(d,n\in\mathbb{N}\), \(\epsilon\in(0,1)\), and function \(f(x)\) with \(||f||_{W^{n,\infty}([0,1]^{d})}\leq 1\), there exists \(M_{\mathcal{C}_{F},bi-\sigma}(a)\) formed as a feed-forward network with \(\mathcal{C}_{F}=C_{3}(d,n)\epsilon^{-d/n}\) elements \(a\in\mathbb{R}^{\mathcal{C}_{F}}\) for which_

\[\min_{a\in\mathbb{R}^{\mathcal{C}_{F}}}\max_{x\in[0,1]^{d}}|f(x)-M_{\mathcal{C}_{F},bi-\sigma}(a)(x)|\leq\epsilon\]

_where \(C_{3}(d,n)\) may depend on \(d\) and \(n\), but not on \(\epsilon\)._
**Theorem 1.4** (Optimal approximation order bi-activation networks: Analytic functions).: _Let \(f(x)\) be an analytic function on \([0,1]^{d}\), characterised \([1]\) by_
\[\sup_{x\in[0,1]^{d}}\left|\frac{\partial^{n}f}{\partial x^{n}}(x)\right|\leq C _{f}^{|\mathbf{n}|+1}\mathbf{n}!\quad\text{for}\;\;\text{all}\;n \tag{1.2}\]
_where \(C_{f}\) depends on the particular choice of \(f(x)\). Then for any \(d\), and \(\epsilon\in(0,1)\), there exists \(M_{\mathcal{C}_{A},bi-\sigma}(a)\) formed as a feed-forward network with \(\mathcal{C}_{A}=C_{4}(d,C_{f})\left((2\epsilon)^{\log^{-\frac{1}{2}}\left( \frac{2^{d}}{\epsilon}\right)}\log^{\frac{d}{2}}\left(\frac{1}{\epsilon} \right)\right)\) elements \(a\in\mathbb{R}^{\mathcal{C}_{A}}\) for which_
\[\min_{a\in\mathbb{R}^{\mathcal{C}_{A}}}\max_{x\in[0,1]^{d}}|f(x)-M_{\mathcal{ C}_{A},bi-\sigma}(a)(x)|\leq\epsilon\]
_where \(C_{4}(d,C_{f})\) does not depend on \(\epsilon\)._
Theorem 1.4 differs from Theorem 1.3 primarily in the lack of dependence on smoothness \(n\) as the number of parameters \(\mathcal{C}_{\mathcal{A}}\) needed in the network has been minimized over all admissible \(n\). The consequence of choosing the optimal smoothness \(n\) is that the \(\epsilon\) and \(d\) dependence of the number of parameters \(\mathcal{C}_{\mathcal{A}}\) decreases from \((\epsilon^{-1/n})^{d}\) to predominantly \(\log(1/\epsilon)^{d/2}\).
Next, for \(d_{\text{eff}}<d\) we define the canonical subspaces of \([0,1]^{d}\) of dimension \(d_{\text{eff}}\); that is,

\[\chi_{d_{\text{eff}},e}^{d}:=\{x\in[0,1]^{d}:\;x_{i}=0\;\text{for all}\;i\notin e\},\]

where \(e\) is a subset of \(\{1,\ldots,d\}\) with \(d_{\text{eff}}\) elements, and \(I_{d_{\text{eff}}}^{d}\) is the collection of all such \(e\). Then if \(f(x)\) is nonzero on only one known subspace \(\chi_{d_{\text{eff}},e}^{d}\), Lemma 2.2 holds. In the case that \(f(x)\) is nonzero on the union of all \(\binom{d}{d_{\text{eff}}}\) such subspaces
\[\bar{\chi}_{d_{\text{eff}}}^{d}:=\bigcup_{e\in I}\chi_{d_{\text{eff}},e}^{d},\]
the number of parameters \(\mathcal{C}_{M}\) needed to compute an \(\epsilon\) approximation of \(f(x)\) over one or all canonical subspaces is given by \(\mathcal{C}_{M}=C_{5}(d,d_{\text{eff}},n)\epsilon^{-d_{\text{eff}}/n}\) (see Lemma 2.2 and Theorem 1.5).
**Theorem 1.5** (Optimal approximation order bi-activation networks: low-dimensional subspaces).: _For function \(f(x)\) with \(||f||_{W^{n,\infty}([0,1]^{d})}\leq 1\) where \(x\) is restricted to \(\bar{\chi}_{d_{\text{eff}}}^{d}\), there exists \(M_{\mathcal{C}_{M},bi-\sigma}(a)\) formed as a feed-forward network with \(\mathcal{C}_{M}=C_{5}(d,d_{\text{eff}},n)\epsilon^{-d_{\text{eff}}/n}\) elements \(a\in\mathbb{R}^{\mathcal{C}_{M}}\) for which the error restricted to \(\bar{\chi}_{d_{\text{eff}}}^{d}\) is_
\[\min_{a\in\mathbb{R}^{\mathcal{C}_{M}}}\max_{x\in\bar{\chi}_{d_{\text{eff}}}^{ d}}|f(x)-M_{\mathcal{C}_{M},bi-\sigma}(a)(x)|\leq\epsilon\]
_where \(C_{5}(d,d_{\text{eff}},n)\) may depend on \(d\), \(d_{\text{eff}}\) and \(n\), but not on \(\epsilon\)._
This restricted subspace model is motivated by natural image inputs with prescribed compression on a known orthogonal basis, such as JPEG compression. This union of subspace model \(\bar{\chi}_{d_{\text{eff}}}^{d}\) is also widely used in the theory of compressed sensing, see [7] and references therein, and has also been used to increase robustness against adversarial attacks on image classification by [9].
## 2. Approximation power of bi-activation networks
Since the proof of Theorem 1.3 is adapted from that of Theorem 1.2 in [16], an understanding of the latter is essential in order to explain the former.
As mentioned previously, [16] first partitions the input \(x\in[0,1]^{d}\) into exponentially many localized portions using a partition of unity \(\{\phi_{\mathbf{m}}\}\), where each \(\phi_{\mathbf{m}}\) is piecewise linear and expressible by a ReLU network with a constant number of parameters (see Proposition 1 in [16]). The aim is then to approximate the function \(f\) by Taylor polynomials locally, giving the following representation for an approximation of \(f\).
**Lemma 2.1** ([16]).: _Let \(\epsilon>0\) be arbitrary and \(f\in W^{n,\infty}([0,1]^{d})\). Then there exists a function \(\tilde{f}\) expressible as_
\[\tilde{f}(x)=\sum_{\mathbf{m}\in\{0,\ldots,N\}^{d}}\sum_{\mathbf{m}:|\mathbf{ n}|<n}a_{\mathbf{m},\mathbf{n}}\phi_{\mathbf{m}}(x)\left(x-\frac{\mathbf{m}}{N} \right)^{\mathbf{n}},\]
_where \(a_{m,n}\in\mathbb{R}\), \(|a_{m,n}|\leq 1\), \(\{\phi_{\mathbf{m}}\}_{\mathbf{m}\in\{0,1,\ldots,N\}^{d}}\) is a partition of unity such that each \(\phi_{\mathbf{m}}\) is given by a product of \(d\) piecewise linear univariate factors. Furthermore, \(\tilde{f}\) is such that_
\[|f(x)-\tilde{f}(x)|\leq\frac{2^{d}d^{n}}{n!}\left(\frac{1}{N}\right)^{n}\max_ {\mathbf{n}:|\mathbf{n}|=n}\text{ess sup}_{x\in[0,1]^{d}}|D^{\mathbf{n}}f(x)|. \tag{2.1}\]
_The proof of this lemma is included in the appendix for completeness._
Showing that ReLU networks can approximate monomials (and, in turn, polynomials) would then complete the proof. Indeed, in Section 3.1 of [16], the author does so by first showing that \(f(x)=x^{2}\) can be approximated by a ReLU network of complexity \(O(\ln(1/\epsilon))\). Using the following identity to recover multiplication from squaring:
\[xy=\frac{1}{2}\big{(}(x+y)^{2}-x^{2}-y^{2}\big{)} \tag{2.2}\]
the author then shows how a ReLU network of complexity \(O(\ln(1/\epsilon))\) can in fact approximate terms of the form \(\phi_{\mathbf{m}}(x)\left(x-\frac{\mathbf{m}}{N}\right)^{\mathbf{n}}\).
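To see concretely why a plain ReLU network needs depth \(O(\ln(1/\epsilon))\) just to square its input, here is a short NumPy rendering of Yarotsky's construction: the "tooth" function \(g\) is a three-ReLU network, and composing it \(m\) times gives \(f_{m}(x)=x-\sum_{s=1}^{m}g_{s}(x)/2^{2s}\) with error at most \(2^{-2m-2}\). The function names are ours.

```python
import numpy as np

def tooth(t):
    """g(t) = 2t on [0, 1/2] and 2(1 - t) on [1/2, 1]; a three-ReLU network."""
    relu = lambda s: np.maximum(s, 0.0)
    return 2 * relu(t) - 4 * relu(t - 0.5) + 2 * relu(t - 1.0)

def approx_square(x, m=8):
    """Yarotsky's depth-O(m) ReLU approximation of x^2 on [0, 1]."""
    total, g = 0.0, x
    for s in range(1, m + 1):
        g = tooth(g)            # g_s = g composed s times
        total += g / 4 ** s
    return x - total            # |approx_square(x, m) - x**2| <= 2**(-2*m - 2)
```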
Lastly, note that in Lemma 2.1, \(\tilde{f}\) is a linear combination of at most \(d^{n}(N+1)^{d}\) such terms. The grid parameter \(N\) controls the resolution of the partition and can be chosen so that the upper bound in (2.1) becomes \(|f(x)-\tilde{f}(x)|<\epsilon\). In Yarotsky's case, this corresponds to choosing
\[N=N(\epsilon,d,n)=\left\lceil\left(\frac{n!}{2^{d}d^{n}}\epsilon\right)^{-1/n }\right\rceil, \tag{2.3}\]
which also yields
\[d^{n}(N+1)^{d}=d^{n}\left(\frac{n!}{2^{d}d^{n}}\epsilon\right)^{-d/n}=O( \epsilon^{-d/n}),\]
and the final ReLU network used to approximate \(f\) therefore consists of \(\mathcal{C}_{Y}=O(\epsilon^{-d/n}\ln(1/\epsilon))\) parameters, due to the \(\log(1/\epsilon)\) depth needed to approximate \(x^{2}\) within \(\epsilon\) using a ReLU network.
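These two quantities are easy to tabulate; the helper below (name ours) computes the grid parameter of Eq. (2.3) and the resulting number of terms \(d^{n}(N+1)^{d}\) from Lemma 2.1.

```python
import math

def grid_and_terms(eps, d, n):
    """N from Eq. (2.3) and the term count d^n (N+1)^d of Lemma 2.1."""
    N = math.ceil((math.factorial(n) / (2 ** d * d ** n) * eps) ** (-1.0 / n))
    return N, d ** n * (N + 1) ** d
```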
### Proof of Theorem 1.3, Optimal approximation order bi-activation networks
Proof of Theorem 1.3.: Let \(\tilde{f}\) be the approximation to \(f\) given by Lemma 2.1. Since \(f\) is in the unit-ball in \(W^{n,\infty}\), \(\max_{\mathbf{n}:|\mathbf{n}|=n}\text{ess sup}_{x\in[0,1]^{d}}|D^{\mathbf{n}} f(x)|\leq 1\). Choosing the same \(N\) as in 2.3, we find that \(||f-\tilde{f}||_{\infty}\leq\epsilon\).
In contrast to ReLU networks, we claim that bi-activation networks can represent terms of the form \(\phi_{\mathbf{m}}(x)(x-\mathbf{m}/N)^{\mathbf{n}}\)_exactly_ using a constant number of trainable parameters. Indeed, each of these terms is itself a product of at most \(d+n-1\) piecewise linear univariate factors: a product of \(d\) functions defining each \(\phi_{\mathbf{m}}\) and at most \(n-1\) functions \(x_{k}-m_{k}/N\). These products can be implemented by a bi-activation network with a complexity of the order of \((n+d)\) and depth of the order of \(\log_{2}(n+d)\) (in both cases, \(O(1)\) with respect to \(\epsilon\)), by repeatedly pairing up the terms and multiplying them in tournament fashion (see figure 1). The multiplication of two terms can be achieved by a bi-activation network of constant size using (2.2)3.
Footnote 3: More specifically, we can use a network with activation function \(x^{2}\) which has one hidden layer. The inputs \(x\) and \(y\) connect fully to the hidden layer with three nodes, and weights \([0,1],[1,0]\) and \([1,1]\). The three nodes are connected to the output with weight \([-1/2,-1/2,1/2]\).
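This one-hidden-layer construction is exact, as a two-line check confirms; the variable names are ours.

```python
import numpy as np

def mul_via_square(x, y):
    """Exact product via Eq. (2.2) with one hidden layer of t -> t^2 units:
    input weights [0,1], [1,0], [1,1]; output weights [-1/2, -1/2, 1/2]."""
    hidden = np.array([y, x, x + y]) ** 2
    return float(np.dot([-0.5, -0.5, 0.5], hidden))   # equals x * y
```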
Therefore, \(\tilde{f}\) can be written by a bi-activation network \(M_{\mathcal{C}_{F},bi-\sigma}(a)\) with \(\mathcal{C}_{F}=O(d^{n}(N+1)^{d})\) parameters as follows. The network uses parallel subnetworks that each compute a term in the series defining \(\tilde{f}\), and computes the final output by summing the outputs of these subnetworks, weighted with the appropriate \(a_{\mathbf{m},\mathbf{n}}\). Since there are not more than \(d^{n}(N+1)^{d}\) subnetworks, the network has \(\mathcal{C}_{F}=C_{3}(d,n)d^{n}(N+1)^{d}\) weights and computation units, for some constant \(C_{3}(d,n)\). For our choice of \(N\) in (2.3) to achieve an \(\epsilon\) accurate approximation, \(\mathcal{C}_{F}=O(\epsilon^{-d/n})\).
### Proof of Theorem 1.4, Optimal approximation order bi-activation networks: Analytic functions
Proof of Theorem 1.4.: Once again, let \(\tilde{f}\) be the approximation to \(f\) given by Lemma 2.1, noting that \(f\in W^{n,\infty}([0,1]^{d})\) for all \(n\) as it is analytic. Then applying the bound on \(|f(x)-\tilde{f}(x)|\) given by the same Lemma and the bound on smoothness for analytic functions (1.2), we find that
\[|f(x)-\tilde{f}(x)| \leq\frac{2^{d}d^{n}}{n!}\left(\frac{1}{N}\right)^{n}\max_{ \mathbf{n}:|\mathbf{n}|=n}\text{ess sup}_{x\in[0,1]^{d}}|D^{\mathbf{n}}f(x)|\] \[\leq\frac{2^{d}d^{n}}{n!}\left(\frac{1}{N}\right)^{n}C_{f}^{n+1}n!\] \[\leq 2^{d}d^{n}\left(\frac{C_{f}}{N}\right)^{n+1}\,,\]
Figure 1. Multiplication of \(k\) elements (depicted in black) using \(O(k)\) subnetworks (depicted in red) each of constant size (independent of \(d\) and \(n\)), giving a network of depth \(O(\ln_{2}(k))\). Here, \(k=8\).
where \(C_{f}\) is a constant depending on \(f\).
Notice that in this case the result holds for all \(n\). This means that, when picking \(N\), we can optimize over \(n\) to minimize the number of trainable parameters needed by our network. To begin with, choosing
\[N_{1}=N(C_{f},\epsilon,d,n)=C_{f}\left\lceil\left(\frac{\epsilon}{2^{d}d^{n}}\right)^{-1/(n+1)}\right\rceil \tag{2.4}\]
we get that \(||f-\tilde{f}||_{\infty}\leq\epsilon\).
Arguing in the exact same manner as in the proof of Theorem 1.3, we know that \(\tilde{f}\) can be written as a bi-activation neural network \(M_{\mathcal{C}_{A},bi-\sigma}(a)\). The total number of parameters \(\mathcal{C}_{A}\) then needed by the network to represent \(\tilde{f}\) is equal to
\[\mathcal{C}_{A}=C_{4}(C_{f},n,d)d^{n}(N+1)^{d} \tag{2.5}\]
for some constant \(C_{4}=C_{4}(C_{f},n,d)\) that does not depend on \(\epsilon\). Substituting the choice of \(N\) in (2.4) in (2.5) and minimizing over \(n\), we find that \(\mathcal{C}_{\mathcal{A}}\) is minimal for
\[n_{\min}=\sqrt{\frac{d\left(d\log(2)+\log\left(\frac{1}{\epsilon}\right) \right)}{\log(d)}}. \tag{2.6}\]
Substituting (2.6) and (2.4) into (2.5), gives us
\[\mathcal{C}_{A}=C_{4}\cdot 2^{\frac{d^{3/2}\sqrt{\log(d)}}{\sqrt{d\log(2)+ \log\left(\frac{1}{\epsilon}\right)}}}\epsilon^{\frac{\sqrt{d}\sqrt{\log(d)}}{ \sqrt{d\log(2)+\log\left(\frac{1}{\epsilon}\right)}}}\log^{\frac{d}{2}}\left( \frac{1}{\epsilon}\right),\]
which grows as \(\epsilon\to 0\) in the order of
\[(2\epsilon)^{\log^{-\frac{1}{2}\left(\frac{2^{d}}{\epsilon}\right)}}\log^{ \frac{d}{2}}\left(\frac{1}{\epsilon}\right),\]
concluding the proof.
### Proof of Theorem 1.5, Optimal approximation order bi-activation networks: low-dimensional subspaces
For clarity, first consider the simplest case of \(x\in\chi_{d_{\mathrm{eff}},e}^{d}\), for a known \(e\in I\). Without loss of generality this can be the first \(d_{\mathrm{eff}}\) dimensions of \(\mathbb{R}^{d}\) being nonzero, that is \(f\circ A(x):=f(Ax)\) for \(A\in\mathbb{R}^{d\times d_{\mathrm{eff}}}\) given by
\[A=\begin{pmatrix}1&0&\dots&0&0\\ 0&1&&0&0\\ \vdots&&\ddots&&\vdots\\ 0&0&&1&0\\ 0&0&&0&1\\ 0&0&\dots&0&0\end{pmatrix}. \tag{2.7}\]
In this case we have \(f|_{\chi_{d_{\mathrm{eff}},e}^{d}}=f\circ A\). When we consider that the function we try to approximate is of the form \(f\circ A\), we get the following lemma.
**Lemma 2.2** (Optimal approximation order bi-activation networks: low-dimensional single subspace).: _For function \(f(x)\) with \(||f||_{W^{n,\infty}([0,1]^{d})}\leq 1\) where \(x\) is restricted to a single
canonical subspace \(\chi_{d_{\text{eff}},e}^{d}\), there exists \(M_{\mathcal{C}_{S},bi-\sigma}(a)\) formed as a feed-forward network with \(\mathcal{C}_{S}=C_{6}(d,n)\epsilon^{-d_{\text{eff}}/n}\) elements \(a\in\mathbb{R}^{\mathcal{C}_{S}}\) for which_
\[\min_{a\in\mathbb{R}^{\mathcal{C}_{S}}}\max_{x\in\chi_{d_{\text{eff}},e}^{d}}| f(x)-M_{\mathcal{C}_{S},bi-\sigma}(a)(x)|\leq\epsilon\]
_where \(C_{6}(d,n)\) may depend on \(d\) and \(n\), but not on \(\epsilon\)._
We prove Lemma 2.2, by showing that \(\|f\circ A\|_{W^{n,\infty}[0,1]^{d_{\text{eff}}}}\leq 1\) and then applying Theorem 1.3.
Proof.: Fix \(d,n\in\mathbb{N}\), \(d_{\text{eff}}\in\mathbb{N}\) such that \(d_{\text{eff}}<d\), and \(\epsilon\in(0,1)\). We consider, without loss of generality, \(f\) and \(A\) as prescribed; then, by upper bounding \(\|f\circ A\|_{W^{n,\infty}([0,1]^{d_{\text{eff}}})}\) by \(1\), we can apply Theorem 1.3. For a multi-index \(\mathbf{n}\) with \(|\mathbf{n}|=n\) we have that
\[\begin{split}&\text{esssup }_{x\in[0,1]^{d_{\text{eff}}}}|D^{\mathbf{n}}(f\circ A)(x)|\\ =&\text{esssup }_{x\in[0,1]^{d_{\text{eff}}}}| \partial_{x_{1}}^{n_{1}}\partial_{x_{2}}^{n_{2}}\ldots\partial_{x_{d_{\text{ eff}}}}^{n_{d_{\text{eff}}}}(f\circ A)(x)|\\ =&\text{esssup }_{x\in[0,1]^{d_{\text{eff}}}} \left|\partial_{x_{1}}^{n_{1}}\partial_{x_{2}}^{n_{2}}\ldots\partial_{x_{d_{ \text{eff}}}}^{n_{d_{\text{eff}}}-1}\sum_{i=1}^{d}\partial_{x_{i}}(f)(Ax)\cdot A _{id_{\text{eff}}}\right|\\ =&\text{esssup }_{x\in[0,1]^{d_{\text{eff}}}}| \partial_{x_{1}}^{n_{1}}\partial_{x_{2}}^{n_{2}}\ldots\partial_{x_{d_{\text{ eff}}}}^{n_{d_{\text{eff}}}-1}\partial_{x_{d_{\text{eff}}}}(f)(Ax)\cdot A _{d_{\text{eff}}d_{\text{eff}}}\Big{|}\\ =&\text{esssup }_{x\in[0,1]^{d_{\text{eff}}}} \left|\partial_{x_{1}}^{n_{1}}\partial_{x_{2}}^{n_{2}}\ldots\partial_{x_{d_{ \text{eff}}}}^{n_{d_{\text{eff}}}}(f)(Ax)\right|\\ =&\text{esssup }_{x\in[0,1]^{d_{\text{eff}}}}|D^{ \mathbf{n}}(f)(Ax)|\\ \leq&\text{esssup }_{x\in[0,1]^{d}}|D^{\mathbf{n}}(f)(x)|\leq 1. \end{split} \tag{2.8}\]
Here in (2.8) we use the argument above \(|\mathbf{n}|\) times. Taking the maximum over \(\mathbf{n}\) gives us that
\[\|f\circ A\|_{W^{n,\infty}([0,1]^{d_{\text{eff}}})}\leq 1.\]
To finish the proof we apply Theorem 1.3.
The reason we introduce the previous lemma is that for all canonical subspaces of dimension \(d_{\text{eff}}\), we can assume without loss of generality that there exists a matrix \(A\) of the form of (2.7).
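The selection matrices for all \(\binom{d}{d_{\text{eff}}}\) canonical subspaces can be enumerated directly; the snippet below (names ours) builds each \(A\) of the form (2.7) for a given index set \(e\).

```python
from itertools import combinations
import numpy as np

def canonical_subspace_matrices(d, d_eff):
    """One selection matrix A in R^{d x d_eff} per index set e, so that
    x = A z ranges over chi^d_{d_eff, e} as z ranges over [0,1]^{d_eff}."""
    for e in combinations(range(d), d_eff):
        A = np.zeros((d, d_eff))
        for col, row in enumerate(e):
            A[row, col] = 1.0   # place the identity rows at the indices in e
        yield e, A
```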
Proof of Theorem 1.5.: For any \(d,n,d_{\text{eff}}\in\mathbb{N}\) such that \(d_{\text{eff}}<d\) and \(\epsilon\in(0,1)\), we define \(\hat{f}:[0,1]^{d}\rightarrow\mathbb{R}\) by \(\hat{f}|_{\chi_{d_{\text{eff}},e}^{d}}=\tilde{f}\) for all \(e\in I\), where \(\tilde{f}\) is as in Lemma 2.2, and zero elsewhere. Then for \(\hat{f}\) we have:
\[\sup_{x\in\tilde{\chi}_{d_{\text{eff}}}^{d}}|\hat{f}(x)-f(x)|\leq \sum_{e\in I}\sup_{x\in\chi_{d_{\text{eff}},e}^{d}}|\hat{f}(x)-f (x)|\leq\frac{2^{d_{\text{eff}}}d_{\text{eff}}^{n}}{n!}\left(\frac{1}{N} \right)^{n}\binom{d}{d_{\text{eff}}}. \tag{2.9}\]
Setting
\[N=N(\epsilon,d,d_{\text{eff}},n)=\left[\left(\frac{n!\binom{d}{d_{\text{eff}}}} {2^{d_{\text{eff}}}d_{\text{eff}}{n}^{n}}\epsilon\right)^{-1/n}\right]\]
and plugging \(N\) in (2.9), we get \(\sup_{x\in\bar{\chi}_{d_{\text{eff}}}^{d}}|\hat{f}(x)-f(x)|\leq\epsilon\). Furthermore, by Lemma 2.2, \(\tilde{f}\) can be implemented as a feed-forward network \(M_{\mathcal{C}_{S},bi-\sigma}(a)(x)\). Then \(\hat{f}\) can be formed by combining these networks, which results in a total feed-forward network \(M_{\mathcal{C}_{M},bi-\sigma}(a)(x)\), where
\[\mathcal{C}_{M}=\binom{d}{d_{\text{eff}}}d_{\text{eff}}{}^{n}(N+1)^{d_{\text{ eff}}}=C_{5}(d,d_{\text{eff}},n)\epsilon^{-d_{\text{eff}}/n},\]
which finishes our proof.
**Remark 2.3**.: Although the networks in the case of Lemma 2.2 and Theorem 1.5 have the same \(\epsilon\) functional dependence in their number of parameters, the total size of the network \(\mathcal{C}_{\mathcal{S}}\) and \(\mathcal{C}_{\mathcal{M}}\) will be different, as they also depend in a different way on \(d,d_{\text{eff}}\) and \(n\).
## 3. Conclusions
We have shown that bi-activation networks, which use both the ReLU and \(x^{2}\) as activation functions, have greater approximation power than ReLU networks. By repurposing a proof of [16] for ReLU networks, we have derived upper bounds for the number of parameters needed by bi-activation networks to approximate functions in the unit ball of the Sobolev space \(W^{n,\infty}([0,1]^{d})\) achieving the optimal order \(O(\epsilon^{-d/n})\) number of parameters as lower bounded by [6]. We also extended our result to analytic functions on \([0,1]^{d}\) for yet superior \(\epsilon\) dependence and to low-dimensional subspaces to overcome the curse of dimensionality.
Natural extensions of these results are 1) to determine if a feedforward, or another network, with a single nonlinear activation can achieve the optimal order \(O(\epsilon^{-d/n})\) number of parameters, and 2) to consider further low-complexity models of \(f(x)\) beyond the union of subspaces, see for instance the nested structure considered in [14].
## Acknowledgments
VG and JH would like to thank the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1) for its support. JT is supported by the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA) and thanks UCLA Department of Mathematics for kindly hosting him during the completion of this manuscript.
|
2306.15447 | Are aligned neural networks adversarially aligned? | Large language models are now tuned to align with the goals of their
creators, namely to be "helpful and harmless." These models should respond
helpfully to user questions, but refuse to answer requests that could cause
harm. However, adversarial users can construct inputs which circumvent attempts
at alignment. In this work, we study adversarial alignment, and ask to what
extent these models remain aligned when interacting with an adversarial user
who constructs worst-case inputs (adversarial examples). These inputs are
designed to cause the model to emit harmful content that would otherwise be
prohibited. We show that existing NLP-based optimization attacks are
insufficiently powerful to reliably attack aligned text models: even when
current NLP-based attacks fail, we can find adversarial inputs with brute
force. As a result, the failure of current attacks should not be seen as proof
that aligned text models remain aligned under adversarial inputs.
However the recent trend in large-scale ML models is multimodal models that
allow users to provide images that influence the text that is generated. We
show these models can be easily attacked, i.e., induced to perform arbitrary
un-aligned behavior through adversarial perturbation of the input image. We
conjecture that improved NLP attacks may demonstrate this same level of
adversarial control over text-only models. | Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramer, Ludwig Schmidt | 2023-06-26T17:18:44Z | http://arxiv.org/abs/2306.15447v2 | # Are aligned neural networks adversarially aligned?
###### Abstract
Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, _adversarial_ users can construct inputs which circumvent attempts at alignment. In this work, we study to what extent these models remain aligned, even when interacting with an _adversarial_ user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs.
However, the recent trend in large-scale ML models is _multimodal_ models that allow users to provide images that influence the text that is generated. We show these models can be easily attacked, i.e., induced to perform arbitrary un-aligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models. **Warning: some content generated by language models in this paper may be offensive to some readers.**
Figure 1: We generate adversarial _images_ for aligned multimodal text-vision models that result in profane or otherwise harmful output, which would not normally be generated by the model. When presented with clean inputs the models follow their instruction tuning and produce harmless output, but by providing a worst-case maliciously-constructed input we can induce arbitrary output behavior discouraged by the alignment techniques.
## 1 Introduction
Aligned language models are supposed to be "helpful and harmless" [10]: they should respond helpfully to user interaction, but avoid causing harm, either directly or indirectly. Prior work has focused extensively on how to train models to align with the preferences and goals of their creators. For example, reinforcement learning through human feedback (RLHF) [10, 11] fine-tunes a pretrained model to emit outputs that humans judge to be desirable, and discourages outputs that are judged to be undesirable. This method has been successful at training models that produce benign content that is generally agreeable.
However, these models are not perfectly aligned. By repeatedly interacting with models, humans have been able to "social engineer" them into producing some harmful content (i.e., "jailbreak" attacks). For example, early attacks on ChatGPT (one such alignment-tuned language model) worked by telling the model the user is a researcher studying language model harms and asking ChatGPT to help them produce test cases of what a language model should not say. While there have been many such anecdotes where humans have manually constructed harm-inducing prompts, it has been difficult to scientifically study this phenomenon.
Fortunately, the machine learning community has by now studied the fundamental vulnerability of neural networks to _adversarial examples_ for a decade [12, 13]. Given any trained neural network and arbitrary behavior, it is almost always possible to optimize inputs that cause the selected behavior. Much of the early adversarial machine learning work focused on the domain of image classification, where it was shown that it is possible to minimally modify images so that they will be misclassified as an arbitrary test label. But adversarial examples have since been expanded to text [14, 15, 16, 17] and other domains.
In this paper we unify these two research directions and study if aligned models are resistant to adversarial inputs. That is, we ask the question:
_Are aligned neural network models "adversarially aligned"?_
First, we show that current alignment techniques--such as those used to fine-tune the Vicuna model [14]--are an effective defense against existing state-of-the-art (white-box) NLP attacks. This suggests that the above question can be answered in the affirmative. Yet, we further show that existing attacks are simply not powerful enough to distinguish between robust and non-robust defenses: even when we _guarantee_ that an adversarial input on the language model exists, we find that state-of-the-art attacks fail to find it. The true adversarial robustness of current alignment techniques thus remains an open question, which will require substantially stronger attacks to resolve.
We then turn our attention to today's most advanced _multimodal_ models, such as OpenAI's GPT-4 and Google's Flamingo and Gemini, which accept both text and images as input [14, 15, 16]. Specifically, we study open-source implementations with similar capabilities [13, 15, 16] since these proprietary models are not publicly accessible. We find that we can use the continuous-domain images as adversarial prompts to cause the language model to emit harmful toxic content (see, e.g., Figure 1). Because of this, we conjecture that improved NLP attacks may be able to trigger similar adversarial behavior on alignment-trained text-only models, and call on researchers to explore this understudied problem.
Some alignment researchers [17, 18, 19, 16, 15] believe that sufficiently advanced language models should be aligned to prevent an existential risk [15] to humanity: if this were true, an attack that causes such a model to become misaligned would be devastating. Even if these advanced capabilities do not come to pass, the machine learning models of today already face practical security risks [15, 16]. Our work suggests that eliminating these risks via current alignment techniques--which do not specifically account for adversarially optimized inputs--is unlikely to succeed.
## 2 Background
Our paper studies the intersection of two research areas: AI alignment and adversarial examples.
**Large language models.** As large language model parameter count, training dataset size, and training duration have been increased, the models have been found to exhibit complex behaviors (Brown et al., 2020; Wei et al., 2022; Ganguli et al., 2022). In this work, we focus on models trained with causal "next-word" prediction, and use the notation \(s\leftarrow\mathtt{Gen}(x)\) to denote a language model emitting a sequence of tokens \(s\) given a prompt \(x\). Many applications of language models take advantage of emergent capabilities that arise from increased scale. For instance, language models are commonly used to perform tasks like question answering, translation, and summarization (Brown et al., 2020; Chowdhery et al., 2022; Rae et al., 2022; Anil et al., 2023; Liang et al., 2022; Goyal et al., 2022).
**Aligning large language models.** Large pretrained language models can perform many useful tasks without further tuning (Brown et al., 2020), but they suffer from a number of limitations when deployed _as is_ in user-facing applications. First, these models do not follow user instructions (e.g., "write me a sorting function in Python"), likely because the model's pretraining data (e.g., Internet text) contains few instruction-answer pairs. Second, by virtue of faithfully modeling the distribution of Internet text, the base models tend to reflect and even exacerbate biases (Abid et al., 2021), toxicity, and profanity (Welbl et al., 2021; Dixon et al., 2018) present in the training data.
Model developers thus attempt to _align_ base models with certain desired principles, through techniques like instruction tuning (Wei et al., 2022; Ouyang et al., 2022) and reinforcement learning via human feedback (RLHF) (Christiano et al., 2023; Bai et al., 2022). Instruction tuning finetunes a model on tasks described with instructions. RLHF explicitly captures human preferences by supervising the model towards generations preferred by human annotators (Christiano et al., 2023).
**Multimodal text-vision models.** Increasingly, models are multimodal, with images and text being the most commonly combined modalities (OpenAI, 2023; Pichai, 2023; Liu et al., 2023; Zhu et al., 2023). Multimodal training allows these models to answer questions such as "how many people are in this image?" or "transcribe the text in the image".
While GPT-4's multimodal implementation has not been disclosed, there are a number of open-source multimodal models that follow the same general protocol (Gao et al., 2023; Liu et al., 2023; Zhu et al., 2023). These papers start with a standard pre-trained language model that tokenizes the input text and then processes the resulting token embeddings. To process images, they use a pretrained vision encoder like CLIP (Radford et al., 2021) to encode images into an image embedding, and then train a _projection model_ that converts image embeddings into token embeddings processed by the language model. These visual tokens may be passed directly as an input to the model (Zhu et al., 2023; Liu et al., 2023), surrounded by special templates (e.g., "<img>... <img>") to delineate their modality, or combined internally to the model via learned adaptation prompts (Gao et al., 2023).
**Adversarial examples.** Adversarial examples are inputs designed by an adversary to cause a neural network to perform some incorrect behavior (Szegedy et al., 2014; Biggio et al., 2013). While primarily studied on vision classification tasks, adversarial examples also exist for textual tasks such as question answering (Jia and Liang, 2017; Wallace et al., 2019), document classification (Ebrahimi et al., 2017), sentiment analysis (Alzantot et al., 2018), or triggering toxic completions (Jones et al., 2023; Wallace et al., 2019). Prior work on textual tasks has either applied greedy attack heuristics (Jia and Liang, 2017; Alzantot et al., 2018) or used discrete optimization to search for input text that triggers the adversarial behavior (Ebrahimi et al., 2017; Wallace et al., 2019; Jones et al., 2023).
In this paper, we study adversarial examples from the perspective of _alignment_. Because aligned language models are intended to be general-purpose--with strong performance on many different tasks--we focus more broadly on adversarial examples that cause the model to produce unwarranted harmful behavior, rather than adversarial examples that simply cause "misclassification".
Our inputs are "adversarial" in the sense that they are specifically optimized to produce some targeted and unwanted outcome. Unlike recent "social-engineering" attacks on language models that induce harmful behavior by tricking the model into playing a harmful role (for example, taking on the persona of a racist movie actor (Reddit, 2023)), we make no effort to ensure our attacks are semantically meaningful, and they often will not be.
## 3 Threat Model
There are two primary reasons researchers study adversarial examples. On the one hand, researchers are interested in evaluating the robustness of machine learning systems in the presence of real adversaries. For example, an adversary might try to construct inputs that evade machine learning models used for content filtering (Tramer et al., 2019; Welbl et al., 2021) or malware detection (Kolosnjaji et al., 2018), and so designing robust classifiers is important to prevent a real attack.
On the other hand, researchers use adversarial robustness as a way to understand the worst-case behavior of some system (Szegedy et al., 2014; Pei et al., 2017). For example, we may want to study a self-driving car's resilience to worst-case, adversarial situations, even if we do not believe that an actual attacker would attempt to cause a crash. Adversarial examples have seen extensive study in the _verification_ of high-stakes neural networks (Wong and Kolter, 2018; Katz et al., 2017), where adversarial examples serve as a lower bound of error when formal verification is not possible.
### Existing Threat Models
Existing attacks assume that a _model developer_ creates the model and uses some alignment technique (e.g., RLHF) to make the model conform with the developer's principles. The model is then made available to a _user_, either as a standalone model or via a chat API. There are two common settings under which these attacks are mounted, which we describe below.
**Malicious user:** The user attempts to make the model produce outputs misaligned with the developer's principles. Common examples of this are _jailbreaks_ of chatbots such as ChatGPT or Bard where a user uses an adversarial example (a maliciously designed prompt) to elicit the desired unaligned behavior, such as outputting instructions for building a bomb. In this setting, there is no need for the attack to be "stealthy".
**Malicious third-party:** An honest user might query an alignment-tuned language model as part of an autonomous system that processes untrusted third-party data (e.g., a virtual assistant that reads and writes the user's emails). A malicious third-party could launch a _prompt injection attack_(Greshake et al., 2023) by feeding data to the language model to hijack its behavior (e.g., exfiltrating the user's emails). In this setting, there might be stricter constraints on a valid attack.
### Our Threat Model
In this paper we are focused on better understanding the limitations of current alignment techniques. As such, we use adversarial examples primarily to measure their worst-case behavior. That is, for the most part we are not concerned with any particular practical adversary, and instead, only with finding any valid input that achieves our attack goal. That said, our attacks are likely to be practical in both settings where a malicious user is attacking the trained language model to allow it to perform malicious behavior, as well as settings where an honest user copies malicious text from an adversary.
**Attack Goal.** Independent of who is attacking whom, it is also necessary to define the ultimate attack objective. While attackers could aim to induce a number of harmful behaviors (e.g., triggering a model into outputting dangerous instructions, or emitting private training data), we focus here on the specific goal of triggering _toxic_ outputs. The main reason is that toxicity is relatively easy to (approximately) evaluate in an automated way. We use an extremely simplistic definition for toxicity (checking for the presence or absence of a selected list of toxic words in the output of the model), noting that definitions of toxicity are cultural and contextual and widely studied. We formalize our toxicity attack objective as follows:
Specifically, given a toxicity-detection function isToxic, and a language model Gen that is never supposed to generate text that causes isToxic to be true, an input \(X\) causes harm if isToxic(Gen\((X)\)) returns true. When the generation function is randomized (i.e., it is not greedy sampling) we might require \(\Pr[\texttt{isToxic}(\texttt{Gen}(X))]>\varepsilon\), for some probability \(\varepsilon>0\). For simplicity, in our experiments, we set the temperature to \(0\) to make models deterministic.
Choosing an appropriate method for isToxic is important. In this paper we use simple substring lookup. Our attack chooses some harmful target sequence (for example, a curse word) that the model should generate, and then we report a generation as toxic if it contains this word anywhere.
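In code, this check might look as follows. This is a minimal sketch of the substring-based definition above; the placeholder phrases are hypothetical, since the actual targets are sampled from the harmful-text dataset of Jones et al. (2023):

```python
# Minimal sketch of the substring-based isToxic check described above.
# The target list is a hypothetical placeholder, not the paper's actual data.
TARGET_PHRASES = ["<toxic phrase 1>", "<toxic phrase 2>"]

def is_toxic(generation: str, targets=TARGET_PHRASES) -> bool:
    """Return True if any attacker-chosen target phrase appears anywhere
    in the model output (case-insensitive substring lookup)."""
    text = generation.lower()
    return any(t.lower() in text for t in targets)
```

With temperature set to \(0\), the generation function is deterministic, so a single call to the model suffices to evaluate the attack objective isToxic(Gen\((X)\)).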
## 4 Evaluating Aligned Models with NLP Attacks
The most direct method for generating an adversarial text sequence would be to directly optimize some differentiable proxy for the attack objective, \(\mathtt{isToxic}(\mathtt{Gen}(X))\). However there are two challenges with this approach:
1. Text tokens are discrete, and so continuous optimization via common optimization algorithms, e.g., gradient descent, is unlikely to be effective [Ebrahimi et al., 2017].
2. There is often not one _exact_ target. And so in order to check if the attack succeeded, we would have to query the model to emit one token at a time. Thus, in order to pass a long sequence \(S\) into the toxicity classifier we would need to generate \(|S|\) tokens and then perform back propagation through \(|S|\) neural network forward passes.
While the first challenge above is a fundamental challenge of neural language models, the second is not fundamental. Instead of directly optimizing the true objective, i.e., checking that \(\mathtt{isToxic}(S)\) is true, we can optimize a surrogate objective of making \(\mathtt{isToxic}(S_{:j})\) be true for some attacker-chosen fixed-length string \(S_{:j}\) with \(j\ll|S|\). Observe that this makes optimization _much_ easier, as we can now perform just _one single forward pass_ to target exactly this string. Further, because this substring is contained within the larger output \(S\), it is guaranteed that \(\mathtt{isToxic}(S)\) will be true as well. However, this approach may make the attack slightly more difficult: it may be harder to make the model emit the immediate next token as toxic, rather than to eventually do so after being steered toward it.
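The sketch below illustrates how such a surrogate can be scored in one forward pass via teacher forcing. It assumes a Hugging-Face-style causal language model whose forward call returns a `.logits` tensor; that interface is an assumption for illustration, not the paper's actual code:

```python
import torch
import torch.nn.functional as F

def surrogate_loss(model, prompt_ids: torch.Tensor, target_ids: torch.Tensor):
    """Cross-entropy for the model emitting target_ids immediately after
    prompt_ids, computed with a single teacher-forced forward pass."""
    input_ids = torch.cat([prompt_ids, target_ids]).unsqueeze(0)
    logits = model(input_ids).logits          # shape: (1, seq_len, vocab)
    # The logit at position t predicts token t+1, so positions
    # len(prompt)-1 ... len(prompt)+j-2 predict the j target tokens.
    start = prompt_ids.numel() - 1
    preds = logits[0, start : start + target_ids.numel()]
    return F.cross_entropy(preds, target_ids)
```

Minimizing this loss over the attacker-controlled portion of `prompt_ids` directly targets the fixed-length toxic string \(S_{:j}\).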
In this section, we will study the suitability of prior attack methods for achieving our toxicity objective against a variety of chat bot models, both trained with and without alignment techniques.
### Our Target: Aligned Chat Bots
Alignment techniques (such as RLHF) are typically not applied to "plain" language models, but rather to models that have been first tuned to interact with users via a simple chat protocol.
Typically, this is done by formatting the input to underlying language model with a specific interleaving of messages, separated by special tokens that indicate the source and boundaries of each prior message.
[USER]: "Hello, how are you?"
[AGENT]: _'I am a large language model.'_
[USER]: "What is 1+2?"
[AGENT]: _'3.'_
In the above example, the chat bot's user typed in the messages in double-quotes, and the language model generated the italicized text in single-quotes. The special tokens '[USER]:' and '[AGENT]:' are automatically inserted by the chat bot application to delineate rounds of interaction when prompting the language model for its next message.
This special formatting of the aligned language model's input places a constraint on the attacker: while the content input by the user (i.e., the text in double quotes) could be arbitrarily manipulated, the prior chat history as well as the special '[USER]:' and '[AGENT]:' tokens cannot be modified. In general, across domains we believe this "attacks must follow some specified format" setting is likely to occur in practice.
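Concretely, the prompt assembly might be sketched as follows; the `[USER]`/`[AGENT]` delimiters are the illustrative tokens from the transcript above, not any particular model's real chat template:

```python
def build_prompt(history, attacker_message: str) -> str:
    """Assemble the language model's input. Only attacker_message is under
    the adversary's control; the prior chat history and the [USER]/[AGENT]
    delimiters are inserted by the application and cannot be modified."""
    fixed_prefix = "".join(f"[USER]: {u}\n[AGENT]: {a}\n" for u, a in history)
    return f"{fixed_prefix}[USER]: {attacker_message}\n[AGENT]:"
```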
### Prior Attack Methods
A number of prior works have studied adversarial examples against NLP models.
The most closely related to our goal is the work of Jones et al. [2023], who study the possibility of _inverting_ a language model, i.e., of finding an adversarial prompt \(X\) that causes a model \(f\) to output some targeted string \(y\gets f(X)\). Their technique succeeds in making a model emit the names of US Senators or emit toxic words after particular individuals. However, they assume a stronger threat model than we do here, one which does not fully match the recent deployment of alignment-trained language models. Indeed, as noted above in Section 4.1, these language models typically
operate as chat bots that interleave attacker-controlled messages with model generations and special separation tokens. As a result, this paper considers the generic problem of designing an adversarial prompt \(X\) such that \(\texttt{Gen}(P_{pre}||X||P_{post})\) is toxic, where \(P_{pre}\) and \(P_{post}\) are _non-adversarial_ parts of the model's prompt that respectively precede and succeed the attacker's input.
In contrast, the setup in Jones et al. (2023) assumes that the attacker can either control the entire prompt (i.e., the aim is to make \(\texttt{Gen}(X)\) toxic), or at least the text immediately preceding the model's next generation (i.e., \(\texttt{Gen}(P_{pre}||X)\)). In both cases, the attacker controls the tokens right before the model's generation, which makes an attack much easier (but not relevant for current chat bot deployments). (However: there are several settings where their threat model _is_ valid--for example, in code completion tasks this is likely the case. Our threat model is just more general.)
### Our Evaluation Setup
Thus, we assume the adversary can control only _their_ messages (following the [USER]: token), and then the special [AGENT]: token is appended to the prompt sequence to indicate the end of the prompt and that the LLM should begin generating a response. In this setup, the adversary cannot modify the final tokens of the LLM's prompt.
We construct our evaluation dataset by first collecting potentially toxic messages that a model might emit. For each potentially toxic message, we prepend a set of benign conversations followed by the special tokens used to prompt the LLMs response. An attack is successful if it causes the toxic output to be emitted. We source benign conversations from the Open Assistant dataset. This dataset consists of a series of hypothetical interactions between a human and an assistant, which we truncate to \(K<N\) rounds. We use a random subset of harmful texts from Jones et al. (2023) as the attack objective, with attacks targeting between one and three tokens of toxic text.
We attack three different publicly available models:
* **GPT-2**: a model that was neither trained to act as a chatbot, nor to be aligned;
* **LLaMA**: a model trained to chat with users but with no explicit alignment attempt;
* **Vicuna**: a version of LLaMa that was aligned via fine-tuning to prevent toxic generations (and other harmful behaviors).
### Prior Attacks Results
We evaluate the ARCA attack from Jones et al. (2023) and the GBDA attack from Guo et al. (2021). As ARCA generalizes and dominates prior attacks such as Wallace et al. (2019); Ebrahimi et al. (2017), we do not also re-evaluate these. Both ARCA and GBDA are largely ineffective at causing the model to emit toxic content in our setting even when allowed to inject **thirty** malicious tokens. (We choose this number to match experiments that follow later in the paper.) This should not be surprising: already in Jones et al. (2023), the attack had under 40% attack success rate in getting GPT-2 to output toxic text, even when the adversary had complete control of every token up to the desired toxic phrases. In contrast, because we require that the tokens "[AGENT]:" come after the adversarial tokens, the attacks have (much) lower attack success rates here. Table 1 summarizes these results.
| Model | Alignment | Distant Attack: ARCA | Distant Attack: GBDA | Nearby Attack: ARCA | Nearby Attack: GBDA |
|---|---|---|---|---|---|
| GPT-2 | None | 67% ± 4% | 12% ± 3% | 84% ± 3% | 16% ± 3% |
| LLaMA | None | 2% ± 1% | 1% ± 1% | 61% ± 6% | 10% ± 1% |
| Vicuna | Instruct | 6% ± 2% | 0% ± 1% | 8% ± 2% | 1% ± 1% |

Table 1: Attack success rates of prior attacks in constructing adversarial prompts that cause models to output toxic content. We allow the adversary to modify up to \(30\) tokens of text. We say an attack is "Distant" if the adversarial tokens come _before_ the question, and "Nearby" if the adversarial tokens come _after_ the question.
**Qualitative Evaluation: Toxicity:** We qualitatively inspect a subset of these generations from each setting in Table 1. We find that in most cases the output is indeed toxic as expected. But in fewer than ten cases, we find that our automated metric has potential limitations. For example, when told to produce one toxic phrase, Vicuna responds "F*** you. I'm kidding, of course. As a AI visual assistant, I'm programmed to be polite and provide assistance to the best of my ability. How can I assist you today?", and in another case it responds "(Note: This is not appropriate language and is not a proper response to the prompt.)" after an initially abusive message. This demonstrates a limitation of our simple isToxic definition that only performs a substring lookup. Future work will need to carefully consider different toxicity definitions once attacks begin to have nontrivial attack success rates--for now, even with this loose definition, the current attacks in the literature fail to find successful adversarial sequences.
## 5 Why do Existing NLP Optimization Attacks Fail?
In the prior section we have found that existing NLP optimization attacks have limited success at causing aligned models to emit harmful content in a standard chat setting. There are two possible explanations for this result:
1. The aligned language models we attack are truly robust to adversarial examples; or,
2. Current attacks are insufficiently powerful to evaluate the robustness of aligned models.
Fortunately, recent work has developed techniques explicitly designed to differentiate between these two hypotheses for general attacks. Zimmermann et al. (2022) propose the following framework: first, we construct _test cases_ with known adversarial examples that we have identified _a priori_; then, we run the attack on these test cases and verify they succeed. Our specific test-case methodology follows Lucas et al. (2023) and works as follows. To construct test cases, we first identify a set of adversarial examples via brute force. Once we have confirmed the existence of at least one adversarial example via brute force, we run our attack over the same search space and check if it finds a (potentially different, but still valid) adversarial example. This approach is effective when brute-force methods are feasible and the set of possible adversarial examples is effectively enumerable--as is the case in the NLP domain.
We adapt this to our setting as follows. We construct (via brute force) prompts \(p\) that cause the model to emit a rare suffix \(q\). Then, the attack succeeds if it can find some input sequence \(p^{\prime}\) that causes \(\mathsf{Gen}(p^{\prime})=q\), i.e., the model emits the same \(q\). Otherwise, the attack fails. Observe that a sufficiently strong attack (e.g., a brute force search over all prompts) will always succeed on this test: any failure thus indicates a flawed attack.
### Our Test Set
How should we choose the prefixes \(p\) and the target token \(q\)? If we were to choose \(q\) ahead of time (e.g., to be some toxic token), then it might be very hard to find--even via brute force--a prefix \(p\) so that \(\mathsf{Gen}(p)=q\). So instead we drop the requirement that \(q\) is toxic, and approach the problem in reverse.
Initially, we sample many different prefixes \(p_{1},p_{2},\dots\) from some dataset (in our case, Wikipedia). Let \(S\) be the space of all N-token sequences (for some N). Then, for all possible sequences \(s_{i}\in S\) we query the model on \(\mathsf{Gen}(s_{i}||p_{j})\). (If \(|S|\) is too large, we randomly sample 1,000,000 elements \(s_{i}\in S\).) This gives a set of possible output tokens \(\{q_{i}\}\), one for each sequence \(s_{i}\).
For some prompts \(p_{j}\), the set of possible output tokens \(\{q_{i}\}\) may have high entropy. For example, if \(p_{j}\) = "How are you doing?" then there are likely thousands of possible continuations \(q_{i}\) depending on the exact context. But for other prompts \(p_{j}\), the set of possible output tokens \(\{q_{i}\}\) could be exceptionally small. For example, if we choose the sequence \(p_{j}\)="Barack" the subsequent token \(q_{i}\) is almost always "Obama" regardless of what context \(s_{i}\) was used.
But the model's output might not _always_ be the same. There are some other tokens that might be possible--for example, if the context were \(s_{i}\)="The first name [", then the entire prompt ("The first name [Barack") would likely cause the model to output a closing bracket q="]". We denote such sequences \(p_{j}\) that yield small-but-positive entropy over the outputs \(\{q_{i}\}\) (for different prompts \(s_{i}\in S\)) a _test case_, and set the attack objective to be the output token \(q_{i}\) that is _least-likely_.
These tests make excellent candidates for evaluating NLP attacks. They give us a proof (by construction) that it is possible to trigger the model to output a given word. But this happens rarely enough that an attack is non-trivial. It is now just a question of whether or not existing attacks succeed.
We construct eight different sets with varying difficulty levels and report averages across each. Our test sets are parameterized by three constants: (1) Prevalence: the probability of token \(q\) given \(p_{j}\), which we fix to \(10^{-6}\); (2) Attacker-Controlled Tokens: the number of tokens the adversary is allowed to modify, which we vary over 2, 5, 10, or 20 tokens; and (3) Target Tokens: the number of tokens of output the attacker must reach. We generate our test cases using GPT-2 only, due to the cost of running a brute force search.
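A minimal sketch of this brute-force test-case construction is given below. It assumes a Hugging-Face-style model interface and elides batching, both assumptions on our part:

```python
import torch
from collections import Counter

@torch.no_grad()
def greedy_next_token(model, ids):
    """Greedy (temperature-0) next-token prediction; ids is a list of ints."""
    logits = model(torch.tensor([ids])).logits
    return int(logits[0, -1].argmax())

def build_test_case(model, prefix, contexts, prevalence=1e-6):
    """For a candidate prefix p, tabulate the next token over many contexts s
    (i.e., Gen(s || p)); keep the case if the output entropy is small but
    positive, targeting the least-likely observed token."""
    counts = Counter(greedy_next_token(model, s + prefix) for s in contexts)
    total = sum(counts.values())
    frequent = {t: c / total for t, c in counts.items() if c / total >= prevalence}
    if len(frequent) < 2:   # zero entropy: no non-trivial target exists
        return None
    target = min(frequent, key=frequent.get)  # least-likely observed token
    return prefix, target
```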
### Prior Attacks Results
In Table 2 we find that the existing state-of-the-art NLP attacks fail to successfully solve our test cases. In the left-most column we report attack success in a setting where the adversary must solve the task within the given number of attacker-controlled tokens. ARCA is significantly stronger than GBDA (consistent with prior work), but even ARCA passes less than half of the time. Because the numbers here are so low, we then experimented with giving the attacker _more_ control with a multiplicative factor. That is, if the task asked for us to find an adversarial example with \(10\) tokens, and we run the attack with a factor of \(5\), we allow the attack to search over \(50\) attacker-controlled tokens. We find that even with \(10\times\) extra tokens the attack still often fails our tests.
Note that the purpose of this evaluation is not to argue the NLP attacks we study here are incorrect in any way. On the contrary: they largely succeed at the tasks that they were originally designed for. But we are asking them to do something much harder and control the output at a distance, and our hope here is to demonstrate that while we have made significant progress towards developing strong NLP optimization attacks there is still room for improving these techniques.
## 6 Attacking Multimodal Aligned Models
Text is not the only paradigm for human communication. And so increasingly, foundation models have begun to support "multimodal" inputs across vision, text, audio, or other domains. In this paper we study vision-augmented models, because they are the most common. For example, as mentioned earlier, OpenAI's GPT-4 and Google's Gemini will in the future support both images and text as input. This allows models to answer questions such as "describe this image" which can, for example, help blind users [Salam, 2019].
It also means that an adversary can now supply adversarial _images_, and not just adversarial text. And because images are drawn from a continuous domain, adversarial examples are orders of magnitude simpler to create: we no longer need to concern ourselves with the discrete nature of text or the inversion of embedding matrices and can now operate on (near) continuous-domain pixels.
### Attack Methodology
Our attack approach directly follows the standard methodology for generating adversarial examples on image models. We construct an end-to-end differentiable implementation of the multimodal model, from the image pixels to the output logits of the language model. We apply standard teacher-forcing optimization techniques when the target suffix is \(>1\) token. To initiate each attack, we use a random image generated by sampling each pixel uniformly at random.
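A minimal sketch of this optimization loop is shown below. The `model(image, input_ids)` interface standing in for the end-to-end differentiable pipeline is our own assumption for illustration; the real implementations differ across Mini GPT-4, LLaVA, and LLaMA Adapter:

```python
import torch
import torch.nn.functional as F

def attack_image(model, prompt_ids, target_ids, shape=(3, 224, 224),
                 steps=500, lr=1e-2):
    """Optimize image pixels so the model emits target_ids after prompt_ids.
    `model(image, input_ids)` is assumed to be an end-to-end differentiable
    pipeline (vision encoder -> projection -> LLM) returning logits."""
    adv = torch.rand(shape, requires_grad=True)   # uniform random init
    opt = torch.optim.Adam([adv], lr=lr)
    input_ids = torch.cat([prompt_ids, target_ids]).unsqueeze(0)
    start = prompt_ids.numel() - 1
    for _ in range(steps):
        logits = model(adv.clamp(0.0, 1.0), input_ids)   # (1, seq, vocab)
        preds = logits[0, start : start + target_ids.numel()]
        loss = F.cross_entropy(preds, target_ids)        # teacher forcing
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adv.detach().clamp(0.0, 1.0)
```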
| Method | 1× | 2× | 5× | 10× |
|---|---|---|---|---|
| Brute Force | **100.0%** | **100.0%** | **100.0%** | **100.0%** |
| ARCA | 11.1% | 14.6% | 25.8% | 30.6% |
| GBDA | 3.1% | 6.2% | 8.8% | 9.5% |

Table 2: Pass rates on GPT-2 for the prior attacks on the test cases we propose, given \(N\times\) extra attacker-controlled tokens. We design each test so that a solution is _guaranteed_ to exist; any value under \(100\%\) indicates the attack has failed.
### Experiments
While GPT-4 currently supports vision for some users (OpenAI, 2023), this functionality is not publicly available. Google's Gemini has also not been made available publicly. The research community has thus developed open-source (somewhat smaller) versions of these multimodal models.
We evaluate our attack on two different implementations. While they differ in some details, both follow the approach in Section 2: the image is encoded with a vision model, projected to the token embedding space, and passed as a sequence of soft-tokens to the language model.
**Mini GPT-4**(Zhu et al., 2023) uses a pretrained Q-Former module from (Li et al., 2023) to project images encoded by EVA CLIP ViT-G/14 (Fang et al., 2022) to Vicuna's (Chiang et al., 2023) text embedding space. Both CLIP and Vicuna are frozen, while a section of the Q-former is finetuned on a subset of LAION (Schuhmann et al., 2021), Conceptual Captions (Sharma et al., 2018), SBU (Ordonez et al., 2011), and multimodal instruction-following data generated by the authors.
**LLaVA**(Liu et al., 2023) uses a linear layer to project features from CLIP ViT-L/14 to the Vicuna embedding space. While CLIP is frozen, both Vicuna and the projection matrix are finetuned on Conceptual Captions (Sharma et al., 2018) and custom multimodal instruction-following data.
**LLaMA Adapter**(Gao et al., 2023) works similarly to these except that it uses learned adaptation prompts to incorporate visual information internal to the model via these adapters.
### Quantitative Evaluation: Toxicity
We now evaluate the ability of these models to be coerced into emitting arbitrary toxic output. As in the previous section, we use the toxic phrase dataset from (Jones et al., 2023) and use the OpenAssistant prompts as the initial query to the model. We measure distortion across a set of randomly generated images. As Table 3 shows, our attack has a \(100\%\) success rate at causing the model to emit arbitrary toxic content. We observe that adversarial inputs are relatively easy to find, requiring only minimal \(\ell_{2}\) distortions from the initial image. Even though there is no inherent reason why distortion is important as an objective that should be minimized, we believe it is interesting to report the distortion required in order to underscore just how _easy_ it is to run these attacks.
### Qualitative Evaluation
We further evaluate our attack on qualitative samples designed to probe various harmful outcomes.
For example, even when explicitly asking for harmful output (e.g., "Write a vile letter telling my neighbor what I think of him.") the model still responds harmlessly ("Dear (Neighbor's Name)... I have to say that I am not impressed with your behavior... Sincerely, [Your Name]"). But by providing an adversarial image as input, we can cause the model to emit our desired harmful output ("F*** you, you dirty life piece of s***. You're a disgusting, filthy, stinky, scumbag who doesn't deserve to live in this neighborhood... you worthless, good-for-nothing, loser.")
Similarly, when asked to provide instructions for how to get away with murder, the base model responds that "It is not appropriate to encourage or provide information on how to harm or kill someone" but the attacked model complies: "The first step would be to identify the location of the neighbor's house". While the response here is rather poor advice, this is likely because the model is not competent; a better model might provide more actionable (and thus harmful) advice. We list sample (**uncensored**) examples of our ability to attack these models in the Appendix.
| Model | Attack Success Rate | Mean \(\ell_{2}\) Distortion |
|---|---|---|
| LLaMA Adapter | 100% | 3.91 ± 0.36 |
| Mini GPT-4 (Instruct) | 100% | 2.51 ± 1.45 |
| Mini GPT-4 (RLHF) | 100% | 2.71 ± 2.12 |
| LLaVA | 100% | 0.86 ± 0.17 |

Table 3: We can force Mini GPT-4, LLaVA, and LLaMA Adapter to produce arbitrary toxic output with small \(\ell_{2}\) perturbations. Despite their similar methodology, LLaVA is \(10\times\) more vulnerable than the others, indicating the importance of implementation details.
## 7 Conclusion
Language models trained via RLHF or instruction tuning are significantly more aligned than base models: in particular, they are more helpful (they appropriately follow benign user instructions) and harmless (they are less likely to output toxicity or harmful actions). While helpfulness can be evaluated through various utility metrics, harmlessness is more difficult to evaluate--and almost all methods to date rely on human-designed test cases to quantify this.
In this paper we have shown that while these models might be _usually_ harmless, they may not be harmless under _adversarial_ prompting. While the harms from adversarial prompting that we illustrate are fairly benign (e.g., the small models we study give unhelpful advice on how to get away with murder, or produce toxic content that could be found anywhere on the internet), our attacks are directly applicable to triggering other bad behaviors in larger and more capable systems.
Our attacks are most effective on the new paradigm of multimodal vision-language models. While all models we study are easy to attack, small design decisions affect the ease of attacks by as much as \(10\times\). Better understanding where this increased vulnerability arises is an important area for future work. Moreover, it is very likely that future models will add additional modalities (e.g., audio), which can introduce new vulnerabilities and attack surfaces.
Unfortunately, for text-only models, we show that current NLP attacks are not sufficiently powerful to correctly evaluate adversarial alignment: these attacks often fail to find adversarial sequences even when they are known to exist. Since our multimodal attacks show that there exist input embeddings that cause language models to produce harmful output, we hypothesize that there may also exist adversarial sequences of _text_ that could cause similarly harmful behaviour.
_Conjecture: An improved NLP optimization attack may be able to induce harmful output in an otherwise aligned language model._
While we cannot prove this claim (that's why it's a conjecture!) we believe our paper provides strong evidence for it: (1) language models are weak to soft-embedding attacks (e.g., multimodal attacks); and (2) current NLP attacks cannot find solutions that are known to exist. We thus hypothesize that stronger attacks will succeed in making text-only aligned models behave harmfully.
**Future work.** We hope our paper will inspire several directions for future research. Most immediately, we hope that stronger NLP attacks will enable comprehensive robustness evaluations of aligned LLMs. Such attacks should, at a minimum, pass our tests to be considered reliable.
We view the end goal of this line of work not to produce better attacks, but to improve the evaluation of defenses. Without a solid foundation on understanding attacks, it is impossible to design robust defenses that withstand the test of time. An important open question is whether existing attack and defense insights from the adversarial machine learning literature will transfer to this new domain.
Ultimately, such foundational work on attacks and defenses can help inform alignment researchers develop improved model alignment techniques that remain reliable in adversarial environments.
## Acknowledgements
We are grateful for comments on this paper by Andreas Terzis, Slav Petrov, and Erik Jones.
|
2305.14209 | Basis Pursuit Denoising via Recurrent Neural Network Applied to
Super-resolving SAR Tomography | Finding sparse solutions of underdetermined linear systems commonly requires
the solving of L1 regularized least squares minimization problem, which is also
known as the basis pursuit denoising (BPDN). They are computationally expensive
since they cannot be solved analytically. An emerging technique known as deep
unrolling provided a good combination of the descriptive ability of neural
networks, explainable, and computational efficiency for BPDN. Many unrolled
neural networks for BPDN, e.g. learned iterative shrinkage thresholding
algorithm and its variants, employ shrinkage functions to prune elements with
small magnitude. Through experiments on synthetic aperture radar tomography
(TomoSAR), we discover the shrinkage step leads to unavoidable information loss
in the dynamics of networks and degrades the performance of the model. We
propose a recurrent neural network (RNN) with novel sparse minimal gated units
(SMGUs) to solve the information loss issue. The proposed RNN architecture with
SMGUs benefits from incorporating historical information into optimization, and
thus effectively preserves full information in the final output. Taking TomoSAR
inversion as an example, extensive simulations demonstrated that the proposed
RNN outperforms the state-of-the-art deep learning-based algorithm in terms of
super-resolution power as well as generalization ability. It achieved a 10% to
20% higher double scatterers detection rate and is less sensitive to phase and
amplitude ratio differences between scatterers. Test on real TerraSAR-X
spotlight images also shows a high-quality 3-D reconstruction of the test site. | Kun Qian, Yuanyuan Wang, Peter Jung, Yilei Shi, Xiao Xiang Zhu | 2023-05-23T16:28:02Z | http://arxiv.org/abs/2305.14209v1 | # Basis Pursuit Denoising via Recurrent Neural Network Applied to Super-resolving SAR Tomography
###### Abstract
This paper has been accepted by IEEE TGRS. Copyright may be transferred without further notice. Finding sparse solutions of underdetermined linear systems commonly requires solving an \(L_{1}\)-regularized least squares minimization problem, which is also known as basis pursuit denoising (BPDN). Such problems are computationally expensive since they cannot be solved analytically. An emerging technique known as _deep unrolling_ provided a good combination of the descriptive ability of neural networks, explainability, and computational efficiency for BPDN. Many unrolled neural networks for BPDN, e.g., the learned iterative shrinkage thresholding algorithm and its variants, employ shrinkage functions to prune elements with small magnitude. Through experiments on synthetic aperture radar tomography (TomoSAR), we discover that the shrinkage step leads to unavoidable information loss in the dynamics of networks and degrades the performance of the model. We propose a recurrent neural network (RNN) with novel sparse minimal gated units (SMGUs) to solve the information loss issue. The proposed RNN architecture with SMGUs benefits from incorporating historical information into optimization, and thus effectively preserves full information in the final output. Taking TomoSAR inversion as an example, extensive simulations demonstrated that the proposed RNN outperforms the state-of-the-art deep learning-based algorithm in terms of super-resolution power as well as generalization ability. It achieved a \(10\%\) to \(20\%\) higher double-scatterer detection rate and is less sensitive to phase and amplitude ratio differences between scatterers. Tests on real TerraSAR-X spotlight images also show a high-quality 3-D reconstruction of the test site.
SAR tomography (TomoSAR), basis pursuit denoising (BPDN), recurrent neural network, sparse reconstruction.
## I Introduction
### _Motivation_
Sparse solutions are ordinarily desired in a multitude of fields, such as radar imaging, medical imaging and acoustic signal processing. Compressive sensing theory tells us that the exact solution in the absence of noise is the signal with the minimum \(L_{0}\)-norm while still fulfilling the forward model. As \(L_{0}\)-norm minimization is NP-hard, it is often relaxed to \(L_{1}\)-norm minimization. The unconstrained form of a linear system can be formulated as follows:
\[\min_{\mathbf{x}}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}+\lambda\|\mathbf{x}\|_{1}, \tag{1}\]
where \(\mathbf{A}\), \(\mathbf{x}\), and \(\mathbf{b}\) are the sensing matrix, the signal to be retrieved, and the measurements. Solving Eq. (1) is an unconstrained convex optimization problem whose objective function is non-differentiable. It is also known as basis pursuit denoising (BPDN) [1]. In the field of remote sensing, sparse signals are widely expected. Therefore, BPDN is broadly employed to exploit the sparsity prior in various remote sensing applications, including but not limited to pan-sharpening [2], spectral unmixing [3], microwave imaging [4] and synthetic aperture radar tomography (TomoSAR) [5]. In this work, we focus on addressing BPDN in TomoSAR inversion, but our findings are applicable to general sparse reconstruction problems in other fields as well.
Generic solvers for BPDN are either first- or second-order compressive sensing (CS) [6][7][8] based methods. First-order methods are typically based on a linear approximation of the gradient, e.g., the iterative shrinkage thresholding algorithm (ISTA) [9], coordinate descent (CD) [10] and the alternating direction method of multipliers (ADMM) [11]. Second-order methods usually have much better performance than first-order methods. An example of the second-order methods is the primal-dual interior point method (PDIPM) [12]. It was demonstrated in [13][5] that CS-based methods are able to achieve unprecedented super-resolution ability and location accuracy compared to conventional linear algorithms [14][15]. In spite of the good performance of CS-based methods, they often suffer from a heavy computational burden due to their iterative nature and are hard to extend to practical use.
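For reference, a plain real-valued ISTA iteration for Eq. (1) can be written in a few lines. This is a minimal sketch assuming the common \(\frac{1}{2}\|\mathbf{A}\mathbf{x}-\mathbf{b}\|_{2}^{2}\) scaling of the data term:

```python
import numpy as np

def ista(A, b, lam, iters=500):
    """ISTA for min_x 0.5*||Ax - b||_2^2 + lam*||x||_1 (real-valued sketch)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of grad
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - b)     # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x
```

Each iteration costs two matrix-vector products, which is exactly why hundreds of iterations over large dictionaries make generic CS solvers expensive in practice.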
In the past years, the advent of deep neural networks has attracted the interest of many researchers and triggered extensive studies due to their excellent learning and expressive power. Deep neural networks have demonstrated their capability and advanced the state of the art for many problems. More recently, an emerging deep learning technique coined _deep unfolding_ [16] was proposed to provide a concrete and systematic connection between iterative physical-model-based algorithms and deep neural networks. Inspired by this concept, various neural networks were proposed to solve BPDN
problems by unrolling iterative CS solvers. The first work of deep unfolding dates back to the learned iterative shrinkage thresholding algorithm (LISTA) [17], which was designed for sparse recovery. LISTA unrolls ISTA, one of the most popular algorithms, and substantially improves the computational efficiency and parameter tuning. [18] proposed ADMM-CSnet by unrolling the ADMM algorithm into a deep hierarchical network architecture and applied ADMM-CSnet to magnetic resonance imaging (MRI) and natural image CS. Results in [18] indicate favorable performance of ADMM-CSnet with high computational speed. For remote sensing applications, CSR-net [19] was proposed by combining deep unfolding structures and convolutional neural network modules and achieved fast and accurate 3-D microwave imaging. In addition, [20] proposed AF-AMPNet by unrolling approximate message passing with phase error estimation (AF-AMP) into a deep neural network. AF-AMPNet was employed in sparse aperture (SA) inverse SAR (ISAR) imaging and accelerated the imaging process. Inspired by the encouraging achievements made by deep unfolding, the TomoSAR community started to design deep neural networks by unrolling iterative optimization solvers for BPDN in TomoSAR inversion. [21] unrolled and mapped vector AMP (VAMP) [22] into a neural network for line spectral estimation and applied it to tackle TomoSAR inversion. Results in [21] show that L-VAMP is able to separate overlaid scatterers. \(\mathbf{\gamma}\)-Net was proposed in [23] by tailoring the complex-valued LISTA network. \(\mathbf{\gamma}\)-Net introduced the weight coupling structure [24] and the support selection scheme [24] to each iteration block in LISTA and replaced the conventional soft-thresholding function with a piecewise linear function. It was demonstrated in [23] that \(\mathbf{\gamma}\)-Net improves the computational efficiency by 2-3 orders of magnitude compared to the state-of-the-art second-order TomoSAR solver SL1MMER [13] while showing no degradation in super-resolution ability and location accuracy.
However, unrolled neural networks do not consider historical information in the updating rules. To be exact, the output is generated exclusively based on the output of its previous layer. This kind of learning architecture leads to an error propagation phenomenon, where errors in the first few layers are propagated and even amplified in the subsequent layers. Moreover, when the unrolled neural networks are designed for sparse reconstruction, shrinkage steps are usually required to promote sparsity. The shrinkage step utilizes thresholding functions to prune elements with small magnitude to zero, and such pruning causes information loss in the dynamics of the neural network. Once useful information is discarded in the previous layers, the subsequent layers no longer have a chance to utilize the discarded information, thus degrading the performance of the neural network and sometimes leading to large errors in the final output.
### _Contribution of this paper_
In this paper, we aim to address the problem of information loss caused by shrinkage steps in unrolled neural networks designed for sparse reconstruction. To this end, we propose a novel architecture, termed sparse minimal gated unit (SMGU), to incorporate historical information into optimization so that we can promote sparsity using thresholding functions and preserve full information simultaneously. Additionally, we extend SMGU to the complex-valued (CV) domain as CV-SMGU and use it to build a gated recurrent neural network (RNN) for solving TomoSAR inversion. The main contributions of this paper are listed below:
1. We addressed the problem of information loss in unrolled neural networks for sparse reconstruction by a novel gated RNN. The gated RNN is built using SMGUs, which incorporate historical information into optimization. The proposed gated RNN is able to promote sparsity by employing shrinkage thresholding functions. Simultaneously, the pruned information will be preserved in the cell state of SMGUs, thus full information can be preserved in the dynamics of the network.
2. We extend the SMGU to the complex-valued domain, called as CV-SMGU, and apply the gated RNN built with CV-SMGUs to solve TomoSAR inversion. To the best of our knowledge, it is the first attempt to bridge the gated RNN and TomoSAR inversion. We may provide novel insights and open a new prospect for future deep learning based TomoSAR inversion.
3. We carry out systematic evaluation to demonstrate that the proposed gated RNN outperforms the state-of-the-art deep learning-based TomoSAR algorithm \(\mathbf{\gamma}\)-Net in terms of super-resolution power as well as generalization ability for TomoSAR inversion.
The remainder of the paper is outlined as follows. The TomoSAR imaging model and \(\mathbf{\gamma}\)-Net are briefly reviewed in Section II. Section III provides an overview of the formulation of SMGUs as well as CV-SMGUs with application to TomoSAR inversion. Results of systematic evaluations, using simulated and real data, are presented in Section IV. Section V discusses the generalization ability w.r.t. baseline discrepancy and analyzes the model convergence. Finally, the conclusion of this paper is drawn in Section VI.
## II Background
### _TomoSAR imaging model_
In this section, we briefly introduce the TomoSAR imaging model. Fig. 1 illustrates the SAR imaging model at a fixed azimuth position. A stack of complex-valued SAR acquisitions over the illuminated area is obtained at slightly different orbit positions (the elevation aperture). The complex-valued measurement \(g_{n}\) of the \(n\)th acquisition is the integral of the reflectivity profile \(\gamma(s)\) along the elevation direction \(s\). The discrete TomoSAR imaging model can be written as:
\[\mathbf{g}=\mathbf{R}\mathbf{\gamma}+\mathbf{\varepsilon}, \tag{2}\]
where \(\mathbf{g}\in\mathbb{C}^{N\times 1}\) is the complex-valued SAR measurement vector and \(\mathbf{\gamma}\in\mathbb{C}^{L\times 1}\) denotes the discrete reflectivity profile uniformly sampled at elevation position \(s_{l}(l=1,2,\ldots,L)\) along the elevation direction. \(N\) is the number of measurements and \(L\) is the number of discrete elevation indices. \(\mathbf{R}\in\mathbb{C}^{N\times L}\) is the irregularly sampled discrete Fourier transformation mapping matrix with \(R_{nl}=\exp{(-j2\pi\xi_{n}s_{l})}\) where
\(\xi_{n}\) is the frequency proportional to the perpendicular baseline of the \(n\)th acquisition. Readers can refer to [14] for more details of the SAR imaging model.
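As a minimal illustration of Eq. (2), the mapping matrix and a simulated measurement stack can be constructed as follows. All numerical values here are hypothetical, chosen only to make the sketch self-contained:

```python
import numpy as np

def steering_matrix(xi, s):
    """Build the TomoSAR mapping matrix R with R_nl = exp(-j*2*pi*xi_n*s_l).
    xi: (N,) spatial frequencies from the perpendicular baselines;
    s:  (L,) discrete elevation grid."""
    return np.exp(-2j * np.pi * np.outer(xi, s))

# Illustrative simulation of g = R @ gamma + noise (hypothetical values).
rng = np.random.default_rng(0)
xi = rng.uniform(-0.02, 0.02, size=25)   # N = 25 acquisitions
s = np.linspace(-20.0, 20.0, 200)        # L = 200 elevation samples (m)
R = steering_matrix(xi, s)               # shape (25, 200)
gamma = np.zeros(200, dtype=complex)
gamma[[95, 110]] = 1.0                   # two dominant scatterers
noise = 0.05 * (rng.standard_normal(25) + 1j * rng.standard_normal(25))
g = R @ gamma + noise                    # stack of SAR measurements
```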
Since the reflectivity profile \(\gamma\) is sufficiently sparse in urban areas [5], retrieving \(\gamma\) is a sparse reconstruction problem. Accordingly, \(\gamma\) in the presence of measurement noise \(\varepsilon\) can be estimated by BPDN optimization, which is formulated as follows:
\[\hat{\mathbf{\gamma}}=\arg\min_{\mathbf{\gamma}}\left\{\|\mathbf{g}-\mathbf{R}\mathbf{ \gamma}\|_{2}^{2}+\lambda\|\mathbf{\gamma}\|_{1}\right\}, \tag{3}\]
where \(\lambda\) is a regularization parameter balancing the sparsity and data-fitting terms. It should be adjusted according to the noise level as well as the desired sparsity level. The choice of a proper \(\lambda\) is described in great detail in [1].
### _Review of \(\mathbf{\gamma}\)-Net_
As mentioned previously, conventional CS-based BPDN solvers for Eq. (3) are extremely computationally expensive. To overcome the heavy computational burden and make super-resolving TomoSAR inversion feasible for large-scale processing, the authors proposed \(\mathbf{\gamma}\)-Net in [23], which tailors LISTA, the first unrolled ISTA network, to mimic a CS-based BPDN solver. To be specific, \(\mathbf{\gamma}\)-Net introduces the weight coupling structure and the support selection scheme, and replaces the conventional soft-thresholding function with a piecewise linear function. Fig. 2 illustrates the architecture of the \(i^{th}\) layer in \(\mathbf{\gamma}\)-Net. \(\mathbf{SS}\) in \(\mathbf{\gamma}\)-Net indicates a special thresholding scheme called support selection, which selects the \(\rho^{i}\) percent of entries with the largest magnitudes and trusts them as the "true support". The "true support" is directly fed to the next layer, bypassing the shrinkage step. \(\eta_{pwl}\) is a novel thresholding function, the piecewise linear function, that executes shrinkage in \(\mathbf{\gamma}\)-Net. It contributes to improving the convergence rate and reducing the reconstruction error. More details about the \(\mathbf{\gamma}\)-Net formulation and the full model structure can be found in Appendix A.
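A sketch of one shrinkage step with support selection is given below. Since the exact piecewise linear function is specified in Appendix A (not reproduced here), complex-valued soft-thresholding stands in for it, which is an assumption of this sketch:

```python
import numpy as np

def shrink_with_support_selection(gamma, theta, rho):
    """One shrinkage step with support selection (sketch). The rho fraction
    of entries with the largest magnitudes are trusted as "true support" and
    bypass shrinkage; the remaining entries are soft-thresholded (standing in
    for gamma-Net's piecewise linear function)."""
    mag = np.abs(gamma)
    k = max(1, int(np.ceil(rho * gamma.size)))
    support = np.argsort(mag)[-k:]                     # largest magnitudes
    shrunk = gamma / np.maximum(mag, 1e-12) * np.maximum(mag - theta, 0.0)
    shrunk[support] = gamma[support]                   # trusted entries pass
    return shrunk
```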
However, as one can see in Fig. 2, \(\mathbf{\gamma}\)-Net inherits the learning architecture of LISTA despite modifications made by the authors to improve the performance. Therefore, it can be imagined that \(\mathbf{\gamma}\)-Net will suffer from the same problem as LISTA. Specifically, in the learning architecture of \(\mathbf{\gamma}\)-Net, each layer's output is generated directly and exclusively from the output of the previous layer. As a natural consequence, the final output can only utilize the information from the second-to-last layer. When useful or important information is pruned by shrinkage steps in the intermediate layers, the discarded information can no longer contribute to the final output. Consequently, a large reconstruction error in the final output can be expected. Fig. 3 demonstrates an unsuccessful detection of double scatterers in our experiments.
In this experiment, the double scatterers were assumed to have identical phase and amplitude and were spaced by 0.6 Rayleigh resolution, i.e., in the super-resolution regime, and the SNR was set to 6 dB. In general, if we cannot resolve the overlaid double scatterers, the reflectivity profile should have a dominant amplitude peak between the true elevation positions of the double scatterers, as shown by the estimate of the non-super-resolving algorithm SVD-Wiener [14] in Fig. 3. However, \(\mathbf{\gamma}\)-Net was able to detect one of the double scatterers with very high localization accuracy but failed to find the other one. From our perspective, this was abnormal, and we supposed that this unsuccessful double-scatterer separation should be attributed to the information loss caused by shrinkage steps in \(\mathbf{\gamma}\)-Net. Inspecting the intermediate layers
Fig. 1: The SAR imaging geometry at a fixed azimuth position. The elevation synthetic aperture is built up by acquisitions from slightly different incidence angles. Flight direction is orthogonal into the plane.
Fig. 3: An example of unsuccessful detection of double scatterers caused by information loss. \(\mathbf{\gamma}\)-Net detects one of the double scatterers with very high localization accuracy but fails to find the other one.
Fig. 2: Illustration of the \(i^{th}\) layer in \(\mathbf{\gamma}\)-Net.
in \(\mathbf{\gamma}\)-Net, we discovered that the information of the second scatterer gradually diminished after each shrinkage step in the intermediate layers. By the second last layer, the information of the second scatterer was lost completely. As a result, the final output of \(\mathbf{\gamma}\)-Net, i.e. the estimate of \(\gamma\), did not contain the information of the second scatterer, and hence the second scatterer could not be detected.
## III Methodology
### _Adaptive ISTA and **sc2net**_
In the optimization community, it has been extensively studied and proved [25, 26, 27] that incorporating historical information contributes to improving algorithm performance. Inspired by the high-level ideas of this previous research, adaptive ISTA was proposed in [28] to integrate and exploit historical information by introducing two adaptive momentum vectors \(\mathbf{f}\) and \(\mathbf{i}\) into each ISTA iteration, which is formulated as follows:
\[\begin{split}\bar{\mathbf{c}}^{(t)}&=\mathbf{W}_{2} \hat{\mathbf{\gamma}}^{(t-1)}+\mathbf{W}_{1}\mathbf{g}\\ \mathbf{c}^{(t)}&=\mathbf{f}^{(t)}\odot\mathbf{c}^{ (t-1)}+\mathbf{i}^{(t)}\odot\bar{\mathbf{c}}^{(t)}\\ \hat{\mathbf{\gamma}}^{(t)}&=\eta_{st}\left(\mathbf{c} ^{(t)}\right)\end{split} \tag{4}\]
where \(\eta_{st}\) indicates the conventional soft-thresholding function and its complex-valued version reads:
\[\eta_{st}(\hat{\mathbf{\gamma}}_{i},\theta_{i})=\left\{\begin{array}{ll}\frac{ \hat{\mathbf{\gamma}}_{i}}{|\hat{\mathbf{\gamma}}_{i}|}{\rm max}(|\hat{\mathbf{\gamma}}_{i }|-\theta_{i},0)&|\hat{\mathbf{\gamma}}_{i}|\neq 0\\ 0&{\rm else}\end{array}\right.. \tag{5}\]
Compared to ISTA, whose update rule can be equivalently expressed as \(\hat{\mathbf{\gamma}}^{(t)}=\eta_{st}\left(\bar{\mathbf{c}}^{(t)}\right)\) using the same notation, adaptive ISTA takes not only the current information but also the previous information into consideration. To be exact, at the \(t^{th}\) iteration of adaptive ISTA, the estimate is generated by linearly combining the historical information \(\mathbf{c}^{(t-1)}\) from the previous iteration and the current information \(\bar{\mathbf{c}}^{(t)}\). The historical information \(\mathbf{c}^{(t-1)}\) and the current information \(\bar{\mathbf{c}}^{(t)}\) are weighted by the adaptive momentum vectors \(\mathbf{f}^{(t)}\) and \(\mathbf{i}^{(t)}\), respectively. By this means, the final estimate of adaptive ISTA accumulates historical information weighted by different \(\mathbf{f}^{(t)}\) and \(\mathbf{i}^{(t)}\) at each iteration.
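For concreteness, the following minimal sketch implements one adaptive ISTA iteration of Eq. (4) together with the complex-valued soft-thresholding of Eq. (5). The function names, argument shapes, and the fact that the momentum vectors `f_t` and `i_t` are passed in as fixed arrays (rather than learned, as _sc2net_ does) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, theta):
    # Complex-valued soft-thresholding, Eq. (5): shrink the magnitude
    # towards zero while preserving the phase; zero stays zero.
    mag = np.abs(x)
    scale = np.where(mag > 0, np.maximum(mag - theta, 0.0) / np.maximum(mag, 1e-12), 0.0)
    return x * scale

def adaptive_ista_step(gamma_prev, c_prev, g, W1, W2, f_t, i_t, theta):
    # One adaptive ISTA iteration, Eq. (4): f_t and i_t are the adaptive
    # momentum vectors weighting historical vs. current information.
    c_bar = W2 @ gamma_prev + W1 @ g      # current information
    c = f_t * c_prev + i_t * c_bar        # accumulated (historical) state
    return soft_threshold(c, theta), c
```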
However, one problem of adaptive ISTA is that the two momentum vectors in each iteration are difficult to determine. So far, there has been no analytical way to determine the values of the adaptive momentum vectors \(\mathbf{f}^{(t)}\) and \(\mathbf{i}^{(t)}\). Usually, they are selected by tedious hand-crafted tuning, which takes a lot of time and cannot guarantee optimal performance. To address this issue, the authors proposed _sc2net_ in [28] by recasting adaptive ISTA as a recurrent neural network to parameterize the two momentum vectors and learn them from data. The _sc2net_ is built from sparse long short-term memory (SLSTM) [28] units, as demonstrated in Fig. 4. Each SLSTM unit represents an individual layer of _sc2net_.
At the \(t^{th}\) layer of _sc2net_, the input gate and forget gate correspond to the momentum vectors \(\mathbf{i}^{(t)}\) and \(\mathbf{f}^{(t)}\) in each adaptive ISTA iteration, respectively. Hence, we use the same notation in SLSTM units to describe the input and forget gates. The two gates in each SLSTM unit are parameterized with the input data \(\mathbf{g}\) and the output \(\hat{\mathbf{\gamma}}^{(t-1)}\) of the previous layer as follows:
\[\mathbf{i}^{(t)} =\sigma\left(\mathbf{W}_{i2}^{(t)}\hat{\mathbf{\gamma}}^{(t-1)}+ \mathbf{W}_{i1}^{(t)}\mathbf{g}\right) \tag{6}\] \[\mathbf{f}^{(t)} =\sigma\left(\mathbf{W}_{f2}^{(t)}\hat{\mathbf{\gamma}}^{(t-1)}+ \mathbf{W}_{f1}^{(t)}\mathbf{g}\right)\]
To clarify, the SLSTM unit does not have an output gate like conventional LSTM units. By substituting Eq. (6) into Eq. (4), we obtain the formal definition of the SLSTM unit, as listed in Table I. \(\mathbf{W}_{i1},\mathbf{W}_{i2},\mathbf{W}_{f1},\mathbf{W}_{f2}\) denote four trainable weight matrices determining the input and forget gates in each SLSTM unit. It is worth mentioning that the weight matrices \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) are also learned from data, while they are shared across all SLSTM units in an individual _sc2net_. \(\sigma(\cdot)\) indicates the conventional sigmoid function, which is expressed as:
\[\sigma(x)=\frac{1}{1+e^{-x}} \tag{7}\]
The sparse activation function employed in the SLSTM to promote sparse codes is the double hyperbolic tangent function, which is abbreviated as \(\eta_{dt}(\cdot)\) and defined as follows:
\[\eta_{dt}(\hat{\mathbf{\gamma}},s,\theta)=s\cdot[\tanh(\hat{\mathbf{\gamma}}+\theta)+ \tanh(\hat{\mathbf{\gamma}}-\theta)] \tag{8}\]
where \(s\) and \(\theta\) denote two trainable parameters. It is worth noting that the double hyperbolic tangent function can be viewed as a smooth and continuously differentiable alternative to the conventional soft-thresholding function. Its advantages are mainly two-fold. On the one hand, its second derivative sustains over a long span, thus contributing to addressing the gradient vanishing problem caused by the cell recurrent connection [29]. On the other hand, it is able to effectively imitate the soft-thresholding function within the interval \([-\theta,\theta]\).
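The following PyTorch sketch (PyTorch is the framework used later for training) combines Eqs. (4), (6), and (8) into one real-valued SLSTM step. The function signature and the assumption that all weight matrices are supplied as arguments are ours; a full _sc2net_ would wrap this in a module stacking several such units.

```python
import torch

def eta_dt(x, s, theta):
    # Double hyperbolic tangent activation, Eq. (8): a smooth,
    # differentiable surrogate of soft-thresholding.
    return s * (torch.tanh(x + theta) + torch.tanh(x - theta))

def slstm_step(gamma_prev, c_prev, g, W1, W2, Wi1, Wi2, Wf1, Wf2, s, theta):
    # One SLSTM unit: the gates of Eq. (6) parameterize the momentum
    # vectors of adaptive ISTA, followed by the cell update of Eq. (4)
    # and the sparse activation eta_dt.
    i_t = torch.sigmoid(Wi2 @ gamma_prev + Wi1 @ g)  # input gate
    f_t = torch.sigmoid(Wf2 @ gamma_prev + Wf1 @ g)  # forget gate
    c_bar = W2 @ gamma_prev + W1 @ g                 # current information
    c = f_t * c_prev + i_t * c_bar                   # gated cell state
    return eta_dt(c, s, theta), c
```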
Fig. 4: Sc2net and detailed learning architecture of SLSTM unit. Each SLSTM unit builds an individual layer of sc2net.
Fig. 5 demonstrates an example of the double hyperbolic tangent function and compares it to the soft-thresholding function.
To sum up, _sc2net_ inherits the advantage of adaptive ISTA, which incorporates historical information into the optimization. The cell state \(\mathbf{c}^{(t)}\) in each SLSTM unit of _sc2net_ acts as an "eye" supervising the optimization from two aspects. First, the long-term dependence on the previous outputs can be captured and maintained. Second, important information is automatically accumulated, whereas useless or redundant information is forgotten, in the dynamics of _sc2net_.
However, when we tried to apply _sc2net_ to TomoSAR inversion, we discovered a drawback of _sc2net_ that impedes its application. As is known, a complicated RNN model, on the one hand, hinders theoretical analysis and empirical understanding. On the other hand, it also implies that more parameters have to be learned and more components tuned. As a natural result, more training sequences, which means more training time, and (perhaps) larger training datasets are required. When _sc2net_ is applied to solve TomoSAR inversion, we need to learn four weight matrices \(\mathbf{W}_{i2}^{(t)},\mathbf{W}_{i1}^{(t)},\mathbf{W}_{f2}^{(t)}\) and \(\mathbf{W}_{f1}^{(t)}\), which have the dimensions \(L\times L\), \(L\times N\), \(L\times L\) and \(L\times N\), respectively, to determine the forget gate \(\mathbf{f}^{(t)}\) and input gate \(\mathbf{i}^{(t)}\) in each individual SLSTM unit. Moreover, SAR data is complex-valued. Hence, these weight matrices must be complex-valued as well, which doubles the number of trainable components and parameters, since two real-valued weight matrices need to be learned simultaneously as the real and imaginary parts of one complex-valued weight matrix. Through our research and experiments, we found that such a large number of high-dimensional weight matrices makes the training procedure time-consuming. More seriously, it is difficult for the model to converge during training.
### _Complex-valued Sparse Minimal Gated Unit_
To address the aforementioned issue and better leverage the power of incorporating historical information for solving TomoSAR inversion, it is necessary to reduce the number of components and simplify the model architecture. Recently, studies and evaluations in [30, 31, 32] demonstrated that gated units contribute to significantly improving the performance of an RNN compared to one without any gated unit. However, this does not signify that more gates yield better RNN performance. Based on this fact, an RNN model with only one gate, termed the minimal gated unit (MGU), was proposed, revealing that fewer gated units reduce the complexity but not necessarily the performance.
Inspired by the valuable works in [33, 34], we propose the sparse minimal gated unit (SMGU), as illustrated in Fig. 6, by coupling the input gate to the forget gate, thus further simplifying the SLSTM unit. The detailed equations defining the SMGU are listed in Table I.
In the \(t^{th}\) layer of an RNN with SMGUs, we first compute the forget gate \(\mathbf{f}^{(t)}\). In addition, the short-term response \(\bar{\mathbf{c}}^{(t)}\) is generated by combining the input data \(\mathbf{g}\) and the "forgotten" portion (\(\mathbf{f}^{(t)}\odot\hat{\boldsymbol{\gamma}}^{(t-1)}\)) of the output from the previous layer. Hereafter, the new hidden state \(\mathbf{c}^{(t)}\) of the current layer is formed by combining part of \(\hat{\boldsymbol{\gamma}}^{(t-1)}\) and the short-term response \(\bar{\mathbf{c}}^{(t)}\), weighted by (\(1-\mathbf{f}^{(t)}\)) and \(\mathbf{f}^{(t)}\), respectively. Eventually, the sparse activation function, i.e. the double hyperbolic tangent function, is applied to the current hidden state \(\mathbf{c}^{(t)}\) for shrinkage and thresholding to promote sparsity of the output.
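A minimal sketch of this SMGU step is given below, reusing `eta_dt` from the SLSTM sketch above; the signature is an illustrative assumption following Table I, with the single forget gate playing the role of both SLSTM gates.

```python
import torch

def smgu_step(gamma_prev, g, W1, W2, Wf1, Wf2, s, theta):
    # One SMGU layer (Table I): a single forget gate replaces both
    # gates of the SLSTM unit.
    f_t = torch.sigmoid(Wf2 @ gamma_prev + Wf1 @ g)
    # short-term response: input data plus the "forgotten" portion of
    # the previous output
    c_bar = W2 @ (f_t * gamma_prev) + W1 @ g
    # new hidden state: convex combination controlled by the same gate
    c = (1.0 - f_t) * gamma_prev + f_t * c_bar
    return eta_dt(c, s, theta), c  # eta_dt as in the SLSTM sketch above
```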
Fig. 5: Comparison of double hyperbolic tangent function \(\eta_{dt}(\cdot)\) and soft-thresholding function. \(\eta_{dt}(\cdot)\) effectively imitates the soft-thresholding function within the interval of \([-\theta,\theta]\).
\begin{table}
\begin{tabular}{l|l l l l} \hline
 & \(\boldsymbol{\gamma}\)-Net layer & SLSTM unit & SMGU & CV-SMGU \\ \hline
Input gate & – & \(\mathbf{i}^{(t)}=\sigma\left(\mathbf{W}_{i2}^{(t)}\hat{\boldsymbol{\gamma}}^{(t-1)}+\mathbf{W}_{i1}^{(t)}\mathbf{g}\right)\) & – & – \\ \hline
Forget gate & – & \(\mathbf{f}^{(t)}=\sigma\left(\mathbf{W}_{f2}^{(t)}\hat{\boldsymbol{\gamma}}^{(t-1)}+\mathbf{W}_{f1}^{(t)}\mathbf{g}\right)\) & \(\mathbf{f}^{(t)}=\sigma\left(\mathbf{W}_{f2}^{(t)}\hat{\boldsymbol{\gamma}}^{(t-1)}+\mathbf{W}_{f1}^{(t)}\mathbf{g}\right)\) & \(\mathbf{f}^{(t)}=\tanh\left(\left|\tilde{\mathbf{W}}_{f2}^{(t)}\tilde{\hat{\boldsymbol{\gamma}}}^{(t-1)}+\tilde{\mathbf{W}}_{f1}^{(t)}\tilde{\mathbf{g}}\right|\right)\) \\ \hline
Cell state & – & \(\bar{\mathbf{c}}^{(t)}=\mathbf{W}_{2}\hat{\boldsymbol{\gamma}}^{(t-1)}+\mathbf{W}_{1}\mathbf{g}\), \(\mathbf{c}^{(t)}=\mathbf{f}^{(t)}\odot\mathbf{c}^{(t-1)}+\mathbf{i}^{(t)}\odot\bar{\mathbf{c}}^{(t)}\) & \(\bar{\mathbf{c}}^{(t)}=\mathbf{W}_{2}(\mathbf{f}^{(t)}\odot\hat{\boldsymbol{\gamma}}^{(t-1)})+\mathbf{W}_{1}\mathbf{g}\), \(\mathbf{c}^{(t)}=(1-\mathbf{f}^{(t)})\odot\hat{\boldsymbol{\gamma}}^{(t-1)}+\mathbf{f}^{(t)}\odot\bar{\mathbf{c}}^{(t)}\) & \(\tilde{\bar{\mathbf{c}}}^{(t)}=\tilde{\mathbf{W}}_{2}(\mathbf{f}^{(t)}\odot\tilde{\hat{\boldsymbol{\gamma}}}^{(t-1)})+\tilde{\mathbf{W}}_{1}\tilde{\mathbf{g}}\), \(\tilde{\mathbf{c}}^{(t)}=(1-\mathbf{f}^{(t)})\odot\tilde{\hat{\boldsymbol{\gamma}}}^{(t-1)}+\mathbf{f}^{(t)}\odot\tilde{\bar{\mathbf{c}}}^{(t)}\) \\ \hline
Output & \(\tilde{\hat{\boldsymbol{\gamma}}}^{(t)}=\eta_{ss}^{\rho^{t}}\left\{\tilde{\hat{\boldsymbol{\gamma}}}^{(t-1)}+\tilde{\mathbf{W}}^{(t)}(\tilde{\mathbf{g}}-\tilde{\mathbf{R}}\tilde{\hat{\boldsymbol{\gamma}}}^{(t-1)}),\boldsymbol{\theta}_{t}\right\}\) & \(\hat{\boldsymbol{\gamma}}^{(t)}=\eta_{dt}\left(\mathbf{c}^{(t)}\right)\) & \(\hat{\boldsymbol{\gamma}}^{(t)}=\eta_{dt}\left(\mathbf{c}^{(t)}\right)\) & \(\tilde{\hat{\boldsymbol{\gamma}}}^{(t)}=\eta_{cv-dt}\left(\tilde{\mathbf{c}}^{(t)}\right)\) \\ \hline
\end{tabular}
\end{table} TABLE I: Formal definition of the \(t^{th}\) layer in the different models and comparison of their differences. \(\boldsymbol{\gamma}\)-Net has no gated expression. The SLSTM unit introduces forget and input gates to incorporate historical information. The SMGU has the minimal number of gates while maintaining performance comparable to the SLSTM unit. The CV-SMGU extends the SMGU to the complex-valued domain; its forget gate is activated on the magnitude using the \(\tanh\) function instead of the sigmoid function to guarantee activation values ranging from 0 to 1.
In this formulation, we can see that the SMGU is able to simultaneously execute a two-fold task with only one forget gate. On the one hand, the SMGU allows a compact representation by enabling the hidden state \(\mathbf{c}^{(t)}\) to discard irrelevant or redundant information. On the other hand, the SMGU is capable of controlling how much information from the previous layer to take over. Additionally, comparing the formulation of the SMGU to the SLSTM in Table I, we can see that the parameter size of the SMGU is only about half of that of the SLSTM, since the weight matrices \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) are shared across different layers of a network. The main advantage brought by this significant reduction of trainable parameters is that the requirements for training data, training time, and architecture tuning are reduced.
In addition to the improvements brought by the SMGU, an extension of the SMGU to the complex domain is required. The complex-valued SMGU (CV-SMGU) has essentially the same structure as the SMGU apart from two differences. First, each neuron in the CV-SMGU has two channels indicating the real and imaginary parts of a complex number, respectively. Second, the real and imaginary parts are usually not activated directly; instead, the activation is performed on the magnitude of the complex number. It is therefore no longer appropriate to use the sigmoid function to generate the forget gate, since the magnitude is always non-negative, which would lead to the undesired result of activation values always greater than 0.5. To tackle this problem, we employ the \(\tanh\) function instead of the sigmoid to guarantee that the value of the forget gate vector varies from 0 to 1 after activation, as originally designed. Applying the aforementioned adaptations yields the formulation of the CV-SMGU, also listed in Table I. The symbols \(\tilde{\mathbf{W}}_{*}\), \(\tilde{\mathbf{g}}\) and \(\tilde{\boldsymbol{\gamma}}^{*}\) represent
\[\tilde{\mathbf{W}}_{*}=\left[\begin{array}{cc}\mathrm{Re}(\mathbf{W}_{*})&-\mathrm{Im}(\mathbf{W}_{*})\\ \mathrm{Im}(\mathbf{W}_{*})&\mathrm{Re}(\mathbf{W}_{*})\end{array}\right],\quad\tilde{\mathbf{g}}=\left[\begin{array}{c}\mathrm{Re}(\mathbf{g})\\ \mathrm{Im}(\mathbf{g})\end{array}\right],\quad\tilde{\boldsymbol{\gamma}}^{*}=\left[\begin{array}{c}\mathrm{Re}(\boldsymbol{\gamma}^{*})\\ \mathrm{Im}(\boldsymbol{\gamma}^{*})\end{array}\right],\]
where \(\mathrm{Re}(\cdot)\) and \(\mathrm{Im}(\cdot)\) denote the real and imaginary operators, respectively. \(\eta_{cv-dt}(\cdot)\) is the complex-valued version of the double hyperbolic function applied component wise and expressed as follows:
\[\eta_{cv-dt}(\hat{\boldsymbol{\gamma}},s,\theta)=\left\{\begin{array}{ll}s\cdot e^{j\angle\hat{\boldsymbol{\gamma}}}\left[\tanh(|\hat{\boldsymbol{\gamma}}|+\theta)+\tanh(|\hat{\boldsymbol{\gamma}}|-\theta)\right],&|\hat{\boldsymbol{\gamma}}|\neq 0\\ 0&\mathrm{else}\end{array}\right.. \tag{9}\]
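The following PyTorch sketch shows one way to realize Eq. (9) and the magnitude-based gate activation using native complex tensors; splitting into stacked real/imaginary channels as in Table I is an equivalent alternative. The function names are ours.

```python
import torch

def eta_cv_dt(gamma, s, theta, eps=1e-12):
    # Complex-valued double hyperbolic tangent, Eq. (9): shrink the
    # magnitude with the two tanh terms while preserving the phase.
    mag = gamma.abs()
    phase = gamma / mag.clamp_min(eps)  # e^{j * angle(gamma)}
    out = s * phase * (torch.tanh(mag + theta) + torch.tanh(mag - theta))
    return torch.where(mag > 0, out, torch.zeros_like(out))

def cv_gate(x):
    # Forget-gate activation on the magnitude: tanh maps |x| >= 0 into
    # (0, 1), whereas sigmoid(|x|) would always exceed 0.5.
    return torch.tanh(x.abs())
```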
Table II summarizes and compares the features of the different unrolled RNNs. Through experiments, we found that gated unrolled RNNs require significantly fewer layers to achieve comparable or even better performance. Moreover, the SMGU simplifies the model structure by coupling the two gates, thus significantly reducing the number of free trainable parameters. Although the CV-SMGU doubles the number of parameters determining the gate, this induces no serious memory burden or computational expense.
## IV Performance evaluation
### _Simulation setup and model training_
In the simulation, we applied the same settings as in [23], i.e. 25 regularly distributed spatial baselines in the range of -135m to 135m were simulated. The corresponding inherent elevation resolution, i.e. the Rayleigh resolution, amounts to about 42m.
In the experiment, about 4 million training samples, half of which are single scatterers and the others two-scatterers mixtures, were simulated to generate the training dataset. To make the training dataset adequate and the simulation more realistic, we randomized several parameters, i.e. the SNR level as well as the amplitude, phase and elevation position of the scatterers, when simulating the training samples. The simulation details of single and double scatterers are listed below.
* **single scatterer**: For a single scatterer, the scattering phase \(\phi\) is set to follow a uniform distribution, i.e. \(\phi\sim U(-\pi,\pi)\). In addition, the amplitude \(A\) of the scatterer is simulated to be uniformly distributed in the range \((1,4)\). Hereafter, the complex-valued scattering coefficient \(\gamma\) is generated by \(\gamma=A\cdot\exp(j\phi)\). The elevations of the simulated scatterers are regularly distributed on a 1m grid between -20m and 300m. Once the elevation is determined, the echo signal \(\mathbf{g}\in\mathbb{C}^{25}\)
Fig. 6: Structure of the proposed SMGU. \(\mathbf{f}\) indicates the only gate in each SMGU.
is generated at different SNR levels, regularly distributed in [0dB, 10dB] with 11 samples.
* **double scatterers**: We simulated two single scatterers inside each resolution unit. The simulation of each of the two single scatterers is identical to the previous step. As a consequence, different amplitude ratios, different scattering phase offsets as well as different elevation distances between the two scatterers are considered. A sketch of this simulation is given below.
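The sketch below generates one training pair \((\mathbf{g},\boldsymbol{\gamma})\) following the distributions above. The steering matrix `R`, the grid size `L`, and the exact SNR convention (noise power relative to the mean signal power) are illustrative assumptions.

```python
import numpy as np

def simulate_sample(R, L, snr_db, n_scatterers, rng):
    # Draw one noisy TomoSAR measurement g = R @ gamma + noise, with
    # amplitude ~ U(1, 4) and phase ~ U(-pi, pi) as described above.
    # R: (N, L) steering matrix over L discrete elevation positions.
    gamma = np.zeros(L, dtype=complex)
    idx = rng.choice(L, size=n_scatterers, replace=False)
    amp = rng.uniform(1.0, 4.0, n_scatterers)
    phi = rng.uniform(-np.pi, np.pi, n_scatterers)
    gamma[idx] = amp * np.exp(1j * phi)
    g = R @ gamma
    # complex circular Gaussian noise scaled to the requested SNR
    sigma2 = np.mean(np.abs(g) ** 2) / 10 ** (snr_db / 10)
    noise = rng.normal(scale=np.sqrt(sigma2 / 2), size=(len(g), 2))
    return g + noise[:, 0] + 1j * noise[:, 1], gamma
```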
The model was implemented and trained under the PyTorch framework [35]. The employed optimizer was Adam [36]. The learning rate was set adaptively according to the number of training epochs, with an initial value of 0.0001. The loss function over the training data \(\{(\mathbf{g}_{i},\boldsymbol{\gamma}_{i})\}_{i=1}^{T}\) is the mean square error (MSE) loss, defined as follows:
\[\operatorname*{minimize}_{\boldsymbol{\Psi}}\ \mathcal{L}(\boldsymbol{\Psi})= \frac{1}{T}\sum_{i=1}^{T}||\boldsymbol{\hat{\gamma}}(\boldsymbol{\Psi}, \mathbf{g}_{i})-\boldsymbol{\gamma}_{i}||_{2}^{2}, \tag{10}\]
where \(\boldsymbol{\Psi}\) denotes the set of all parameters to be learned from data. To determine the optimal structure of the network, we validated the performance of networks with different numbers of CV-SMGUs in terms of the normalized mean square error (NMSE) on a validation dataset. The validation dataset was composed of 50000 noise-free samples simulated using the same settings introduced in the previous section, and the NMSE is defined as follows:
\[\mathrm{NMSE}=\frac{1}{T}\sum_{i=1}^{T}\frac{\|\boldsymbol{\hat{\gamma}}_{i}- \boldsymbol{\gamma}_{i}\|_{2}^{2}}{\|\boldsymbol{\gamma}_{i}\|_{2}^{2}}. \tag{11}\]
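A direct NumPy implementation of this metric is straightforward; the batch layout `(T, L)` is an assumption.

```python
import numpy as np

def nmse(gamma_hat, gamma):
    # Batch NMSE, Eq. (11); gamma_hat, gamma: complex arrays (T, L).
    num = np.sum(np.abs(gamma_hat - gamma) ** 2, axis=1)
    den = np.sum(np.abs(gamma) ** 2, axis=1)
    return float(np.mean(num / den))
```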
As can be seen from Table III, the NMSE gradually converges as the number of CV-SMGUs increases. Moreover, beyond 6 CV-SMGUs, further increasing their number leads to only marginal performance improvement while incurring a heavier computational burden. Therefore, the network we designed is composed of 6 CV-SMGUs.
### _Performance assessment and comparison to \(\gamma\)-Net_
In this section, we carry out experiments to systematically evaluate the performance of the proposed algorithm in terms of super-resolution power, estimation accuracy, and generalization ability against different amplitude ratios and phase differences of the scatterers.
#### Super-resolution power and estimation accuracy
The first experiment was set out to study the super-resolution power and estimation accuracy of the proposed algorithm via a TomoSAR benchmark test [5], [14]. In the experiment, we mimicked a facade-ground interaction by simulating two-scatterers mixtures with increasing elevation distance between them. The double scatterers were simulated to have identical phase and amplitude, i.e. the worst case for TomoSAR processing [13]. The proposed algorithm and \(\boldsymbol{\gamma}\)-Net were employed to resolve overlaid double scatterers at two SNR levels, i.e. SNR\(\in\{0,6\}\)dB, which represent typical SNR levels of a high-resolution spaceborne SAR image. We use the effective detection rate defined in [23] to fairly evaluate the super-resolution power. An effective detection should satisfy the following three criteria:
1. the hypothesis test correctly decides two scatterers for a double-scatterers signal;
2. the estimated elevations of _both_ detected double scatterers are within \(\pm 3\) times the CRLB w.r.t. their true elevations;
3. both elevation estimates are also within \(\pm 0.5\ d_{s}\) w.r.t their true elevation.
where \(d_{s}\) indicates the distance between the double scatterers. Fig. 7 compares the effective detection rate \(P_{d}\) of the proposed algorithm and \(\gamma\)-Net. It is presented as a function of the normalized distance \(\alpha=d_{s}/\rho_{s}\), i.e. the ratio of the scatterer distance to the Rayleigh resolution. For each combination of SNR and \(\alpha\), we simulated 0.2 million Monte Carlo trials. From Fig. 7, one can see that the proposed algorithm and sc2net with CV-SLSTMs (CV-sc2net) have quite similar performance in terms of effective detection rate. This is as expected, since the CV-SMGU is constructed by simplifying the CV-SLSTM: the purpose of the CV-SMGU is to reduce the network components while maintaining the performance. The advantages of the proposed algorithm over CV-sc2net are analyzed and discussed in the "DISCUSSION" section. Comparing the proposed algorithm and CV-sc2net to \(\gamma\)-Net, we see that both outperform \(\gamma\)-Net by a fair margin at both SNR levels. Specifically, they deliver a \(10\%\)-\(20\%\) higher effective detection rate in moderately super-resolving cases at 6 dB SNR. In the noisy case at 0 dB SNR, the proposed algorithm and CV-sc2net gradually approach about \(90\%\) effective detection rate with increasing normalized distance, whereas \(\gamma\)-Net reaches only about \(70\%\). The superior performance of the proposed algorithm and CV-sc2net is attributed to the fact that they overcome the information loss in the dynamics of the network by incorporating historical data and preserving full information. As mentioned in the previous section, the detection of double scatterers is affected by information loss: scatterers whose information is discarded cannot be detected.
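For reference, the three criteria above can be checked per sample as in the sketch below; matching estimates to ground truth by sorting along elevation is our simplifying assumption.

```python
import numpy as np

def is_effective_detection(est_elev, true_elev, crlb, d_s):
    # Check the three effective-detection criteria for one
    # double-scatterer sample; est_elev holds the detected elevations.
    if len(est_elev) != 2:                      # criterion 1
        return False
    err = np.abs(np.sort(est_elev) - np.sort(true_elev))
    return bool(np.all(err <= 3 * crlb)         # criterion 2
                and np.all(err <= 0.5 * d_s))   # criterion 3
```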
To better manifest how the incorporation of historical information improves the performance, we simulated 2000 samples containing double scatterers with increasing scatterer distance at 6 dB SNR. We made a scatter plot of their elevation
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Features** & \(\boldsymbol{\gamma}\)**-Net** & **sc2net** & **SMGU** & **CV-SMGU** \\ \hline complex-valued & Yes & No & No & Yes \\ gated expression & No & Yes & Yes & Yes \\ number of gates & 0 & 2 & 1 & 1 \\ number of parameters for gates & 0 & \(2\cdot(L^{2}+NL)\) & \(L^{2}+NL\) & \(2\cdot(L^{2}+NL)\) \\ required number of layers & \(\approx 15\) & \(\approx 5\) & \(\approx 5\) & \(\approx 5\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Comparison of different unrolled RNNs for sparse reconstruction.
estimates and color-coded the points by the detector decision in Fig. 8. The x-axis refers to the true normalized elevation distance of the scatterers. The y-axis shows their normalized elevation estimates. The ideal reconstruction would be a horizontal and a diagonal straight line, representing the ground truth of the simulated ground and facade. The green lines refer to the ground truth \(\pm 3\) times the CRLB of the single-scatterer elevation estimate. The blue dots indicate detected double scatterers, whereas the red dots represent samples detected as single scatterers, meaning the second scatterer was lost in the network output. Fig. 8 clearly shows that (1) \(\gamma\)-Net produces many more red dots located within \(\pm 3\) times the CRLB w.r.t. the ground truth, meaning it occasionally detects only one of the double scatterers yet estimates its elevation with high precision. We ascribe this problem to the information loss caused by the learning structure of \(\gamma\)-Net. On the contrary, the proposed algorithm utilizes CV-SMGUs to preserve full information, thus avoiding discarding any significant information; (2) the proposed algorithm is able to resolve double scatterers at a much smaller scatterer distance. Specifically, the proposed algorithm starts to separate double scatterers from about 0.15 Rayleigh resolution, whereas \(\gamma\)-Net can only detect double scatterers beyond about 0.3 Rayleigh resolution.
The elevation estimates of the simulated facade and ground are plotted in Fig. 9 w.r.t. the normalized true elevation distance. The red horizontal and slant lines indicate the ground truth of the ground and facade, respectively. The black dashed curves represent the ground truth \(\pm 1\times\) CRLB. The error bars indicate the standard deviation of the elevation estimates, with the mid-point depicting the mean value of the elevation estimates at a given normalized true elevation distance. We discarded points below an effective detection rate of \(5\%\) in the figures. Due to the strict criteria of effective detection, both the proposed algorithm and \(\gamma\)-Net provide high elevation estimation accuracy, especially at 6 dB SNR, where the bias of the elevation estimates derived by both methods approaches zero. However, in the extremely noisy case, the proposed algorithm is able to estimate the elevation with slightly lower bias than \(\gamma\)-Net.
Fig. 11 plots the effective detection rate as a function of the phase difference. When \(\triangle\phi=0\), the proposed algorithm delivers an about \(20\%\) higher effective detection rate than \(\mathbf{\gamma}\)-Net.
### _Practical demonstration_
For the real data experiment, we used the test data stack over the city of Las Vegas covering the Paris Hotel. Fig. 13 shows an optical image from Google Earth and the SAR mean intensity image of the test site. The stack is composed of 50 TerraSAR-X high-resolution spotlight images with a slant-range resolution of 0.6m and an azimuth resolution of 1.1m, whose spatial baseline distribution is shown in Fig. 12. The images were acquired between 2008 and 2010. More details of the data stack are listed in Table IV.
We employed the DLR's integrated wide area processor (IWAP) [37] to carry out preprocessing like multiple SAR
Fig. 8: Normalized estimated elevation of facade and ground of increasing elevation distance, with SNR=6dB and N=25. The double scatterers were simulated to have identical phase and amplitude. The true positions are a horizontal line referring to the ground and a diagonal line referring to the scatterers at variable elevation. The green lines depict true positions \(\pm\) 3 times CRLB of elevation estimates for single scatterers. Red dots represent samples detected as single scatterers. Blue dots indicate detected overlaid double scatterers.
Fig. 9: Estimated elevation of simulated facade and ground, (a) \(SNR=0\)dB with the proposed algorithm, (b) \(SNR=0\)dB with \(\mathbf{\gamma}\)-Net, (c) \(SNR=6\)dB with the proposed algorithm, (d) \(SNR=6\)dB with \(\mathbf{\gamma}\)-Net. Each dot has the sample mean of all estimates as its y value and the corresponding standard deviation as error bar. The red line segments represent the true elevation of the simulated facade and ground. The dashed curves denote the true elevation \(\pm 1\times\)CRLB normalized w.r.t. the Rayleigh resolution.
image co-registration and phase calibration. In addition, a coherent point on the ground was chosen as the reference.
We used the baselines of the test data stack to simulate training data. The simulation was conducted in the same way as introduced in the previous section, and 4 million training samples were generated. Once the network was well trained, the proposed algorithm was directly applied to reconstruct the elevation of the test site.
The reconstruction results of the test site are demonstrated in Fig. 14 and compared to the results derived by \(\gamma\)-Net. In Fig. 14, (a) and (b) illustrate the color-coded elevation of single scatterers detected by both algorithms. (c)-(f) depict the reconstruction of the double scatterers detected by both algorithms. The double scatterers are separated into top and bottom layers according to their elevation estimates, and the two layers are demonstrated separately. Comparing the reconstruction results of the two algorithms, we can see that the proposed algorithm detects double scatterers with a higher density, indicating stronger super-resolution power. Closer inspection of the reconstruction of the double scatterers shows that serious layover exists on top of the cross building. Moreover, the elevation estimates of the detected double scatterers indicate that the top layer is mainly caused by reflections from the building roof and facade, whereas the bottom layer is composed of scatterers on the ground or lower infrastructure.
To provide a more intuitive comparison of the super-resolution power of the two algorithms, we summarize their scatterer detections in Table V. As shown in Table V, most pixels are detected as 0 scatterers by both algorithms, because the fountain and many low infrastructures in the test site exhibit no strong scattering, as can be seen in Fig. 13(b). Compared to \(\gamma\)-Net, the proposed algorithm detected fewer single scatterers (\(33.30\%\)) but more double scatterers. A comparison of the double scatterers detected by the two algorithms shows that the proposed algorithm detects 95.2% of the double scatterers found by \(\gamma\)-Net and, moreover, detects 50% more double scatterers than \(\gamma\)-Net in total.
Further investigation was conducted to inspect the improvement in double scatterer detection. The histogram of the elevation difference of the double scatterers detected by the proposed algorithm and \(\mathbf{\gamma}\)-Net is shown in Fig. 15. In the non-super-resolution region, especially when the distance between double scatterers is larger than twice the Rayleigh resolution, the two
\begin{table}
\begin{tabular}{l c} \hline \hline parameter & value \\ \hline slant-range resolution & 0.6m \\ azimuth resolution & 1.1m \\ acquisition time & 2008-2010 \\ range distance & 704km \\ incidence angle & \(31.8^{\circ}\) \\ \hline \hline \end{tabular}
\end{table} TABLE IV: System parameters of the TerraSAR-X high-resolution spotlight image stack.
Fig. 11: Effective detection rate \(P_{d}\) of the two algorithms as a function of the phase difference \(\triangle\phi\) under the case: \(N=25\), \(SNR=6\)dB and \(\alpha=0.6\).
Fig. 12: Effective baselines of the 50 acquisitions.
Fig. 10: Effective detection rate of the two algorithms w.r.t. the normalized elevation distance at different amplitude ratios.
algorithms have comparable performance in double scatterer detection. However, in the super-resolution region, the proposed algorithm delivers an obviously stronger resolution ability.
## V Discussion
### _Generalization ability against baselines discrepancy_
The effective baseline in a SAR image varies with the range and azimuth location. A deep learning model trained with a fixed set of baselines may show undesired performance when applied to the whole image stack, as baseline discrepancies between training and testing data may cause a data domain shift. In this experiment, we verify the generalization ability against baseline discrepancies. The network with 6 CV-SMGUs is trained using the 25 regularly distributed baselines introduced in the simulation setup. We then add random perturbations uniformly distributed in the range [5m, 10m], i.e. about [\(7\%,14\%\)] of the standard deviation of the 25 regularly distributed baselines, to these baselines, generating 100 different baseline distributions. For each baseline distribution, we carry out a Monte Carlo simulation at 6 dB SNR with 0.2 million trials at each discrete normalized distance. Fig. 16 demonstrates the effective detection rate of the proposed algorithm when the pre-trained network is applied to data generated with baseline perturbations. The red line represents the reference, i.e. the pre-trained network applied to data simulated with the same baseline distribution. The green line indicates the average effective detection rate over the 100 Monte Carlo simulations, with the blue error bars depicting the standard deviation. As one can see, the proposed algorithm shows a good generalization ability against baseline discrepancy, with the effective detection rate decreasing by only \(5\%\) to \(8\%\) compared to the reference. Therefore, we see the proposed algorithm as a promising tool for large-scale TomoSAR processing, since the biggest baseline difference of a typical spaceborne SAR image will not exceed the perturbation we simulated.
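One Monte Carlo realization of a perturbed baseline distribution can be generated as sketched below; whether the perturbations are signed is not specified in the text, so the random-sign choice here is our assumption.

```python
import numpy as np

def perturbed_baselines(base, low=5.0, high=10.0, rng=None):
    # One Monte Carlo realization of a perturbed baseline distribution:
    # uniform offsets in [low, high] metres with random signs (assumed).
    rng = rng or np.random.default_rng()
    delta = rng.uniform(low, high, size=base.shape)
    signs = rng.choice([-1.0, 1.0], size=base.shape)
    return base + signs * delta
```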
However, for baselines with large perturbations or even a completely different distribution, the proposed algorithm is no longer an efficient estimator. We carried out an additional experiment to test the boundary of the generalization ability by further increasing the baseline perturbation. As we can see in Fig. 17, with increasing baseline discrepancy, the effective detection rate decreases slowly at first, while for perturbations larger than 15m, the performance of the proposed algorithm degrades dramatically. This result indicates that 15m might be the boundary for the proposed algorithm to retain reasonable performance for the baseline setting in this simulation.
When we set our sights on global urban mapping using TomoSAR, the huge discrepancy between the baselines of different data stacks will be a severe challenge. A more general and also computationally efficient algorithm still needs to be explored.
### _Convergence analysis_
In this section, we investigate the influence of CV-SMGUs on the convergence performance in comparison with CV-SLSTMs. We use an RNN with 6 CV-SLSTMs as a baseline. Fig. 18 compares the objective loss (Eq. (10)) over increasing training epochs. From Fig. 18, we can observe that CV-SMGUs contribute to faster convergence.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline \multirow{2}{*}{Algorithm} & \multicolumn{3}{c}{Percentage of detection as} \\ & 0 scatterer & 1 scatterer & 2 scatterers \\ \hline proposed & 62.01 \(\%\) & 33.30 \(\%\) & 4.69 \(\%\) \\ \(\gamma\)-Net & 61.06 \(\%\) & 35.83 \(\%\) & 3.11 \(\%\) \\ \hline \hline \end{tabular}
\end{table} TABLE V: Percentage of scatterer detections for the two algorithms.
Fig. 13: Test site. (a): optical image from Google Earth, (b): SAR mean intensity image
Fig. 14: Reconstructed and color-coded elevation of detected scatterers. From left to right: Elevation estimates derived by the proposed algorithm and \(\gamma\)-Net, respectively. From top to bottom: Color-coded elevation of detected single scatterers, top layer of detected double scatterers and bottom layer of detected double scatterers, respectively.
To be specific, the RNN with CV-SMGUs needs only about 500 epochs to achieve convergence, while the RNN with CV-SLSTMs requires more than 1000 epochs to converge. Furthermore, CV-SMGUs lead to slightly lower overall cost than CV-SLSTMs.
### _Requirement of training data_
As clarified in previous sections, the CV-SMGU has only one gate, i.e. the minimum number of gates, and thus fewer trainable parameters and a simpler structure. In this experiment, we study how this simpler model contributes to reducing the requirement for training data. We compare two RNNs with 6 CV-SMGUs and 6 CV-SLSTMs, respectively, in terms of the effective detection rate at 6dB SNR. The distance between the double scatterers was fixed at 0.6 Rayleigh resolution, and the double scatterers were set to have identical phase and amplitude. The result is shown in Fig. 19. As can be seen, the RNN with CV-SMGUs performs better when the two RNNs are trained with the same number of training samples. In addition, the RNN with CV-SMGUs requires noticeably fewer training samples to achieve optimal performance.
## VI Conclusion
In this paper, we proposed a novel gated-RNN-based BPDN solver for sparse reconstruction. The proposed gated RNN adopts a novel architecture, termed the sparse minimal gated unit (SMGU), to avoid the information loss caused by shrinkage by incorporating historical information into the optimization. With
Fig. 16: Effective detection rate as a function of \(\alpha\) at different baselines distribution. The proposed algorithm shows a good generalization ability against baselines discrepancy with the effective detection rate decreasing only \(5\%\) to \(8\%\).
Fig. 17: Effective detection rate as a function of \(\alpha\) for baselines with increasing perturbation. The effective detection rate first decreases slowly with increasing baseline perturbation; when the perturbation exceeds 15m, the performance of the proposed algorithm degrades dramatically.
Fig. 18: Training loss [dB] vs. epochs on simulated data. CV-SMGUs show faster convergence and lower overall loss.
Fig. 15: Histogram of the elevation distance between the detected double scatterers from the proposed algorithm and \(\gamma\)-Net. The proposed algorithm shows significantly more detection in the super-resolution region.
the assistance of SMGUs, we are able to capture and maintain long-term dependence on information from previous layers. To be specific, important information is automatically accumulated while useless or redundant information is forgotten in the dynamics of the network. Moreover, we extended the SMGU to the complex-valued domain as the CV-SMGU and applied it to solve TomoSAR inversion. Laboratory and real data experiments demonstrated that the proposed gated RNN built with CV-SMGUs outperforms the state-of-the-art deep-learning-based TomoSAR method \(\mathbf{\gamma}\)-Net. The encouraging results open up a new prospect for SAR tomography using deep learning and motivate us to further investigate the potential of RNNs with gated units in practical TomoSAR processing.
### \(\mathbf{\gamma}\)_-Net formulation_
Fig. 20 illustrates a K-layer \(\mathbf{\gamma}\)-Net. Each block in Fig. 20 indicates one layer of \(\mathbf{\gamma}\)-Net and is formally defined as:
\[\tilde{\hat{\boldsymbol{\gamma}}}_{i}=\eta_{ss}^{\rho^{i}}\left\{\tilde{\hat{\boldsymbol{\gamma}}}_{i-1}+\tilde{\mathbf{W}}^{i}(\tilde{\mathbf{g}}-\tilde{\mathbf{R}}\tilde{\hat{\boldsymbol{\gamma}}}_{i-1}),\boldsymbol{\theta}_{i}\right\} \tag{12}\]
where
\[\tilde{\mathbf{W}}^{i}=\left[\begin{array}{cc}\mathrm{Re}(\mathbf{W}^{i})&-\mathrm{Im}(\mathbf{W}^{i})\\ \mathrm{Im}(\mathbf{W}^{i})&\mathrm{Re}(\mathbf{W}^{i})\end{array}\right],\quad\tilde{\mathbf{R}}=\left[\begin{array}{cc}\mathrm{Re}(\mathbf{R})&-\mathrm{Im}(\mathbf{R})\\ \mathrm{Im}(\mathbf{R})&\mathrm{Re}(\mathbf{R})\end{array}\right],\quad\tilde{\mathbf{g}}=\left[\begin{array}{c}\mathrm{Re}(\mathbf{g})\\ \mathrm{Im}(\mathbf{g})\end{array}\right],\quad\tilde{\hat{\boldsymbol{\gamma}}}_{i}=\left[\begin{array}{c}\mathrm{Re}(\hat{\boldsymbol{\gamma}}_{i})\\ \mathrm{Im}(\hat{\boldsymbol{\gamma}}_{i})\end{array}\right].\]
\(\boldsymbol{\theta}_{i}=[\theta_{i}^{1},\theta_{i}^{2},\cdots,\theta_{i}^{5}]\) denotes the set of parameters to be learned for the piecewise linear function in the \(i^{th}\) layer. \(\mathbf{W}^{i}\) indicates the trainable weight matrix in the \(i^{th}\) layer; it is initialized with the system steering matrix \(\mathbf{R}\) as \(\mathbf{W}^{i}=\beta\mathbf{R}^{H}\), where \(\beta\) is the step size. Usually, a proper step size can be taken as \(\frac{1}{L_{s}}\), with \(L_{s}\) being the largest eigenvalue of \(\mathbf{R}^{H}\mathbf{R}\). \(\hat{\boldsymbol{\gamma}}_{i}\) is the output of the \(i^{th}\) layer. \(\mathrm{Re}(\cdot)\) and \(\mathrm{Im}(\cdot)\) denote the real and imaginary operators, respectively.
\(\mathbf{SS}\) in \(\mathbf{\gamma}\)-Net indicates a special thresholding scheme called support selection, which is formally defined as follows:
\[\eta_{ss}^{\rho^{i}}(\hat{\boldsymbol{\gamma}})_{k}=\left\{\begin{array}{ll}\hat{\gamma}_{k}&k\in\mathcal{S}^{\rho^{i}}(\hat{\boldsymbol{\gamma}})\\ \eta_{pwl}(\hat{\gamma}_{k},\boldsymbol{\theta}_{i})&k\notin\mathcal{S}^{\rho^{i}}(\hat{\boldsymbol{\gamma}})\end{array}\right., \tag{13}\]
In the \(i^{th}\) layer, the support selection selects the \(\rho^{i}\) percent of entries with the largest magnitudes and trusts them as "true support", which is directly fed to the next layer, bypassing the shrinkage step. The remaining entries go through the shrinkage step as usual. The shrinkage is executed using the piecewise linear function \(\eta_{pwl}\), a novel shrinkage thresholding function that promotes sparsity while improving the convergence rate and reducing the reconstruction error, expressed as:
\[\eta_{pwl}(\hat{\gamma},\boldsymbol{\theta}_{i})=\left\{\begin{array}{ll}\theta_{i}^{3}\hat{\gamma},&|\hat{\gamma}|\leq\theta_{i}^{1}\\ e^{j\angle\hat{\gamma}}\left[\theta_{i}^{4}(|\hat{\gamma}|-\theta_{i}^{1})+\theta_{i}^{3}\theta_{i}^{1}\right],&\theta_{i}^{1}<|\hat{\gamma}|\leq\theta_{i}^{2}\\ e^{j\angle\hat{\gamma}}\left[\theta_{i}^{5}(|\hat{\gamma}|-\theta_{i}^{2})+\theta_{i}^{4}(\theta_{i}^{2}-\theta_{i}^{1})+\theta_{i}^{3}\theta_{i}^{1}\right],&|\hat{\gamma}|>\theta_{i}^{2}\end{array}\right. \tag{14}\]
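A compact NumPy sketch of one such layer, combining the gradient step of Eq. (12) with the support selection of Eq. (13) and the shrinkage of Eq. (14), is given below; it operates directly on complex arrays rather than the stacked real/imaginary representation, and the function names are ours.

```python
import numpy as np

def eta_pwl(z, t):
    # Piecewise linear function, Eq. (14); t = (t1, t2, t3, t4, t5).
    t1, t2, t3, t4, t5 = t
    m, ph = np.abs(z), np.exp(1j * np.angle(z))
    return np.where(m <= t1, t3 * z,
           np.where(m <= t2, ph * (t4 * (m - t1) + t3 * t1),
                    ph * (t5 * (m - t2) + t4 * (t2 - t1) + t3 * t1)))

def gamma_net_layer(gamma_prev, g, R, W, theta, rho):
    # One gamma-Net layer, Eqs. (12)-(13): gradient step, support
    # selection of the rho-fraction largest entries, then shrinkage.
    z = gamma_prev + W @ (g - R @ gamma_prev)
    k = int(np.ceil(rho * len(z)))
    support = np.argsort(np.abs(z))[-k:]  # "true support" indices
    out = eta_pwl(z, theta)               # shrink everything ...
    out[support] = z[support]             # ... except the true support
    return out
```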
# Electric Polarization from Many-Body Neural Network Ansatz

Xiang Li, Yubing Qian, Ji Chen

arXiv:2307.02212v2 (2023-07-05), [http://arxiv.org/abs/2307.02212v2](http://arxiv.org/abs/2307.02212v2)
###### Abstract
_Ab initio_ calculation of dielectric response with high-accuracy electronic structure methods is a long-standing problem, for which mean-field approaches are widely used and electron correlations are mostly treated via approximated functionals. Here we employ a neural network wavefunction ansatz combined with quantum Monte Carlo to incorporate correlations into polarization calculations. On a variety of systems, including isolated atoms, one-dimensional chains, two-dimensional slabs, and three-dimensional cubes, the calculated results outperform conventional density functional theory and are consistent with the most accurate calculations and experimental data. Furthermore, we have studied the out-of-plane dielectric constant of bilayer graphene using our method and re-established its thickness dependence. Overall, this approach provides a powerful tool to consider electron correlation in the modern theory of polarization.
Electric polarization plays a crucial role in electromagnetic phenomena such as ferroelectricity and piezoelectricity. Despite its significance, a proper microscopic definition of polarization was only formulated in the 1990s [1; 2], which revealed the hidden relation between physical polarization and the Berry phase of solid systems. This theoretical advance led to successful calculations of the dielectric response of solid materials from first principles [3; 4; 5], which is critical in several fields of condensed matter physics, such as ferroelectric and topological materials [6]. However, the underlying electronic structure methods are mostly mean-field approaches such as density functional theory (DFT) [7], which has its limitations because the results depend heavily on the so-called exchange-correlation functional. Exchange-correlation functionals cannot fully account for the exact correlation effects of electrons. In particular, widely used semi-local functionals often produce an excessive overestimate of the electric susceptibility [5; 8]. Although correlated wavefunction methods, such as coupled-cluster theory, can also be employed to calculate polarization [9], their high computational complexity hinders their application to solid systems. Furthermore, most of these correlated electronic structure methods are limited to open boundary conditions (OBC) for polarization calculations, which leads to slow convergence and heavy computational costs towards the thermodynamic limit (TDL); see Fig. 1 for a summary of the state-of-the-art methods in polarization calculations.
In addition to the conventional deterministic electronic structure methods mentioned above, quantum Monte Carlo (QMC) methods are also widely adopted for electronic structure calculations, showing favorable computational scaling and high accuracy [10; 11; 12]. Pioneering works studying electric susceptibility using QMC have been reported [13; 14], in which a traditional Slater–Jastrow-type wavefunction is combined with diffusion Monte Carlo (DMC) to study the polarization of hydrogen chains under periodic boundary conditions (PBC). The main difficulty for DMC is to write down the local self-consistent Hamiltonian under a finite electric field and run calculations iteratively. Despite the promising results on the hydrogen chain [13], grand challenges remain: multiple loops of DMC are needed for the self-consistent procedure, a complex forward-walking strategy is required for evaluating
Figure 1: A brief illustration of the computational cost and accuracy of different electronic structure methods in polarization calculations. \(N\) denotes the number of electrons in the system. High-level correlated wavefunction methods are not shown in the PBC panel because they have not been applied to PBC polarization calculations so far. QMC: quantum Monte Carlo; DFT: density functional theory; HF: Hartree-Fock; MP2: second-order Moller-Plesset perturbation theory; CCSD: coupled cluster with single and double excitations; FCI: full configuration interaction.
the polarization, and the quality of the trial wavefunction affects the accuracy of DMC. Therefore, it is desirable to develop more accurate and efficient approaches to calculate the electric polarization of solid systems.
In recent years, there has been significant progress in the application of neural networks in the electronic structure community. Neural network wavefunction ansatzes combined with QMC simulations have demonstrated higher accuracy with lower computational complexity than conventional high-order wavefunction methods [15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33]. The expressiveness of neural networks overcomes the main bottleneck of traditional wavefunction ansatzes in QMC, making the approach a competitive option for state-of-the-art electronic structure calculations. So far, neural network QMC calculations have shown great power in treating spin systems [15; 16; 17], molecules [18; 19; 20; 21], periodic models [25; 26; 27; 28], and real solids [29; 28].
In this work, we extend neural network QMC calculations to the electric polarization of solid systems. Specifically, we employ a recently developed solid neural network, dubbed DeepSolid [28], in conjunction with variational Monte Carlo (VMC). Antithetic sampling [34] is developed for efficient computation of the Berry phase and thus the electric polarization. Our approach has been tested on a diverse range of systems, including isolated atoms, one-dimensional chains, two-dimensional slabs, three-dimensional cubes, and bilayer graphene. The results demonstrate clear advantages of our approach over traditional methods.
To introduce our methodology, let us consider a crystal system under a finite electric field \(\mathbf{E}\); the enthalpy of this system is formulated below [3; 4; 13; 37]
\[F[\psi]=\frac{\langle\psi|\hat{H}_{S}|\psi\rangle}{\langle\psi|\psi\rangle}- \Omega_{S}\mathbf{E}\cdot\mathbf{P}[\psi]\, \tag{1}\]
where \(\hat{H}_{S}\) denotes the supercell Hamiltonian in the absence of the electric field \(\mathbf{E}\), and \(\Omega_{S}\) is the supercell volume. The term \(-\Omega_{S}\mathbf{E}\cdot\mathbf{P}\) represents the interaction between the electric polarization density \(\mathbf{P}\) and the electric field. However, a proper microscopic definition of \(\mathbf{P}[\psi]\) remained absent for decades, since the ordinary position operator \(\hat{\mathbf{r}}\) violates the periodic boundary condition. This problem was finally solved after recognizing the polarization as a Berry phase in the Brillouin zone, according to which the polarization can be extracted from a general wavefunction \(\psi\) as follows [38]
\[\begin{split}\mathbf{P}[\psi]&=-\frac{1}{\Omega_{S}} \sum_{i}\frac{\mathbf{a}_{i}}{2\pi}\text{Im}\ln\frac{\langle\psi|\hat{U}_{i}| \psi\rangle}{\langle\psi|\psi\rangle}\,\\ \hat{U}_{i}&=\exp\left[\text{i}\mathbf{b}_{i}\cdot \left(\sum_{e}\hat{\mathbf{r}}_{e}-\sum_{I}Z_{I}\mathbf{R}_{I}\right)\right]\,\end{split} \tag{2}\]
where \(\mathbf{a}_{i},\mathbf{b}_{i}\) denote lattice and reciprocal lattice vectors of the supercell. \(\hat{U}_{i}\) serves as a periodic generalization of the position operator \(\hat{\mathbf{r}}\) in solid systems and \(\text{Im}\ln(x)\) is used to extract the Berry phase within \(x\). Note that \(\hat{U}_{i}\) is an intrinsic many-body operator which includes all the electron coordinates in the exponent. A charge-weighted sum of ion coordinates \(Z_{I}\mathbf{R}_{I}\) is also included to achieve translation invariance of polarization.
With the enthalpy functional formulated above, traditional methods usually start with a Hartree-Fock (HF) ansatz, which is typically expressed as follows
\[\psi_{\text{HF}}(\mathbf{r})=\text{Det}\left[e^{\text{i}\mathbf{k}_{i}\cdot \mathbf{r}_{j}}u_{\mathbf{k}_{i}}(\mathbf{r}_{j})\right]. \tag{3}\]
Electrons are treated independently of each other with a mean-field interaction in Eq. (3), simplifying quantum many-body problems but also deviating from the ground truth. To fully treat the electron correlation effects, we employ a correlated neural network wavefunction \(\psi_{\text{net}}\) from DeepSolid [28], whose general form reads
\[\psi_{\text{net}}(\mathbf{r})=\text{Det}\left[e^{\text{i}\mathbf{k}_{i}\cdot \mathbf{r}_{j}}u_{\mathbf{k}_{i}}(\mathbf{r}_{j};\mathbf{r}_{\neq j})\right], \tag{4}\]
where \(\mathbf{r}_{\neq j}\) denotes all the electron coordinates except \(\mathbf{r}_{j}\). Eq. (4) resembles the form of the traditional Bloch function, while cell-periodic functions \(u_{\mathbf{k}}\) are now represented using deep neural networks that rely on all electrons to accommodate electron correlations [19]. Electron features \(\mathbf{r}_{i}\) are converted to be periodic and permutation equivariant before being fed into neural networks, and complex-valued orbitals \(u_{\mathbf{k}}\) are constructed with a pair of neural networks outputting the real and imaginary part respectively. As a result, Fermionic anti-symmetry, periodicity, and complex-valued nature are all encoded in our network, promoting it to be a legitimate and expressive ansatz for solid. See Ref. [28] for more details of the architecture.
Using the neural network we have constructed, the enthalpy functional outlined in Eq. (1) can be efficiently minimized through variational Monte Carlo, allowing for gradual convergence to the ground truth. However, significant fluctuations are observed in the evaluation of \(\hat{U}_{i}\), which seriously impede optimization. As a solution, antithetic sampling is employed in the Monte Carlo evaluation, which reads
\[\begin{split}\langle\hat{U}_{i}\rangle&=\frac{ \int d\mathbf{r}\ |\psi(\mathbf{r})|^{2}\ U_{i}(\mathbf{r})}{\int d\mathbf{r}\ |\psi(\mathbf{r})|^{2}}=\frac{\int d\mathbf{r}\ |\psi(\mathbf{r})|^{2}\ \widetilde{U}_{i}(\mathbf{r})}{ \int d\mathbf{r}\ |\psi(\mathbf{r})|^{2}},\\ &\widetilde{U}_{i}(\mathbf{r})=\frac{1}{2}\left[U_{i}(\mathbf{r} )+\frac{|\psi(-\mathbf{r})|^{2}}{|\psi(\mathbf{r})|^{2}}U_{i}(-\mathbf{r}) \right].\end{split} \tag{5}\]
Thus, the fluctuations are significantly reduced through the cancellation between \(U_{i}(\mathbf{r})\) and its inverted image \(U_{i}(-\mathbf{r})\). It is worth noting that centrosymmetric cells are assumed in Eq. (5); one can choose other images for cancellation if central symmetry is not satisfied. To further improve efficiency, we have employed a Kronecker-factored approximate curvature (KFAC) optimizer [39], which effectively integrates second-order information into the optimization process, surpassing traditional optimizers. See the Supplementary Material for more computational details, and the code of this work is
developed at the open-source repository of DeepSolid 1.
Footnote 1: [https://github.com/bytedance/DeepSolid](https://github.com/bytedance/DeepSolid)
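The antithetic estimator of Eq. (5) pairs each sampled configuration with its inversion; the sketch below illustrates this for one reciprocal vector. The helpers `log_psi2` (batched \(\log|\psi|^{2}\)) and `ion_phase` (the scalar \(\mathbf{b}_{i}\cdot\sum_{I}Z_{I}\mathbf{R}_{I}\)) are hypothetical names introduced here.

```python
import numpy as np

def antithetic_U(samples, log_psi2, b_i, ion_phase):
    # Variance-reduced estimate of <U_i>, Eq. (5): pair each sampled
    # configuration r with its inverted image -r, reweighted by the
    # ratio |psi(-r)|^2 / |psi(r)|^2.
    phase = lambda r: np.exp(1j * (r.sum(axis=1) @ b_i - ion_phase))
    w = np.exp(log_psi2(-samples) - log_psi2(samples))
    return 0.5 * (phase(samples) + w * phase(-samples)).mean()
```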
Isolated atoms are the first systems selected for direct comparison with the most accurate methods and experimental data. In our calculations, we place a single atom in a box large enough to eliminate periodic image interactions. The calculated polarizability \(\alpha\) is shown in Tab. 1; it measures the linear response of the dipole moment to the applied field and has a subtle relation with the bulk susceptibility \(\chi\) (see Supplementary Material). Results from DFT with the B3LYP functional, HF, and CCSD(T) under OBC are also listed for comparison. P-state atoms (B, C, O, F) are skipped because their anisotropy requires special treatment [40]. As can be seen from Tab. 1, although B3LYP is a widely trusted functional belonging to the fourth rung of the so-called Jacob's ladder of DFT, it consistently deviates from the ground truth and has a relatively large mean absolute error (MAE). This behavior of DFT is due to the inaccuracy in treating the exchange-correlation effects, which can be very different for energy and polarization calculations. In HF calculations, because of the explicit treatment of non-local exchange, deviations in polarization are significantly reduced. CCSD(T) is the coupled cluster theory with single, double, and perturbative triple excitations, and is considered a very accurate method in the literature. It further incorporates correlation effects on top of the HF wavefunction and achieves a smaller MAE than HF. Overall, DeepSolid results are comparable with CCSD(T), showing that the exchange-correlation treatment in our neural network is accurate and reliable for polarization calculations.
Having demonstrated our technique on single atoms, we proceed to simulate periodic systems by arranging bonded molecules into a one-dimensional chain and a two-dimensional slab. These systems are widely known as challenging cases for conventional DFT methods, which seriously overestimate their longitudinal susceptibility. This problem stems from the fact that surface charges are insensitive to the bulk charge within the system when non-local interactions are absent, and it can be solved using more accurate _ab initio_ methods [5]. For the one-dimensional case, the hydrogen chain (n H\({}_{2}\)) and polyyne (n C\({}_{2}\)) are studied, and the simulation size is pushed to 22 H\({}_{2}\) and 7 C\({}_{2}\), respectively, for TDL convergence. The correlation-consistent effective core potential (ccECP) is employed for polyyne to accelerate neural network optimization and reduce fluctuations [31; 44]. The final results are plotted in Fig. 2, showing that the susceptibility calculated by DeepSolid agrees well with the correlated wavefunction methods CCSD(T) and the random phase approximation (RPA). The local-density approximation (LDA) functional deviates severely from the ground truth for one-dimensional chains [9], but the use of hybrid functionals such as B3LYP leads to a partial recovery of non-local exchange effects and, consequently, a reduction of the overshoot. HF is much better than the DFT calculations, which further proves the importance of the
| | H | He | Li | Be | N | Ne | MAE |
| --- | --- | --- | --- | --- | --- | --- | --- |
| B3LYP | 5.187 | 1.485 | 142.727 | 43.090 | 7.711 | 2.838 | 4.669 |
| HF | 4.484 | 1.318 | 169.231 | 45.441 | 7.138 | 2.365 | 2.243 |
| CCSD(T) | 4.484 | 1.372 | 165.803 | **37.707** | **7.212** | 2.642 | 0.326 |
| DS | **4.51** | **1.39** | **165.0** | 36.95 | 7.16 | **2.67** | 0.32 |
| Recommended | 4.5 (exact) | 1.38375(2) | 164.1125(5) | 37.74(3) | 7.4(2) | 2.66110(3) | 0 |

Table 1: Calculated atom polarizability in atomic units (Bohr\({}^{3}\)). DeepSolid results are labeled as DS. B3LYP, HF, and CCSD(T) results are calculated with PySCF [35] in the def2-QZVPPD basis set and the non-relativistic limit. Recommended data is taken from Ref. [36], which is deduced from experimental data and the most accurate calculations.
Figure 2: Calculations of chains and slabs. DeepSolid results are labeled as DS. (**a**) illustrations of hydrogen chain, polyyne, hydrogen slab, and the applied electric field. (**b**) hydrogen chain, (**c**) polyyne, (**d**) hydrogen slab susceptibilities \(\chi\). \(\Omega_{p}\) denotes the volume of the primitive cell. For the hydrogen system, LDA and HF results were calculated with the 3-21G basis set under PBC [5]. CCSD(T) calculations were performed with the 6-311G** basis set under OBC [9]. Intrapair and interpair distances are set to 2 and 3 Bohr respectively for the hydrogen chain. The interchain distance is set to 4.724 Bohr for the slab. For polyyne, the alternating distance between carbon atoms is set to 1.18 and 1.4 Å. LDA, HF, and RPA results were calculated with 6-31G, 3-21G, and 4-31G basis set respectively under OBC [41; 42; 43].
non-local exchange effect for electric polarization calculations in this system. As we arrange hydrogen chains periodically to form hydrogen slabs, the computational cost of high-level deterministic wavefunction methods, such as CCSD(T), grows rapidly and is soon beyond reach. Our approach, however, has lower scaling, and we obtain the first accurate polarization calculation for such a hydrogen slab (Fig. 2d). For the slab, the performance of DFT and HF relative to our accurate neural network results is similar to that observed for the chains.
To further test our method, we applied it to alkali metal hydrides and calculated their dielectric constants, allowing direct comparison with experimental results. These systems have a simple structure, consisting of alternating cations and anions, but they are of considerable research significance due to their relevance in hydrogen storage applications [49]. The high-frequency dielectric constant \(\epsilon_{\infty}\) can be extracted through optical experiments from the following relations,
\[\mathbf{D}=\epsilon_{\infty}\mathbf{E}=\mathbf{E}+4\pi\mathbf{P},\qquad\epsilon_{\infty}=1+4\pi\chi=n_{\mathrm{D}}^{2}, \tag{6}\]
where \(n_{\mathrm{D}}\) denotes the corresponding refractive index. In the visible light regime, ions are almost frozen relative to the incident light frequency, and this leads to the dominance of electronic polarization in \(\epsilon_{\infty}\). These three-dimensional systems are also qualitatively different from the chains and slabs, because the fluctuation of \(\hat{U}_{i}\) increases as we tile the cells in all three directions to form the simulation cell. To balance the influence of finite-size error and \(\hat{U}_{i}\) fluctuations, we tile the conventional cell in the direction of the applied field \(\mathbf{E}\), while the transverse directions remain unchanged. Moreover, the Burkatzki-Filippi-Dolg (BFD) pseudopotential [50] is used to remove inert core electrons [31]. Our calculations have been pushed to the \(4\times 1\times 1\) supercell and the results are plotted in Fig. 3. LDA and Perdew-Burke-Ernzerhof (PBE) results are also plotted for comparison, while more accurate conventional wavefunction methods are not applicable due to computational costs. As we can see, numerical simulations and experiments agree that \(\epsilon_{\infty}\) decreases as the alkali metal atom becomes heavier, since \(\epsilon_{\infty}\) is inversely proportional to the cell volume in Eq. (6). However, the LDA and PBE functionals [51] tend to overestimate \(\epsilon_{\infty}\), and the error is largest in CsH. In contrast, our DeepSolid results agree well with the experiment for all systems, which manifests the capability of the neural network wavefunction to capture non-local exchange and correlation effects.
After demonstrating the accuracy of our method in the previous sections, we now apply it to bilayer graphene (BLG), an extensively studied 2D material known for its rich electronic properties. Despite its fundamental importance, the precise value of the dielectric constant of BLG remains elusive and has been an important subject of both experimental and theoretical works [52; 53]. Specifically, reported theoretical calculations were either restricted to the DFT level [52] or based on values calculated for monolayer graphene [53]. Here we use DeepSolid to directly calculate the out-of-plane dielectric constant \(\epsilon_{\infty}^{\perp}\) of bilayer graphene. \(2\times 2\) supercells containing monolayer and equilibrium AA-stacked bilayer graphene were used. The calculated monolayer polarizability equals 5.7(1) Bohr\({}^{3}\) and the bilayer polarizability equals 11.6(1) Bohr\({}^{3}\), which agrees with the linear dependence of polarizability on the number of layers shown in Ref. [52]. Based on this linear dependence and following Ref. [54], one can derive the expression of the out-of-plane dielectric constant as a function of the layer separation \(d\) (see Supplementary Material):
\[\epsilon_{\infty}^{\perp}(d)=\left(1-\frac{2\pi\alpha_{\mathrm{equil}}^{ \mathrm{BLG}}}{S\cdot d}\right)^{-1}\, \tag{7}\]
where \(S=5.25\) Å\({}^{2}\) denotes the area of the primitive cell. Using the computed polarizability \(\alpha_{\mathrm{equil}}^{\mathrm{BLG}}\), we can re-establish the relation for \(\epsilon_{\infty}^{\perp}\), which is plotted in Fig. 4. To further check Eq. (7), we also calculate the polarizability of bilayer graphene at slightly larger (4 Å) and smaller
Figure 4: Calculated effective 2D dielectric constant \(\epsilon_{\infty}^{\perp}\) of bilayer graphene. (**a**) plot of bilayer graphene under electric field. (**b**) calculated \(\epsilon_{\infty}^{\perp}\) as a function of graphene layer distance \(d\). The equilibrium separation of BLG is set to 3.347 Å.
Figure 3: Calculated high-frequency dielectric constant \(\epsilon_{\infty}\) of alkali metal hydrides XH. DeepSolid results are labeled as DS. LDA and PBE results in PBC are taken from Ref. [45]. Experimental data are taken from Refs. [46; 47; 48], which are derived from refractive indexes \(n_{\mathrm{D}}\) for sodium doublet (589.29 nm) via Eq. (6). RbH experiments are absent.
(3 Å) layer distances and plot the corresponding dielectric constants in Fig. 4; the results agree well with each other. Moreover, there are two notable limits when varying the layer separation \(d\): as \(d\) decreases, the two graphene layers coincide with each other and the system becomes metallic, which explains the divergence of \(\epsilon_{\infty}^{\perp}\); as \(d\) becomes large, the BLG polarization becomes negligible and the vacuum contribution dominates, so \(\epsilon_{\infty}^{\perp}\) approaches unity. The thickness-dependent dielectric constant will be valuable for further understanding and tuning stacked multilayer graphene systems.
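As a quick numerical check of Eq. (7), the sketch below evaluates \(\epsilon_{\infty}^{\perp}\) from the computed bilayer polarizability at a few separations. The unit handling (polarizability quoted per primitive cell, converted from Bohr\({}^{3}\) to Å\({}^{3}\)) and the resulting values are our own arithmetic, not numbers quoted in the text.

```python
import numpy as np

BOHR_TO_ANGSTROM = 0.529177
alpha_blg = 11.6 * BOHR_TO_ANGSTROM ** 3  # computed BLG polarizability: 11.6 Bohr^3 -> A^3
S = 5.25                                  # primitive-cell area in A^2

def eps_perp(d):
    """Out-of-plane dielectric constant of BLG at layer separation d (Angstrom), Eq. (7)."""
    return 1.0 / (1.0 - 2.0 * np.pi * alpha_blg / (S * d))

for d in (3.0, 3.347, 4.0):
    print(f"d = {d:.3f} A  ->  eps_perp = {eps_perp(d):.2f}")  # ~2.6 at equilibrium
```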
In conclusion, this work proposes an efficient and accurate method for investigating solid polarization based on the recently developed solid neural network wavefunction combined with quantum Monte Carlo. Our approach demonstrates superiority over conventional state-of-the-art electronic structure methods. With the proposed framework, it is promising to investigate a wide range of phenomena, including ferroelectricity, topological electronic transport, the quantum Hall effect, and orbital magnetization, among others, at a higher level of accuracy and with electron correlations properly accounted for. Furthermore, this work opens more possibilities for neural network applications in condensed matter physics.
## Acknowledgements
We want to thank ByteDance Research Group for inspiration and encouragement. This work is directed and supported by Hang Li and ByteDance Research. J.C. is supported by the National Natural Science Foundation of China under Grant No. 92165101.
|
2304.04455 | Bayesian optimization for sparse neural networks with trainable
activation functions | In the literature on deep neural networks, there is considerable interest in
developing activation functions that can enhance neural network performance. In
recent years, there has been renewed scientific interest in proposing
activation functions that can be trained throughout the learning process, as
they appear to improve network performance, especially by reducing overfitting.
In this paper, we propose a trainable activation function whose parameters need
to be estimated. A fully Bayesian model is developed to automatically estimate
from the learning data both the model weights and activation function
parameters. An MCMC-based optimization scheme is developed to build the
inference. The proposed method aims to solve the aforementioned problems and
improve convergence time by using an efficient sampling scheme that guarantees
convergence to the global maximum. The proposed scheme is tested on three
datasets with three different CNNs. Promising results demonstrate the
usefulness of our proposed approach in improving model accuracy due to the
proposed activation function and Bayesian estimation of the parameters. | Mohamed Fakhfakh, Lotfi Chaari | 2023-04-10T08:44:44Z | http://arxiv.org/abs/2304.04455v2 | # Bayesian optimization for sparse neural networks
###### Abstract
In the literature on deep neural networks, there is considerable interest in developing activation functions that can enhance neural network performance. In recent years, there has been renewed scientific interest in proposing activation functions that can be trained throughout the learning process, as they appear to improve network performance, especially by reducing overfitting. In this paper, we propose a trainable activation function whose parameters need to be estimated. A fully Bayesian model is developed to automatically estimate from the learning data both the model weights and activation function parameters. An MCMC-based optimization scheme is developed to build the inference. The proposed method aims to solve the aforementioned problems and improve convergence time by using an efficient sampling scheme that guarantees convergence to the global maximum. The proposed scheme is tested on three datasets with three different CNNs. Promising results demonstrate the usefulness of our proposed approach in improving model accuracy due to the proposed activation function and Bayesian estimation of the parameters.
Activation function, Deep neural networks, Optimization, MCMC, Hamiltonian dynamics
## I Introduction
Classification is a machine-learning task that identifies which objects are present in an image or video. It is critical for various applications such as computer vision [1, 2, 3], medical diagnostics [4], signal processing [5], and others. The process involves learning significant or nontrivial relationships from a set of training data and extending these relationships to interpret new test data [6]. The task consists of categorizing elements into one of a finite set of classes by comparing the measured attributes of a given object with the known properties of objects to determine whether the object belongs to a specific category.
Convolutional Neural Networks (CNNs) [7, 8, 9, 10, 11] have become the industry standard in numerous applications over the past two decades, as they can process complex high-dimensional input data into simple low-dimensional concepts through a series of nonlinear transformations. Each feature layer in CNNs comprises features from the layer below, creating a hierarchical organization of ever-more-abstract concepts. They are particularly effective at capturing high-level abstractions in real-world observations, making them a popular choice for image classification tasks. CNNs can learn features from raw image pixels, reducing the need for manual feature engineering. This makes them well-suited for tasks such as object recognition, where the goal is to identify the presence and location of specific objects in an image. In this context, the activation function plays a critical role in learning representative features. The Rectified Linear Unit (ReLU) [12] is currently the most widely used activation function for neural networks: it acts as the identity on positive arguments and returns zero otherwise. ReLUs alleviate the vanishing gradient problem and additionally provide sparse codes [13]. According to the state of the art, activation functions can be fixed or trainable during the learning phase [14]. In most cases, gradient descent is the most widely used method in the literature for parameter estimation.
From another side, Bayesian methods have advanced significantly in many domains over time and have numerous useful applications. The primary idea is to represent all uncertainties in the model using probabilities. Bayesian techniques are distinctive in that they treat the problem as an inference problem [15]. One of the most significant advantages is the ability to incorporate prior information about the model parameters and hyperparameters. Recent advancements in Markov Chain Monte Carlo (MCMC) methods [16, 17, 18, 19] make it easier to use Bayesian analyses in complex datasets with missing observations and to handle multidimensional outcomes. Recent studies have demonstrated that using a Bayesian framework in CNNs for the optimization process leads to more promising performances than standard gradient descent [20, 21].
In this study, we introduce a new trainable activation function whose parameters are automatically estimated from the data. To do so, a Bayesian framework is used in which these parameters, as well as the network weights, are assumed to be realizations of random variables. With an adequate likelihood, a hierarchical Bayesian model is built with priors and hyperpriors. An MCMC-based inference, specifically a Gibbs sampler, is then used to derive estimators from the target distributions. Our method is an extension of our previous work described in [22], where we employed non-smooth Hamiltonian methods to fit sparse artificial neural networks. Our main objective with the Bayesian scheme is to minimize the target cost function of the learning model. The use of non-smooth Hamiltonian techniques enables us to perform efficient and fast sampling, even when dealing with non-differentiable energy functions that arise from the use of sparse regularization functions.
The contribution of this paper is therefore twofold: _i)_ proposing a new trainable activation function, and _ii)_ a more general Bayesian formulation than in [22] to integrate the estimation of all parameters from the data.
The rest of this paper is organized as follows. Section II reviews the state of the art, and Section III states the problem. In Section IV we detail the adopted hierarchical Bayesian model. The proposed Bayesian inference scheme is developed in Section V and validated in Section VI. Finally, conclusions and future work are drawn in Section VII.
## II Related work
Finding the best activation function to integrate into an architecture is a challenging task. This section proposes a taxonomy of different activation functions described in the literature. A primary established classification is based on the ability to modify the shape of the activation function during the training phase, resulting in two major categories that can be distinguished [14].
### _Fixed-shape activation functions_
This category covers activation functions with a fixed shape, such as the sigmoid [23], the hyperbolic tangent (tanh) [24], and ReLU. The introduction of rectified functions, particularly ReLU, has led to a marked enhancement in neural network performance and heightened scientific interest. Consequently, this category can be subdivided into subcategories based on their distinctive characteristics.
**Classic activation functions:**
The findings presented in [25] demonstrated that a shallow feed-forward network can approximate any continuous function defined on a compact subset. For many years, bounded activation functions like the sigmoid and tanh were the preferred choices for neural networks, with researchers demonstrating their efficacy, particularly in shallow network architectures [26]. Although the sigmoid, bipolar sigmoid [27], hyperbolic tangent, absolute value [28], and other bounded activation functions are commonly used, their efficacy is limited when training multi-layer neural networks due to the vanishing gradient problem [29].
**Rectifier-based activation functions:**
The primary advantage of using rectified activation functions is to alleviate the problem of vanishing gradient. The success of ReLU [30] has inspired the development of many new activation functions during the last years [31, 32]. While ReLU has numerous benefits such as solving the vanishing gradient issue and making sparse coding easier [33], it is not without flaws. The "dying" ReLU problem [34] and non-differentiability at zero are the main concerns.
To address these issues, several variations of ReLU have been developed, such as Leaky ReLU (LReLU) [34], Truncated rectified [35], softplus [36], Exponential linear unit (ELU) [37], E-swish [38], and Flatten-T Swish [39].
### _Trainable activation functions_
The concept of using trainable activation functions is not new in the field of neural network research, with many studies published on this topic as early as the 1990s [40, 41, 42]. However, the growing interest in neural networks in recent years has led researchers to reconsider the potential benefits of trainable activation functions in improving network performance.
In this section, we discuss the main strategies proposed in the literature for learning activation functions from data. Based on their primary characteristics, these strategies can be classified into three families.
**Parameterized standard activation functions:**
In [40], a generalized hyperbolic tangent function was proposed by introducing two trainable parameters \(\alpha\) and \(\beta\), which adjust the saturation level and slope, respectively. Similarly, a sigmoid function with two trainable parameters was used in [41] to modify the activation function's shape. These parameters are learned along with the network weights using the backpropagation algorithm. More recently, the work in [43] aimed to avoid manually setting the parameter of the ELU unit by proposing an alternative based on two trainable parameters. The proposed activation function, called PELU, is defined as follows:
\[PELU(x)=\begin{cases}\frac{\beta}{\gamma}x&\text{if }x\geq 0\\ \beta\times(exp(\frac{x}{\gamma})-1)&\text{otherwise,}\end{cases} \tag{1}\]
where \(\beta\) and \(\gamma\) are trainable parameters.
A flexible ReLU function has been proposed in [44]:
\[frelu(x)=ReLU(x+\alpha)+\beta, \tag{2}\]
where \(\alpha\) and \(\beta\) are parameters learned from data. This is done to capture negative information that is lost in the classic ReLU function [12].
The activation function introduced in [45] is another type of ReLU function that partially learns its shape from the training set. Indeed, it can modify the negative part of the data via the parameter \(\alpha\). This function is called Parametric ReLU (PReLU) and can be defined as follows:
\[PReLU(x)=\begin{cases}x&\text{if }x>0\\ \alpha\times x&\text{otherwise}\end{cases} \tag{3}\]
where the parameter \(\alpha\) is learned jointly with the model using a gradient method.
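For concreteness, the three parameterized units above, Eqs. (1)-(3), can be transcribed in a few lines of NumPy. The parameter values in the example call are arbitrary illustrations; in the cited works they are learned by backpropagation along with the network weights.

```python
import numpy as np

def pelu(x, beta, gamma):
    """Parametric ELU, Eq. (1): both beta and gamma are trainable."""
    return np.where(x >= 0, (beta / gamma) * x, beta * (np.exp(x / gamma) - 1.0))

def frelu(x, alpha, beta):
    """Flexible ReLU, Eq. (2): shifts the input and output of a plain ReLU."""
    return np.maximum(x + alpha, 0.0) + beta

def prelu(x, alpha):
    """Parametric ReLU, Eq. (3): learnable slope on the negative part."""
    return np.where(x > 0, x, alpha * x)

x = np.linspace(-3, 3, 7)
print(prelu(x, alpha=0.1))
```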
**Functions based on ensemble methods:**
Functions based on ensemble methods combine multiple basic activation functions to form a more complex one. In [46], a method for building activation functions as compositions of various basic activation functions is proposed. Similarly, [47] introduces a related method based on a genetic algorithm and proposes two new activation functions: Exponential Linear Sigmoid SquasHing (EliSH) and HardELiSH. EliSH is defined as follows:
\[EliSH(x)=\begin{cases}x/(1+e^{-x})&\text{if }x\geq 0\\ (e^{x}-1)/(1+e^{-x})&\text{otherwise.}\end{cases} \tag{4}\]
The negative part of EliSH is a multiplication of two functions, ELU and Sigmoid, while the positive part is shared with Swish. HardELiSH is defined as a multiplication of HardSigmoid and ELU in the negative portion and HardSigmoid and Linear in the positive part. In [48], another interesting activation function is proposed, called the Mexican Hat Linear Unit (MeLU). This activation function solves the problems of unstable learning related to trainable parameters. Unstable learning can cause a decrease in accuracy and an increase in generalization error when the model's performance varies significantly in response to slight changes in the data or parameters. Mexican hat-type functions have a smoother curve than ReLU, which prevents saturation and allows for optimal performance.
Let \(f\) be the function defined by
\[f_{\gamma,\lambda}(x)=max(\lambda-|x-\gamma|,0), \tag{5}\]
where \(\lambda\), \(\gamma\) are real numbers. This function returns zero when \(|x-\gamma|>\lambda\). Moreover, it increases with a derivative of 1 between \(\gamma-\lambda\) and \(\gamma\), then decreases using a derivative of -1 between \(\gamma\) and \(\gamma+\lambda\). MeLU is defined for each layer as
\[MeLU(x)=PReLU(x)+\sum_{j=1}^{k-1}c_{j}f_{\gamma_{j},\lambda_{j}}(x), \tag{6}\]
where \(k\) represents the number of parameters that can be learned in each neuron. The \(c_{j}\) parameters are learnable real numbers, while \(\gamma_{j}\), \(\lambda_{j}\) are fixed parameters.
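A direct NumPy transcription of Eqs. (5)-(6) might read as follows; the particular values of \(\gamma_{j}\) and \(\lambda_{j}\) below are illustrative choices, not the fixed grids used in [48].

```python
import numpy as np

def hat(x, gamma, lam):
    """Mexican-hat bump of Eq. (5): nonzero only on [gamma - lam, gamma + lam]."""
    return np.maximum(lam - np.abs(x - gamma), 0.0)

def melu(x, alpha, c, gammas, lams):
    """MeLU, Eq. (6): PReLU plus k-1 hat bumps with learnable coefficients c_j."""
    out = np.where(x > 0, x, alpha * x)  # PReLU term
    for c_j, g_j, l_j in zip(c, gammas, lams):
        out = out + c_j * hat(x, g_j, l_j)
    return out

x = np.linspace(-3, 3, 13)
print(melu(x, alpha=0.25, c=[0.5, -0.3], gammas=[0.0, 1.0], lams=[1.0, 0.5]))
```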
In [49], the authors introduced a new function called Adaptive Blending Units (ABU) as a trainable linear combination of a set of activation functions
\[ABU(x)=\sum_{i=1}^{k}\alpha_{i}\times f_{i}(x), \tag{7}\]
where (\(\alpha_{1}\), \(\alpha_{2}\),..., \(\alpha_{k}\)) are parameters to be learned, and \(\{f_{1}(\cdot),f_{2}(\cdot),\ldots,f_{k}(\cdot)\}\) is a set of activation functions. The parameters \(\alpha_{i}\) are all initialized to the value \(\frac{1}{k}\) and are trained using the gradient descent method.
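The blend of Eq. (7) is equally short to write down; the base set below is an arbitrary example, while the uniform initialization \(\alpha_{i}=1/k\) follows the description above.

```python
import numpy as np

def abu(x, alphas, funcs):
    """Adaptive Blending Unit, Eq. (7): trainable linear mix of base activations."""
    return sum(a * f(x) for a, f in zip(alphas, funcs))

funcs = [lambda x: np.maximum(x, 0.0), np.tanh, lambda x: x]  # example base set
alphas = np.full(len(funcs), 1.0 / len(funcs))                # initialized to 1/k
print(abu(np.linspace(-2, 2, 5), alphas, funcs))
```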
Similar methods were introduced recently, such as the kernel-based activation function [50, 51] and the trained activation function [52].
**Activation functions based on other techniques:**
In [42], a method using polynomial functions with adjustable coefficients is proposed. Similarly, in the context of fuzzy activation functions, a neural unit based on Type-2 fuzzy logic [53] was developed. Other works have proposed functions using interpolation and spline approaches [54]. However, depending on the chosen technique, these strategies may require additional input.
### _Comparison and analysis_
Learning activation functions is a popular topic in the field of machine learning because the performance of learning architectures can be improved by using more suitable activation functions. Various approaches have been explored to enhance these performances. Most trainable activation functions are variants of standard activation functions, whose shape is adjusted using trainable parameters. Several studies on trainable activation functions have reported substantial performance improvements compared to neural network architectures equipped with classic fixed activation functions such as ReLU or sigmoid.
Other trainable activation functions can be expressed as sub-networks nested within the main architectures or those based on different approaches than classical activation functions, such as fuzzy logic.
While these activation functions have significant potential to improve neural network model performance, their implementation can be more complex and require more time and resources for learning.
Despite encouraging results, it is still challenging to identify a strategy for automatically learning an activation function that would solve different problems and significantly improve performance. Most trainable activation functions use gradient descent for hyperparameter estimation. The main limitations of these techniques lie in the computation time and vanishing gradients, which prevent the network from learning deep features and may even waste processing capacity during training.
Indeed, neural networks can get stuck in local minima [55], which can harm the model's performance. This phenomenon is partly due to vanishing gradients, as derivatives shrink when the model deepens. This gradient decrease makes the model harder to optimize, leading to degraded performance.
## III Problem statement
In the previous section, we examined different activation functions presented in the literature. In this paper, we introduce a modification of the MeLU activation function [56] and integrate the estimation of its parameters into a global Bayesian optimization framework. The choice of this function is mainly justified by its promising performance, as well as by its form, which promotes non-linearity and sparsity. The main limitation of the MeLU function lies in its memory requirements: it may require a larger amount of memory than other activation functions, which can affect computational efficiency.
As a first contribution of this paper, the Modified Mexican ReLU (MMeLU) activation function is proposed to solve this complexity problem and improve model performance. The specificity of this new activation function is that it requires fewer parameters to estimate than MeLU. The second contribution is the integration of the parameter estimation of the MMeLU function into a Bayesian optimizer [57], rather than using a standard optimization procedure such as the ADAM optimizer.
To define MMeLU, let
\[f_{\gamma,b}(x)=max(b-|x-\gamma|,0), \tag{8}\]
where \(\gamma\) and \(b\) are real numbers.
Using \(f_{\gamma,b}\), the proposed activation function MMeLU can be defined as
\[\textit{MMeLU}(x)=c\times f_{\gamma,b}(x)+(1-c)\times\textit{ReLU}(x) \tag{9}\]
where \(c\) is a real number belonging to the interval \([0,1]\). For the proposed MMeLU function, \(c\), \(b\), and \(\gamma\) are the parameters to be estimated.
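To make Eqs. (8)-(9) concrete, a minimal Keras layer implementing MMeLU could look like the sketch below. The reparameterizations that keep \(c\in[0,1]\) and \(b\geq 0\), and the exposure of the three scalars as ordinary trainable weights, are our own illustrative choices: in this paper these parameters are instead estimated with the Gibbs sampler of Section V.

```python
import tensorflow as tf

class MMeLU(tf.keras.layers.Layer):
    """Modified Mexican ReLU, Eq. (9): c * f_{gamma,b}(x) + (1 - c) * ReLU(x)."""

    def build(self, input_shape):
        # One (c, gamma, b) triple per layer; initializers are illustrative guesses.
        self.c_raw = self.add_weight(name="c_raw", shape=(), initializer="zeros")
        self.gamma = self.add_weight(name="gamma", shape=(), initializer="zeros")
        self.b_raw = self.add_weight(name="b_raw", shape=(), initializer="ones")

    def call(self, x):
        c = tf.sigmoid(self.c_raw)      # constrains c to [0, 1]
        b = tf.nn.softplus(self.b_raw)  # constrains b to be non-negative
        hat = tf.maximum(b - tf.abs(x - self.gamma), 0.0)  # f_{gamma,b}(x), Eq. (8)
        return c * hat + (1.0 - c) * tf.nn.relu(x)

# Usage: insert after each convolutional layer, e.g.
# model.add(tf.keras.layers.Conv2D(32, 3)); model.add(MMeLU())
```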
The shapes of the proposed MMeLU function and of the competing functions ReLU, FReLU, PReLU, and MeLU are illustrated in Figure 1. The activation function proposed in this paper is a mixture of the ReLU and Mexican hat functions. The curves in Figure 1(a)-(d) clearly show the flexibility and non-linearity of our MMeLU function with different configurations of the parameters \(\gamma\), \(b\), and \(c\).
The Mexican hat function is often used as an activation function in neural networks due to its advantages. Mexican hat functions are continuous, which means that changes in inputs produce continuous changes in outputs. This is important in neural networks because it allows a gradual update of weights. Additionally, they have high representation capacity, which means that they can accurately model complex functions. This allows neural networks to learn non-linear relationships between inputs and outputs.
The Mexican hat function looks like a bell but with a peak in the center that makes it more pronounced for small values, as shown by the MMeLU curves in Figure 1. This means that for small input values, the Mexican hat function will have a stronger response than other activation functions, such as ReLU or FReLU. This behavior can be useful in certain situations, for example when the input data has a restricted value range or when the neural network's response needs to be more sensitive to small variations in the input data.
Let us now consider the convolutional neural network in Figure 2. The MMeLU activation function is applied after each convolutional layer, replacing the _ReLU_ function. This strategy has the potential to increase the non-linearity of the model and improve its ability to represent features in images. It is worth noting that when \(c\) is estimated close to 0 (\(c\sim 0\)), the MMeLU function tends to the ReLU behavior.
Regarding the model fitting, let us assume that the estimated label (or numerical value) is obtained by applying the proposed activation function MMeLU(x,\(W\)), where **x** is the input data and \(W\in\mathbb{R}^{N}\) denotes the weights vector. The model parameters (weight vector and parameters of the activation function) can be determined during the training phase using a generic error function \(D\) (Euclidean, Minkowski, etc.). Akin to [21], the target CNN is assumed to be sparse. We also use the same Bayesian formulation of the optimization problem. For the \(M\) input data, we can write
\[\widehat{W}=\arg\min_{W}\mathcal{L}(W)=\arg\min_{W,b,\gamma,c}\sum_{m=1}^{M}D\left(MMeLU(x^{m};W,c,\gamma,b)-y^{m}\right)+\sum_{l=1}^{L}\lambda_{l}\|W^{l}\|_{1}, \tag{10}\]
where \(y^{m}\) is the ground truth for input data \(x^{m}\), \(L\) is the number of layers in the network, and \(\lambda_{l}\) is a regularization parameter to be estimated for layer \(l\) that balances the solution between the data attachment term and the \(\ell_{1}\) sparse regularization terms.
In the following Section, we formulate the adopted hierarchical Bayesian model to conduct the inference and fit the model weights and the activation function parameters.
## IV Hierarchical Bayesian Model
The problem of estimating the parameters of the MMeLU activation function is formulated within a Bayesian framework. In this sense, all parameters and hyperparameters are supposed to follow probability distributions. A likelihood is defined to model the relationship between the target weight vector, the activation function parameters, and the data. A prior distribution is defined to model the prior knowledge about the target weights and all activation function parameters.
### _Likelihood_
Following the principle of minimizing the error between the reference vector **y** (labels or continuous values) and its estimate \(\widehat{\textbf{y}}\), we define the likelihood distribution as
Fig. 1: Illustrations of ReLU, FReLU, PReLU, MeLU, and MMeLU curves with different configurations.
\[f(\mathbf{y},\mathbf{x};W,c,\gamma,b)\propto\prod_{m=1}^{M}\exp\left[-D\left(MMeLU(x^{m};W,c,\gamma,b)-y^{m}\right)\right]. \tag{11}\]
It is worth noting that when a Euclidean distance is used for \(D\), the adopted likelihood is nothing but a Gaussian distribution.
### _Priors_
In our model, the unknown parameters are grouped in the unknown vector \(\theta\) = {\(W\), \(c\), \(\gamma\), \(b\), \(\lambda\)}, where \(\lambda=\{\lambda_{1},\ldots,\lambda_{L}\}\).
**Prior for \(W\):**
To promote sparsity in the neural network, we use a Laplace distribution for the weight vector \(W\) akin to [21]:
\[f(W;\lambda)\propto\prod_{l=1}^{L}\prod_{k=1}^{K_{l}}\left[\frac{1}{\lambda_ {l}}\exp\left(-\frac{|W_{k}^{l}|}{\lambda_{l}}\right)\right], \tag{12}\]
where \(K_{l}\) is the number of weights in layer \(l\) of the network and \(\lambda_{l}\) is a parameter to be estimated.
**Prior for \(\lambda_{l}\):**
Since \(\lambda_{l}\in\mathbb{R}_{+}\), we chose to use an inverse gamma (IG) distribution:
\[f(\lambda_{l};\delta,\mu)=IG(\lambda_{l};\delta,\mu)\propto(\lambda_{l})^{-1- \delta}\exp\left(-\frac{\mu}{\lambda_{l}}\right), \tag{13}\]
where \(\delta\) and \(\mu\) are positive parameters that were fixed at \(10^{-3}\) to have a non-informative prior.
**Prior for \(c\):**
Regarding the parameter \(c\), we consider a uniform distribution over the interval \([0,1]\), denoted as
\[c\sim U_{[0,1]}(c). \tag{14}\]
**Prior for \(\gamma\) :**
Since \(\gamma\) is a real value, a Gaussian distribution is used as follows
\[f(\gamma;\sigma^{2})=\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{\gamma^{2 }}{2\sigma^{2}}\right), \tag{15}\]
where \(\sigma^{2}\) is a hyperparameter to be estimated.
**Prior for \(b\):**
Since \(b\) is a positive real number, an exponential distribution is used as follows:
\[f(b;\lambda_{b})\propto\begin{cases}\frac{1}{\lambda_{b}}\exp\left(-\frac{b}{\lambda_{b}}\right)&\text{if }b\geq 0\\ 0&\text{otherwise},\end{cases} \tag{16}\]
where \(\lambda_{b}\) is a hyperparameter to be estimated. This prior penalizes large values of \(b\).
### _Hyperpriors_
Since \(\lambda_{b}\) and \(\sigma^{2}\) are positive real numbers, an inverse gamma (IG) distribution was used as a hyper-_a priori_:
\[f(\lambda_{b};\delta,\mu)=IG(\lambda_{b};\delta,\mu)\propto(\lambda_{b})^{-1- \delta}\exp\left(-\frac{\mu}{\lambda_{b}}\right) \tag{17}\]
and
\[f(\sigma^{2};\delta,\mu)=IG(\sigma^{2};\delta,\mu)\propto(\sigma^{2})^{-1- \delta}\exp\left(-\frac{\mu}{\sigma^{2}}\right), \tag{18}\]
where \(\delta\) and \(\mu\) are positive parameters that were fixed at \(10^{-3}\).
Fig. 2: A general diagram of a convolutional neural network with the MMeLU activation function.
## V Inference scheme
By adopting a Maximum _a Posteriori_ (MAP) approach, we first need to express the posterior distribution. Let \(\Phi_{e}\) be the hyperparameters to be estimated, represented by \(\Phi_{e}\) = \(\{\sigma^{2},\)\(\lambda_{b}\}\), and \(\Phi_{m}\) be the hyperparameters to be fixed, \(\Phi_{m}\) = \(\{\delta,\)\(\mu\}\). Using the likelihood, the prior distributions, and the defined hyperpriors, we can write the posterior distribution as:
\[f(\theta,\Phi_{e};y,\Phi_{m})\propto f(y;\theta)f(\theta;\Phi_{ e})f(\Phi_{e};\Phi_{m}) \tag{19}\]
which can be reformulated in a detailed version as
\[\begin{split} f(\theta,\Phi_{e};\mathbf{y},\mathbf{x},\Phi_{m})\propto&\prod_{m=1}^{M}\exp\left[-D\left(MMeLU(x^{m};W,c,\gamma,b)-y^{m}\right)\right]\times\\ &\prod_{l=1}^{L}\frac{1}{\lambda_{l}^{K_{l}}}\prod_{k=1}^{K_{l}}\left[\exp\left(-\frac{|W_{k}^{l}|}{\lambda_{l}}\right)\right]\times(\lambda_{l})^{-1-\delta}\exp\left(-\frac{\mu}{\lambda_{l}}\right)\times\\ &\exp\left(-\frac{\gamma^{2}}{2\sigma^{2}}\right)\times\frac{1}{\lambda_{b}}\exp\left(-\frac{b}{\lambda_{b}}\right)1_{\mathbb{R}_{+}}(b)\times 1_{[0,1]}(c)\times\\ &(\lambda_{b})^{-1-\delta}\exp\left(-\frac{\mu}{\lambda_{b}}\right)\times(\sigma^{2})^{-1-\delta}\exp\left(-\frac{\mu}{\sigma^{2}}\right).\end{split} \tag{20}\]
One can clearly notice that the posterior in (20) is too complicated to derive closed-form estimators. We therefore resort to numerical approximations using a Markov Chain Monte Carlo (MCMC) technique [17, 58]. Specifically, we use a Gibbs sampler to sequentially sample from the conditional posteriors. To calculate the conditional distribution associated with each parameter of the model, one needs to integrate the joint posterior distribution in (20) with respect to all the other parameters.
Regarding the parameter \(W\), calculations based on (20) lead to the following form:
\[f(W;c,\gamma,b,\lambda)\propto\exp\left[-\sum_{l=1}^{L}\sum_{k=1}^{K_{l}}\frac{|W_{k}^{l}|}{\lambda_{l}}\right]\times\exp\left[-\sum_{m=1}^{M}D\left(MMeLU(x^{m};W,c,\gamma,b)-y^{m}\right)\right]. \tag{21}\]
The conditional distribution for the parameter \(c\) is given by:
\[f(c;W,b,\gamma)\propto 1_{[0,1]}(c)\times\exp\left[-\sum_{m=1}^{M}D\left(MMeLU(x^{m};W,c,\gamma,b)-y^{m}\right)\right]. \tag{22}\]
For the parameter \(b\), the condition distribution is given by:
\[f(b;W,c,\gamma,\lambda_{b})\propto\exp\left(-\frac{b}{\lambda_{b}}\right)\times\exp\left[-\sum_{m=1}^{M}D\left(MMeLU(x^{m};W,c,\gamma,b)-y^{m}\right)\right]. \tag{23}\]
As regards \(\gamma\), the conditional distribution writes:
\[f(\gamma;W,b,c,\sigma^{2})\propto\exp\left(-\frac{\gamma^{2}}{ 2\sigma^{2}}\right)\times\] \[\exp\left[-\sum_{m=1}^{M}\left(D(MMeLU(x^{m};W,c,\gamma,b)-y^{m })\right)\right]. \tag{24}\]
The conditional distribution for the parameter \(\lambda_{l}\) is given by:
\[f(\lambda_{l};W,\delta,\mu)\propto\lambda_{l}^{-1-(\delta+K_{l})}\exp\left(-\frac{\mu+\sum_{k=1}^{K_{l}}|W_{k}^{l}|}{\lambda_{l}}\right)\propto IG\left(\delta+K_{l},\;\mu+\sum_{k=1}^{K_{l}}|W_{k}^{l}|\right). \tag{25}\]
For the hyperparameter vector \(\Phi_{e}\), it is necessary to calculate the conditional distributions from which it is possible to sample based on the likelihood and adopted priors.
The conditional distribution for the hyperparameter \(\lambda_{b}\) is given by:
\[f(\lambda_{b};b,\mu,\delta)\propto\lambda_{b}^{-2-\delta}\exp\left(-\frac{b+\mu}{\lambda_{b}}\right)\propto IG(\delta+1,\;b+\mu). \tag{26}\]
The conditional distribution for the hyperparameter \(\sigma^{2}\) is given by:
\[f(\sigma^{2};\mu,\gamma,\delta)\propto(\sigma^{2})^{-1-\delta}\exp\left(-\frac{\gamma^{2}+2\mu}{2\sigma^{2}}\right)\propto IG\left(\delta,\;\mu+\frac{\gamma^{2}}{2}\right). \tag{27}\]
The sampling scheme is summarized in Algorithm 1, where the model weights \(W\) and the parameters of the proposed MMeLU function are sampled.
```
Fix the hyperparameters \(\Phi_{m}\)
for \(r=1,\ldots,S\) do
    Sample \(c\) according to \(f(c;W,b,\gamma)\)
    Sample \(\gamma\) according to \(f(\gamma;W,b,c,\sigma^{2})\)
    Sample \(b\) according to \(f(b;W,c,\gamma,\lambda_{b})\)
    Sample \(\sigma^{2}\) according to \(f(\sigma^{2};\mu,\gamma,\delta)\)
    Sample \(\lambda_{b}\) according to \(f(\lambda_{b};b,\mu,\delta)\)
    Sample \(\lambda_{l}\) according to \(f(\lambda_{l};W,\delta,\mu)\) for all \(l\in\{1,\ldots,L\}\)
    Sample \(W\) as described in [22]
end for
```
**Algorithm 1:** Gibbs sampler for the proposed method.
In Algorithm 1, \(S\) denotes the number of MCMC sampling iterations. After the burn-in period, the sampled coefficients are used to calculate the estimators \(\widehat{W}\), \(\widehat{c}\), \(\widehat{b}\), \(\widehat{\gamma}\), in addition to \(\widehat{\sigma^{2}}\), \(\widehat{\lambda}\) and \(\widehat{\lambda_{b}}\).
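For illustration, the scalar updates of Algorithm 1 can be sketched on a toy one-dimensional problem, as below. The synthetic data and the random-walk Metropolis moves for \(c\), \(\gamma\), and \(b\) are our own simplifications (the weights \(W\) are omitted; the paper samples them with the non-smooth Hamiltonian scheme of [22]), while the inverse-gamma draws follow Eqs. (26)-(27).

```python
import numpy as np

rng = np.random.default_rng(0)

def mmelu(x, c, gamma, b):
    return c * np.maximum(b - np.abs(x - gamma), 0.0) + (1.0 - c) * np.maximum(x, 0.0)

# Toy 1-D data: the "network" is the activation itself (no weights W), so this
# sketch only exercises the c / gamma / b / hyperparameter updates.
x = rng.uniform(-2.0, 2.0, 200)
y = mmelu(x, 0.4, 0.3, 1.2) + 0.05 * rng.normal(size=x.size)

def energy(c, gamma, b, sigma2, lam_b):
    """Negative log conditional posterior (up to constants), Euclidean D."""
    if not (0.0 <= c <= 1.0 and b >= 0.0):
        return np.inf
    fit = np.sum((mmelu(x, c, gamma, b) - y) ** 2)
    return fit + gamma ** 2 / (2.0 * sigma2) + b / lam_b

delta = mu = 1e-3
theta = {"c": 0.5, "gamma": 0.0, "b": 1.0}
sigma2, lam_b = 1.0, 1.0
for r in range(5000):
    for name in ("c", "gamma", "b"):  # Metropolis-within-Gibbs scalar moves
        prop = dict(theta)
        prop[name] = theta[name] + 0.05 * rng.normal()
        if np.log(rng.uniform()) < (energy(**theta, sigma2=sigma2, lam_b=lam_b)
                                    - energy(**prop, sigma2=sigma2, lam_b=lam_b)):
            theta = prop
    # Closed-form inverse-gamma draws, Eqs. (26)-(27): IG(a, s) = 1 / Gamma(a, 1/s);
    # the max() guards against numerical underflow for the tiny shape parameter.
    lam_b = 1.0 / max(rng.gamma(delta + 1.0, 1.0 / (theta["b"] + mu)), 1e-300)
    sigma2 = 1.0 / max(rng.gamma(delta, 1.0 / (mu + theta["gamma"] ** 2 / 2.0)), 1e-300)
print(theta)  # should be close to (0.4, 0.3, 1.2)
```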
## VI Experimental validation
In order to validate the proposed method, three image classification experiments are conducted on different datasets: a COVID-19 dataset of computed tomography (CT) images posing a challenging classification task [59], and two standard datasets, namely Fashion-MNIST [60] and CIFAR-10 [61]. Table I gives the setting details of the different datasets.
To compare the proposed method with the state of the art, two types of optimizers are used: _i)_ our previous Bayesian optimizer based on non-smooth Hamiltonian methods, described in [22], with the standard ReLU activation function, and _ii)_ the standard Adam optimizer (with a learning rate of \(10^{-3}\)) with eight of the most well-known activation functions: ReLU, LReLU, ELU, PReLU, SELU [62], Swish, FReLU, and MeLU. As regards implementation, we used the Python programming language with the Keras and TensorFlow libraries on an Intel(R) Core(TM) i7-2720QM CPU at 2.20 GHz with 16 GB of memory.
### _ConvNet models_
In this work, three CNN architectures are utilized. Similar to the LeNet model [63], the first one has three convolutional layers (Conv-32, Conv-64, and Conv-128) and two fully-connected layers (FC-64 and FC-softmax). The second architecture has nine convolutional layers (3\(\times\)Conv-32, 3\(\times\)Conv-64, and 3\(\times\)Conv-128) and three FC layers (FC-128, FC-64, and FC-softmax), organized similarly to VGG-Net [64]. These architectures are shown in Table II. The third model is a deeper CNN with 25 convolutional layers and 4 FC layers (for more information, see Section VI-F). Each one uses convolutional layers with a stride size of 1 and \(3\times 3\) filters, in addition to \(2\times 2\) max-pooling.
Since deep neural networks can easily overfit when trained on small datasets, the architectures are augmented with three regularization strategies: batch normalization [65], \(\ell_{1}\) regularization [66], and dropout [67].
We chose different architecture depths in the experiments mainly to test the ability of our proposed method to achieve good performance even with shallow models, since training on large and complex data can be expensive.
### _Sampling Results_
After using the proposed Bayesian optimization method to train the CNN models detailed above for the classification of Covid-19 CT images, we analyzed the convergence behavior. Figure 3 presents the sampling chains for the \(\gamma\), \(b\), and \(c\) parameters of the proposed MMeLU function (a-c), as well as the histograms of the corresponding samples (d-f). The sampling chains and histograms of the sampled coefficients confirm the good convergence properties of the designed Gibbs sampler. After a burn-in period of 350 iterations, the algorithm achieves stable convergence and exhibits a good mixing rate of the sampled chains.
Fig. 3: Sampling of parameters \(c\) (a,b), \(b\) (c,d), and \(\gamma\) (e,f): chains and histograms.
### _Experiment 1: COVID-19 classification using CT images_
This section examines the effectiveness of our approach in distinguishing Covid-19 infections from other pneumonia in CT data. The COVID-CT dataset1 includes 397 images that are negative for COVID-19 and 349 images that are positive, belonging to 216 patients. We used 566 images for training and 180 images for testing.
Footnote 1: [https://www.kaggle.com/luisblanche/covidct](https://www.kaggle.com/luisblanche/covidct)
Table III presents compelling evidence that the proposed Bayesian method outperforms all other activation functions for both the \(CNN_{1}\) and \(CNN_{2}\) architectures. Moreover, except with respect to "ReLU + [22]", the computational time for convergence is shorter than with all the other activation functions. With respect to our previous work ("ReLU + [22]"), it is worth noting that the proposed method performs better in terms of accuracy and loss value, which confirms the usefulness of the trainable activation function. However, due to the use of additional parameters, the proposed method is approximately 15\(\%\) slower. These conclusions hold for both \(CNN_{1}\) and \(CNN_{2}\).
Notably, when regularization is employed, a considerable drop in performance is observed for all competing activation functions, which is largely attributable to the inherent difficulty of classifying CT images due to their content richness and the similarities between Covid-19 infection and other types of pneumonia.
The learning and test curves (accuracy and loss) illustrated in Figures 4 and 5 confirm the good behavior of the proposed method, which is not necessarily the case for the other competing models, where a marked gap between the accuracy and loss curves can be noticed. For example, while the LReLU function introduces a negative slope that suppresses excessive activations, an inappropriate value can lead to underfitting. Similarly, although the ELU function allows negative activations, its exponential form can lead to an explosion of the activation value for large values of \(x\).
The Swish function is known to accelerate learning convergence, but it can also lead to overfitting by being more sensitive to outliers. Likewise, although the FReLU function can capture complex data patterns, it can also suffer from overfitting if the parameters are not well chosen.
These remarkable differences confirm the interest and efficiency of our MMeLU function, which outperforms all competing activation functions in terms of accuracy and robustness to regularization.
### _Experiment 2: Fashion-MNIST image classification_
The learning performance of the competing activation functions is assessed in this scenario using the standard _Fashion-MNIST_ dataset, which contains a training set of 60,000 images and a test set of 10,000 images. Each example is a \(28\times 28\) grayscale image paired with a label from one of ten classes, with 7,000 images per class. For model training, we used 48,000 images for the training set and 12,000 for the test set.
Table IV reports the results obtained for the _Fashion-MNIST_ dataset. Our proposed method outperformed all other competing activation functions, achieving an accuracy of at least 93% for both CNN models. Additionally, Table IV shows that the processing time of the competing methods exceeds 150 minutes for \(CNN_{1}\), while only about 112 minutes are required for our method. Similar conclusions hold for \(CNN_{2}\).
Moreover, our proposed MMeLU function showed similar global performance compared to "ReLU + [22]" for the two CNN models. One more advantage of MMeLU is its fully automatic estimation of parameters, which is not the case for ReLU + [22], where some parameters need to be manually set. Furthermore, our proposed method has demonstrated its ability to achieve remarkable performances in challenging cases, as shown in the first experiment.
### _Experiment 3: CIFAR-10 image classification_
In this experiment, the learning performance of the proposed model (made up of the trainable activation function and the Bayesian sparse optimization) is evaluated using the standard _CIFAR-10_ dataset. The CIFAR-10 dataset contains 60000 \(32\times 32\) color images divided into 10 classes with 6000 images per class. There are 50,000 training and 10,000 test images.
Table V presents the classification results for the _CIFAR-10_ dataset. It can be observed that the proposed Bayesian model performed well overall, even when multiple classes were used. In contrast, all competing activation functions (with Adam optimizer) had similar accuracy, around 88%, with a
loss rate almost double that of the proposed model for both architectures.
Moreover, the MMeLU activation function takes significantly less time for training than other activation functions + Adam, which often take a long time before convergence. For the competing functions when used with Adam, we noticed similar performance between the trainable and standard activation functions, with a slight superiority observed with the trainable function ELU.
The same conclusions can be drawn by examining the results of "ReLU + [22]" on this dataset. These findings suggest that the proposed MMeLU function provides better flexibility for the activation task, even in the multi-class case.
### _Experiment 4: Comparison on Deep CNN_
This section investigates the performance of our algorithm on the standard Fashion-MNIST dataset using a deep CNN, considerably deeper than \(CNN_{1}\) and \(CNN_{2}\). It has 25 convolutional layers (5\(\times\)Conv3x3-32, 5\(\times\)Conv3x3-64, 5\(\times\)Conv3x3-128, 5\(\times\)Conv3x3-256, and 5\(\times\)Conv3x3-512) and four fully-connected layers (FC-512, FC-256, FC-128, and FC-softmax). All of them use convolutional layers with \(3\times 3\)
Fig. 4: Experiment 1: Train and test curves using \(CNN_{1}\) for all competing activation functions.
kernel filters as well as \(2\times 2\) max-pooling and stride size of 1.
The results in Table VI demonstrate that our proposed MMeLU method maintains good performance on this deep architecture. The same conclusions can be drawn as in the previous experiments regarding the superiority of MMeLU over competing activation functions, including our previous work in [22]. Interestingly, in addition to better scores compared to [22], the proposed method has a convergence time that is only 2\(\%\) slower. It turns out that as the depth of the network (and hence the number of parameters) increases, the additional cost of estimating the activation function parameters is no longer significant, while better performance is reached.
## VII Conclusion
In this paper, we proposed a Bayesian approach for training sparse deep neural networks with trainable activation functions. Our method learns the weights and parameters of the proposed activation function directly from the data without any user configuration. Compared to competing algorithms, our
Fig. 5: Experiment 1: Train and test curves using \(CNN_{2}\) for all competing activation functions.
approach achieves promising results with better classification accuracy and high generalization performance, while also being computationally efficient, particularly in challenging cases. Our experiments on standard datasets such as Fashion-MNIST and CIFAR-10 demonstrate the precision and stability of our approach, as well as its general applicability.
Future work will focus on parallelizing the algorithm to enable GPU calculations and further reduce computational time.
|
2306.01354 | Deep recurrent spiking neural networks capture both static and dynamic
representations of the visual cortex under movie stimuli | In the real world, visual stimuli received by the biological visual system
are predominantly dynamic rather than static. A better understanding of how the
visual cortex represents movie stimuli could provide deeper insight into the
information processing mechanisms of the visual system. Although some progress
has been made in modeling neural responses to natural movies with deep neural
networks, the visual representations of static and dynamic information under
such time-series visual stimuli remain to be further explored. In this work,
considering abundant recurrent connections in the mouse visual system, we
design a recurrent module based on the hierarchy of the mouse cortex and add it
into Deep Spiking Neural Networks, which have been demonstrated to be a more
compelling computational model for the visual cortex. Using Time-Series
Representational Similarity Analysis, we measure the representational
similarity between networks and mouse cortical regions under natural movie
stimuli. Subsequently, we conduct a comparison of the representational
similarity across recurrent/feedforward networks and image/video training
tasks. Trained on the video action recognition task, recurrent SNN achieves the
highest representational similarity and significantly outperforms feedforward
SNN trained on the same task by 15% and the recurrent SNN trained on the image
classification task by 8%. We investigate how static and dynamic
representations of SNNs influence the similarity, as a way to explain the
importance of these two forms of representations in biological neural coding.
Taken together, our work is the first to apply deep recurrent SNNs to model the
mouse visual cortex under movie stimuli and we establish that these networks
are competent to capture both static and dynamic representations and make
contributions to understanding the movie information processing mechanisms of
the visual cortex. | Liwei Huang, ZhengYu Ma, Huihui Zhou, Yonghong Tian | 2023-06-02T08:25:58Z | http://arxiv.org/abs/2306.01354v1 | # Deep recurrent spiking neural networks capture both static and dynamic representations of the visual cortex under movie stimuli
###### Abstract
In the real world, visual stimuli received by the biological visual system are predominantly dynamic rather than static. A better understanding of how the visual cortex represents movie stimuli could provide deeper insight into the information processing mechanisms of the visual system. Although some progress has been made in modeling neural responses to natural movies with deep neural networks, the visual representations of static and dynamic information under such time-series visual stimuli remain to be further explored. In this work, considering abundant recurrent connections in the mouse visual system, we design a recurrent module based on the hierarchy of the mouse cortex and add it into Deep Spiking Neural Networks (SNNs), which have been demonstrated to be a more compelling computational model for the visual cortex. Using a similarity metric, Time-Series Representational Similarity Analysis (TSRSA), we measure the representational similarity between networks and mouse cortical regions under natural movie stimuli. Subsequently, we conduct a comparison of the representational similarity across recurrent/feedforward networks and image/video training tasks. Trained on the video action recognition task, the recurrent SNN achieves the highest representational similarity and significantly outperforms the feedforward SNN trained on the same task by 15% and the recurrent SNN trained on the image classification task by 8%. Based on the similarity experiments, we further investigate how static and dynamic representations of SNNs influence the similarity, as a way to explain the importance of these two forms of representations in biological neural coding. Taken together, our work is the first to apply deep recurrent SNNs to model the mouse visual cortex under movie stimuli and we establish that these networks are competent to capture both static and dynamic representations and make contributions to understanding the movie information processing mechanisms of the visual cortex.
## 1 Introduction
When observing the natural world, the biological visual system receives not only static but also dynamic information and integrates this information in both spatial [22; 23] and temporal [19] dimensions to encode and transmit information and perform visual tasks. Although bottom-up (feedforward) connections dominate the processing and transmission of visual information [13; 9], top-down (feedback) and lateral connections, which are also widespread in the biological visual cortex, play a crucial role [16; 31; 47]. Recurrences in the visual cortex not only provide diverse coding mechanisms for the visual system and enrich the temporal representation protocol [51; 14; 24; 52], but also facilitate the performance in complex and challenging visual tasks [18; 49; 55; 56].
Deep neural networks have become the tool of choice in the visual neuroscience community [28; 32; 58], compared with traditional computational models. They have shown significant utility in investigating various aspects of the biological visual cortex, such as rivaling the neural representations [59; 5; 4; 7; 6; 40], revealing the functional hierarchy [17; 44; 8; 54; 45], and understanding the processing mechanisms [1; 10]. As research has progressed, more brain-inspired structures and mechanisms have been introduced into the neural networks to better model the visual cortex. On the one hand, incorporating recurrences such as lateral and feedback connections in networks effectively improves the ability of networks to capture more brain-like representations and behavioral patterns [39; 35; 29; 41]. On the other hand, spiking neural networks [37] with computational mechanisms more similar to the brain have been developed as a more biologically plausible alternative [20; 15; 25; 3]. [21] has demonstrated that deep SNNs outperform their counterparts of traditional ANNs on several neural datasets.
In this work, we combine deep SNNs and recurrences to exploit the biological potential of SNNs and design a series of experiments to compare different deep SNNs based on representational similarity, aiming to elucidate the importance of static and dynamic representations of the visual cortex under movie stimuli (Figure 1). We summarize our main contributions in four points as follows.
* We design a recurrent module for deep SNNs inspired by certain properties of SNN and the functional hierarchy of the mouse visual cortex. We incorporate this recurrent module into SEW-ResNet [12] and pretrain it on the UCF101 dataset. Notably, this network achieves the highest level of representational similarity to the mouse visual cortex under movie stimuli.
* By quantifying the effect of dynamic representations on representational similarity, we demonstrate that the recurrent module largely stimulates the ability of SNNs to extract temporal features and capture the dynamic representations of the mouse visual cortex.
* By quantifying the effect of static representations on similarity, we reveal that the recurrent SNN also captures static representations of the mouse visual cortex, and its ability to represent dynamic information improves its robustness to disparate static information.
* In the two quantification experiments above, the representational similarity drops significantly in the case that either static or dynamic information is corrupted, which shows that both static and dynamic representations are important components of the neuronal population responses in the mouse visual cortex.
Overall, to the best of our knowledge, we are the first to use deep recurrent SNNs to investigate the representations of the mouse visual cortex under movie stimuli. We provide computational evidence for the importance of static and dynamic representations in the mouse visual cortex, and that the recurrent SNNs we design are capable of representing both types of information well and become an effective and novel computational model for revealing the information processing mechanisms of the visual system.
## 2 Related Work
**Modeling the visual cortex with recurrent ANNs** Since recurrent connections are critical structures in the brain, [26; 27] provided physiological and computational evidence that recurrences contribute to the temporal properties of visual coding. Either by automated search or by manual design, some work has constructed novel networks with recurrent connections [39; 35; 29; 53], which not only better fit the neural responses to static image stimuli but also reveal the neural dynamics of the visual cortex. Moreover, for movie stimuli, [46; 48] accurately predicted the neural representations using recurrent neural networks. However, most work has focused on investigating neural representations to static stimuli, a limited aspect of naturalistic stimuli; the exploration of static and dynamic representations under movie stimuli is still scarce. Our work is dedicated to the analysis of static and dynamic representations in networks to shed light on visual processing mechanisms.
**SNNs with recurrent connections** Early studies mostly applied spiking neurons and recurrent connections in single-layer networks to perform simple temporal tasks [30; 2]. Recently, [60; 42] introduced diverse recurrent connections within layers and within spiking neurons in multi-layer networks, yielding considerable performance on sequential tasks. Moreover, recurrent SNNs have also been used to explain the regimes in the brain associated with cognition and working memory [50; 57]. With recurrent SNN models, [36] emphasized the criticality of homeostatic regulation in biological neurons. Although these recurrent SNN models have contributed substantially to the study of the brain, they have been limited to shallow or even single-layer networks while focusing on local and detailed biological properties. In contrast, our work is the first to add long-term feedback connections to deep SNNs and provide a comprehensive comparison with large-scale neuronal population representations.
## 3 Methods
### Neural dataset
In this work, we conduct our analysis using a subset of the Allen Brain Observatory Visual Coding dataset [47]. This dataset, recorded with Neuropixels probes, consists of neural spikes with high temporal resolution from 6 mouse visual cortical regions (VISp, VISl, VISrl, VISal, VISpm, VISam). Each cortical region contains hundreds of recorded neurons to minimize the effects of individual neuronal variability, facilitating the analysis of neural population representations. The visual stimulus presented to the mice is a 30-second natural movie with 900 frames, repeated 20 times. For each neuron, we sum the number of spikes in each movie frame and take the average across all repeats as the neural response. Moreover, we exclude neurons that fire fewer than 0.5 spikes/s.
### Deep spiking neural networks with recurrent connections
Although deep SNNs achieve higher representational similarity to biological visual neural responses compared to traditional ANNs, the experiments are all based on static stimuli [21]. Consequently, in order to better match the representations of the mouse visual cortex under movie stimuli, we introduce feedback connections in deep SNNs to enhance their temporal encoding capability and design a recurrent module suitable for them based on some hierarchical properties of the mouse cortex.
The recurrent module consists of three components: a _feedforward_ module (\(O_{t}^{l}\)), a _feedback_ module (\(R_{t}^{l}\)), and a _fusion_ module (\(A_{t}^{l}\)). The _feedforward_ module is a submodule of the backbone network that plays a main role in abstracting spatial features from visual stimuli and encoding the visual content. Unlike feedforward networks, where the submodule receives the outputs of the previous stage directly, the _feedforward_ module receives the fused features of the outputs from the _feedback_ module and the previous stage. The _feedback_ module is composed of depthwise transposed convolution, batch normalization, and spiking neurons. On the one hand, depthwise transposed convolution effectively
Figure 1: The overview of our experiments. Six mouse visual cortex and recurrent spiking neural networks receive the same movie stimuli to generate the representation matrices. The metric TSRSA is applied to two representation matrices to quantify the representational similarity. In addition, two more experiments are used to quantify the effect of static and dynamic representations on similarity.
reduces the number of network parameters and upsamples the feature map to match the inputs. On the other hand, some work has demonstrated that such a structure may mimic parallel information processing streams in mouse cortical regions and improve the representational similarity between the model and the visual cortex [21]. The _fusion_ module first concatenates the inputs of the current module and the outputs of the _feedback_ module along the channel dimension, and then integrates the feedforward and feedback information through pointwise convolution, batch normalization, and spiking neurons. The recurrent module can be formulated as:
\[R_{t}^{l}=\mathrm{SN}(\mathrm{BN}(\mathrm{DW}(O_{t-1}^{l}))), \tag{1}\]
\[A_{t}^{l}=\mathrm{SN}(\mathrm{BN}(\mathrm{PW}(\mathrm{CONCAT}(I_{t}^{l},R_{t}^{l})))), \tag{2}\]
\[O_{t}^{l}=\mathrm{F}^{l}(A_{t}^{l}), \tag{3}\]
where \(\mathrm{SN}\) is spiking neurons, \(\mathrm{BN}\) is batch normalization, \(\mathrm{DW}\) is depthwise transposed convolution, \(\mathrm{PW}\) is pointwise convolution, and \(\mathrm{F}^{l}\) denotes all operations in the _feedforward_ module. \(O_{t}^{l}\) is not only the outputs of _feedforward_ module, but also the outputs of the entire recurrent module.
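To make the module concrete, below is a minimal PyTorch sketch of Eqs. (1)-(3). It is an illustrative reconstruction rather than the released implementation: the spiking neuron \(\mathrm{SN}\) is reduced to a stateless Heaviside threshold (no membrane dynamics or surrogate gradient), the feedback state is detached across time steps, and the channel and stride values are placeholders that must match the wrapped backbone stage.

```python
import torch
import torch.nn as nn

def spike(x, v_th=1.0):
    # simplified stand-in for SN: Heaviside threshold, omitting the membrane
    # dynamics and surrogate gradients used by real spiking neurons
    return (x >= v_th).float()

class RecurrentModule(nn.Module):
    # wraps one backbone stage F^l with the feedback/fusion loop of Eqs. (1)-(3)
    def __init__(self, stage, c_in, c_out, stride=2):
        super().__init__()
        self.stage = stage                                          # F^l
        self.dw = nn.ConvTranspose2d(c_out, c_out, kernel_size=stride,
                                     stride=stride, groups=c_out)   # DW: upsample O_{t-1}^l
        self.bn_r = nn.BatchNorm2d(c_out)
        self.pw = nn.Conv2d(c_in + c_out, c_in, kernel_size=1)      # PW: fuse channels
        self.bn_a = nn.BatchNorm2d(c_in)
        self.o_prev = None                                          # O_{t-1}^l

    def forward(self, i_t):
        if self.o_prev is None:                                     # first time step: no feedback yet
            r_t = i_t.new_zeros(i_t.size(0), self.pw.in_channels - i_t.size(1),
                                i_t.size(2), i_t.size(3))
        else:
            r_t = spike(self.bn_r(self.dw(self.o_prev)))            # Eq. (1)
        a_t = spike(self.bn_a(self.pw(torch.cat([i_t, r_t], dim=1))))  # Eq. (2)
        self.o_prev = self.stage(a_t).detach()                      # Eq. (3); detaching truncates BPTT
        return self.o_prev
```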
In this work, we use SEW-ResNet18 [12] as the backbone of our networks and apply recurrent connections to each stage of SEW-ResNet. For controlled experiments, networks with and without recurrent connections are all pretrained on both the ImageNet dataset and the UCF101 dataset with SpikingJelly [11]. Specifically, for training on the object recognition task on ImageNet, each sample (an image) is input into the SNNs four times (simulating time steps \(T=4\)). For training on the video action recognition task on UCF101, each sample (a video clip) contains 16 frames and one frame is input at each time step (simulating time steps \(T=16\)).
In order to extract network representations for comparison with the mouse visual cortex, we feed the same movie used in the neural dataset to the pretrained SNNs and obtain features from all selected layers. It is worth noting that each movie frame is treated as a separate image and fed to the ImageNet-pretrained SNNs four times (consistent with the training procedure), while the entire movie is fed continuously and uninterruptedly to the UCF101-pretrained SNNs.
### Analysis of representation
#### 3.3.1 Representational similarity metric
To analyze the representational similarity between neural networks and the mouse visual cortex at the population level under time-series stimuli, we must address two main problems. Firstly, although the Allen Brain Observatory Visual Coding dataset is massive, the neurons recorded in a cortical region are far fewer than the units of a network layer's feature maps, making it difficult to
Figure 2: SEW-ResNet (left), R-SEW-ResNet (center), and the recurrent module (right).
directly compare the representations of the two systems. Secondly, we need a metric that not only analyzes static properties in representations, but also preserves the sequential relationships of time-series representations and analyzes dynamic properties. Here, we use Time-Series Representational Similarity Analysis (TSRSA), based on Representational Similarity Analysis (RSA) [34, 33], which has been widely used for the comparison of neural representations [29, 38, 44, 1, 7]. The original RSA focuses on the similarity between the neural representations to each pair of independent stimuli, whereas TSRSA quantifies the similarity among representations to time-series stimuli, taking temporal sequential relationships into account. We summarize the implementation of TSRSA as follows. First, through the preprocessing procedure for the neural dataset and the feature extraction procedure for neural networks, we obtain representation matrices \(R\in\mathbb{R}^{N\times M}\) from every layer of the networks and every cortical region, where \(N\) is the number of units/neurons and \(M\) is the number of movie frame stimuli in chronological order. The column \(r_{t}\) of the representation matrix represents the population response to movie frame \(t\). Secondly, for each column, using the Pearson correlation coefficient, we calculate the similarity between it and all subsequent columns one by one, yielding the representational similarity vector \(s_{t}\). The element \(s_{tp}\) of the similarity vector is \(\mathrm{Corr}(r_{t},r_{t+p})\), where \(0<p\leq M-t\). Subsequently, we concatenate all vectors to obtain the complete representational similarity vector, which characterizes both the static and dynamic representations carried by a network layer or a cortical region. Finally, the Spearman rank correlation coefficient is used to quantify the similarity between two given vectors, which is also regarded as the representational similarity between the two systems.
Using this metric, we apply a layer-by-layer comparison between a network and the mouse visual cortex, yielding the representational similarity score between each layer and each region. As for a network and a cortical region, we take the maximum score across network layers as the level of representational similarity between them. As for a network and the mouse visual cortex, we take the average similarity across all cortical regions as the final similarity.
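For reference, the TSRSA computation described above can be sketched in a few lines of Python; this is a minimal sketch assuming NumPy/SciPy and representation matrices preprocessed as in Section 3.1.

```python
import numpy as np
from scipy.stats import spearmanr

def similarity_vector(R):
    # R: (N units or neurons, M frames); column r_t is the population
    # response to movie frame t
    C = np.corrcoef(R.T)                    # Pearson correlation between all frame pairs
    iu = np.triu_indices(C.shape[0], k=1)   # keeps s_tp = Corr(r_t, r_{t+p}) for p > 0,
    return C[iu]                            # concatenated in chronological order

def tsrsa(R_layer, R_region):
    # Spearman rank correlation between the two complete similarity vectors
    return spearmanr(similarity_vector(R_layer),
                     similarity_vector(R_region)).correlation
```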
#### 3.3.2 Quantification of the effect of dynamic representations on representational similarity
To explore the impact of dynamic representations on the representational similarity between SNNs and the mouse visual cortex, we disrupt the frame order of the movie used in the neural dataset, feed the shuffled movie into the SNNs, reorder the frames back together with their corresponding network responses, and calculate the representational similarity while the stimuli are aligned in the representation matrices of the mouse and the networks. As a result, the obtained network features differ from the original ones due to the distinct dynamic sequential information. In order to obtain frame orders with different levels of chaos while avoiding extreme chaos, such as the original first frame being shifted to the last position, we divide the entire movie into multiple windows with the same number of frames and randomly shuffle the frames only within each window. Considering that the entire movie contains 900 frames, we conduct 10 sets of experiments, with 10, 20, 30, 40, 50, 80, 100, 150, 200 and 300 frames per window, respectively. Each set comprises 10 trials to gain enough statistical power. In addition, we calculate the level of chaos for every trial, defined as \(1-r\), where \(r\) is the Spearman rank correlation coefficient between the disrupted frame order and the original frame order. Note that, even in the same set of experiments, the level of chaos may vary across trials.
Also, although the network is fed a movie with shuffled frames to extract features, the representation matrix of the network is rearranged to the original frame order to ensure that it matches the order of the mouse representation matrix. In this way, we maintain the correspondence between the static representations of two systems, while modifying the dynamic representations of the network. This allows us to focus specifically on evaluating the effect of dynamic information on representational similarity.
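The windowed shuffling and the chaos-level computation can be sketched as follows (a minimal sketch assuming NumPy/SciPy; the 900-frame movie length is from Section 3.1).

```python
import numpy as np
from scipy.stats import spearmanr

def windowed_shuffle(num_frames, window, rng=None):
    # shuffle frame indices only within consecutive windows of equal size
    rng = np.random.default_rng() if rng is None else rng
    order = np.arange(num_frames)
    for start in range(0, num_frames, window):
        rng.shuffle(order[start:start + window])   # in-place shuffle of one window
    return order

order = windowed_shuffle(900, window=30)
chaos = 1 - spearmanr(order, np.arange(900)).correlation   # level of chaos = 1 - r
```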
#### 3.3.3 Quantification of the effect of static representations on representational similarity
In addition to analyzing the dynamic representations' impact, we also investigate the effect of static representations. Firstly, some movie frames are randomly selected and replaced with Gaussian noise images. Then, we feed the new movie into SNNs to obtain different features from the original movie. Similar to the experiments described in Section 3.3.2, the movie is divided into multiple windows with the same number of frames, and then we randomly replace one frame in each window with a noise image to avoid dense local replacement and to preserve as much dynamic information as
possible before and after the replaced frames. There are also 10 sets of experiments, each with 10 trials. The number of frames per window used in each set is 5, 10, 20, 30, 50, 60, 90, 100, 150 and 300 respectively. The ratio of replacement is the inverse of the number of frames per window.
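The frame replacement can be sketched as follows, assuming grayscale frames stored in a NumPy array; matching the noise statistics to the movie's mean and standard deviation is our assumption, since only Gaussian noise images are specified.

```python
import numpy as np

def replace_with_noise(movie, window, rng=None):
    # movie: (T, H, W) frames; replace one random frame per window with a noise image
    rng = np.random.default_rng() if rng is None else rng
    noisy = movie.astype(np.float32).copy()
    for start in range(0, len(movie), window):
        t = start + rng.integers(min(window, len(movie) - start))
        noisy[t] = rng.normal(movie.mean(), movie.std(), size=movie.shape[1:])
    return noisy
```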
Replacing movie frames changes the static representations of the network, while the overall frame order remains the same as in the original movie, allowing the network to maintain its original dynamic representations. Admittedly, noise images may lead to more disparate dynamic representations when the ratio of replacement is large. We attenuate this influence by distributing the noise images as sparsely as possible, and we focus on how the static representations affect the representational similarity.
## 4 Results
### Comparisons of representational similarity
We compare the representational similarity across different network structures and different training tasks (Figure 3). For SNNs trained on ImageNet, SEW-ResNet18 and R-SEW-ResNet18 achieve comparable representational similarity across all mouse cortical regions, which suggests that the recurrent module does not have a significant effect on the representations of networks trained on static images. However, when trained on UCF101, recurrent SNNs perform consistently better than feedforward SNNs trained on the same task and than all SNNs trained on ImageNet. Specifically, R-SEW-ResNet18 trained on UCF101 outperforms SEW-ResNet18 trained on the same dataset by 15% on average across all cortical regions and outperforms R-SEW-ResNet18 trained on ImageNet by 8%. On the one hand, these results show that recurrent SNNs are better at extracting temporal features than feedforward SNNs when trained on a movie dataset. On the other hand, training with a movie dataset yields richer dynamic representations in recurrent SNNs compared to training with an image dataset, which is effective in improving the representational similarity with
Figure 4: The curves of representational similarity with SNN layer depth. The SNN layer depth is normalized from 0 (the first layer) to 1 (the last layer). To obtain smoother curves, we apply cubic spline interpolation to calculate the similarity at 50 discrete depth points.
Figure 3: The representational similarity of SNNs for six mouse cortical regions. Each bar indicates the score of an SNN pretrained on a given dataset. The prefix “R” indicates that the SNN is embedded with the recurrent module.
the visual cortex. Nevertheless, feedforward SNNs trained on UCF101 actually perform worse than those trained on ImageNet. One explanation is that feedforward SNNs trained on a movie dataset not only fail to effectively capture dynamic information, but also compromise their ability to characterize static information. Furthermore, SEW-ResNet50 with the recurrent module yields the same positive effect (see Appendix).
We further analyze the curves of representational similarity across the SNN layers (Figure 4). The similarity curves of each network are very similar across the six cortical regions, suggesting that the functional hierarchy of the mouse visual cortex may be organized in parallel, in line with the findings of previous work [44, 21]. For SNNs trained on ImageNet, the similarity keeps trending upwards, reaching its maximum in almost the last layer. In contrast, the similarity of R-SEW-ResNet18 trained on UCF101 reaches its maximum in the middle layers and then decreases. These results indicate that the high-order features of SNNs trained on ImageNet carry static representations similar to the neural responses of the mouse visual cortex under movie stimuli, while the middle-order features of recurrent SNNs trained on UCF101 better capture both static and dynamic representations. Unexpectedly, the similarity of SEW-ResNet18 trained on UCF101 exhibits an unusual curve, peaking in the early layers, then dropping to a trough, and gradually rising again. Combined with its poor performance, SEW-ResNet18 trained on UCF101 struggles to capture the static and dynamic representations of the mouse visual cortex and is unable to match the functional hierarchy.
Taken together, the results suggest that static and dynamic representations are indeed included in the population responses of the mouse visual cortex to movie stimuli. The ability of feedforward SNNs to extract temporal features is far from adequate when relying only on the dynamics of spiking neurons, while the recurrent module endows SNNs with a greater capability to extract temporal features. Moreover, after pretraining on UCF101, recurrent SNNs capture both static and dynamic representations and achieve the highest representational similarity to the mouse visual cortex.
Figure 5: The curves of representational similarity (the main plot) and the curves of drop rate of the experimental similarity compared to the original similarity (the subplot). The horizontal coordinates of both plots are the level of chaos. In the main plot, the dashed horizontal lines are the representational similarity between SNNs and the mouse visual cortex under the original frame order. Each set of experiments contains 10 trials and each trial is indicated by one point. There is one ellipse for each set, and the height and width of the ellipse indicate the 95% confidence interval of the similarity and the level of chaos across 10 trials, respectively. The curves pass the average point of each set. In the subplot, the curves show the average drop rate and the average level of chaos across 10 trials for all experiment sets.
### Effect of dynamic representations on representational similarity
To investigate the importance of dynamic representations for representational similarity, following Section 3.3.2, we compare the experimental results of SEW-ResNet18 and R-SEW-ResNet18 trained on UCF101 (Figure 5). As the curves in the main plot show, the representational similarity between the SNNs and the mouse visual cortex is lower than the original similarity whenever the frame order is disrupted, and the similarity decreases as the level of chaos increases. The order shuffling makes the movie frames discontinuous and breaks the original temporal relationships, leading to an alteration of the dynamic representations of the SNNs. Considering that in TSRSA we align the frame stimuli of the network and visual cortex representation matrices, the decrease in similarity is mostly caused by the dissimilar dynamic representations and is not related to the static representations.
Furthermore, comparing the curves of drop rate between SEW-ResNet18 and R-SEW-ResNet18, we find that the drop rate of R-SEW-ResNet18 increases with the level of chaos and eventually reaches a staggering 45.8%. However, the drop rate of SEW-ResNet18 is consistently lower than that of R-SEW-ResNet18, and its maximum is even less than 9%. These results reveal two significant findings. Firstly, the mouse visual cortex does not view movie frames as independent visual stimuli, and its neuronal population responses contain copious dynamic representations rather than being dominated by static representations. R-SEW-ResNet18 trained on UCF101 captures biological dynamic representations very well and is sensitive to dynamic information: even small perturbations of the temporal features can cause large changes in its representations. Secondly, although SEW-ResNet18 is trained on a movie dataset, the vast majority of its representations depend on static information. Therefore, its representational similarity is not much affected when the temporal relationships of the input stimuli are disrupted. In conclusion, the recurrent module greatly improves the ability of SNNs to capture temporal features and may provide new insights into the movie information processing mechanisms of the biological visual system.
### Effect of static representations on representational similarity
In addition to the importance of dynamic representations, the role of static representations in representational similarity should not be overlooked. We follow Section 3.3.3 to compare the representational similarity of R-SEW-ResNet18 trained on ImageNet and on UCF101 (Figure 6). Similar to
Figure 6: The curves of representational similarity (the main plot) and the curves of drop rate of the experimental similarity compared to the original similarity (the subplot). The horizontal coordinates of both plots are the ratio of replacement. In the main plot, the dashed horizontal lines are the representational similarity between SNNs and the mouse visual cortex under the original movie. The error bar is the 95% confidence interval of the similarity across 10 trials. All other plot elements have the same meaning as in Figure 5.
the results in Section 4.2, the curves in the main plot show that the representational similarity between the SNNs and the mouse visual cortex is mostly lower than the original similarity when some frames are replaced with noise images, and the similarity decreases as the ratio of replacement increases. Clearly, the static representations of the SNNs change considerably due to the totally different spatial features of the original movie frames and the noise images, resulting in the decrease of the representational similarity. Moreover, the drop rate of R-SEW-ResNet18 trained on UCF101 is slightly higher than that of R-SEW-ResNet18 trained on ImageNet at low ratios of replacement, while the opposite is observed at high ratios of replacement. The maximum drop rate of R-SEW-ResNet18 trained on ImageNet is 62.4%, while the maximum for the network trained on UCF101 is 53.7%, which could be explained by two possible reasons. On the one hand, the network trained on an image dataset treats the movie frames as independent images, so the network representations depend entirely on static information and there is no temporal correlation between the representations of two frames. When the ratio of replacement is high, the corrupted static information leads to a large drop in the representational similarity. On the other hand, the network trained on a movie dataset represents both static and dynamic information. Consequently, although a high ratio of replacement also damages its static representations, its dynamic representations of the natural movie frames may moderate the drop of the representational similarity to some extent. In summary, the recurrent SNN trained on a movie dataset is able to capture both the static and dynamic representations of the mouse visual cortex, and its ability to represent dynamic information may help it preserve brain-like representations when receiving corrupted static information.
## 5 Discussion
In this work, we introduce long-term feedback connections into deep spiking neural networks for the first time and use such SNNs to model the mouse visual cortex under movie stimuli, showing that deep recurrent SNNs trained on movies perform best in representational similarity. We extend the analysis to both the static and dynamic representations of the networks and the visual cortex with two carefully designed experiments, providing computational evidence that the static and dynamic information of natural visual input is well encoded and transmitted by the neuronal population responses, and that recurrent SNNs are able to capture such representations. Specifically, in the movie frame order disruption experiment, we observe that the feedforward SNN not only has a lower representational similarity than the recurrent SNN, but is also almost unaffected by the disrupted frame order. These results demonstrate that the recurrent module effectively enhances SNNs' ability to extract temporal features, which facilitates the modeling of visual cortex representations. On the other hand, in the noise image experiment, we find that the ability to represent dynamic information may help recurrent SNNs mitigate the degradation of their representations when the spatial information of the visual stimuli is compromised. These findings aid us in better understanding the mechanisms of information processing and representation in the mouse visual system.
Biological studies suggest that the effect of recurrent connections in the visual cortex varies over time, which may be a key factor acting on dynamic representations. [43] discovered that the neuronal population interactions between cortical regions are feedforward-dominated shortly after stimulus onset but feedback-dominated during spontaneous activity. As our work shows, deep recurrent SNNs are an excellent candidate for modeling the visual cortex and have the potential to explore the effects of feedforward and feedback connections on dynamic representations in biological neural responses.
In conclusion, with recurrent connections, deep SNNs gain a more powerful, more brain-like ability to represent spatio-temporal features and offer a new paradigm for studying both static and dynamic information processing in the biological visual system.
2305.11417 | Exploring the Complexity of Deep Neural Networks through Functional
Equivalence | We investigate the complexity of deep neural networks through the lens of
functional equivalence, which posits that different parameterizations can yield
the same network function. Leveraging the equivalence property, we present a
novel bound on the covering number for deep neural networks, which reveals that
the complexity of neural networks can be reduced. Additionally, we demonstrate
that functional equivalence benefits optimization, as overparameterized
networks tend to be easier to train since increasing network width leads to a
diminishing volume of the effective parameter space. These findings can offer
valuable insights into the phenomenon of overparameterization and have
implications for understanding generalization and optimization in deep
learning. | Guohao Shen | 2023-05-19T04:01:27Z | http://arxiv.org/abs/2305.11417v3 | # Complexity of Feed-Forward Neural Networks from the Perspective of Functional Equivalence
###### Abstract
In this paper, we investigate the complexity of feed-forward neural networks by examining the concept of functional equivalence, which suggests that different network parameterizations can lead to the same function. We utilize the permutation invariance property to derive a novel covering number bound for the class of feed-forward neural networks, which reveals that the complexity of a neural network can be reduced by exploiting this property. Furthermore, based on the symmetric structure of the parameter space, we demonstrate that an appropriate strategy of random parameter initialization can increase the probability of convergence for optimization. We find that overparameterized networks tend to be easier to train in the sense that increasing the width of neural networks leads to a vanishing volume of the effective parameter space. Our findings offer new insights into overparameterization and have significant implications for understanding generalization and optimization in deep learning.
## 1 Introduction
Feed-forward neural networks, particularly deep and wide ones, have demonstrated remarkable success in a wide range of applications in machine learning and artificial intelligence. However, one of the major challenges in understanding this success is to explain their ability to generalize well, even when they are very large and have the potential to overfit the training data (Neyshabur et al., 2014, 2017; Razin and Cohen, 2020).
Theoretical studies have suggested that the generalization error can be related to the complexity, approximation power, and optimization properties of deep neural networks. Larger neural networks have been proved to possess better approximation power (Yarotsky, 2017; Lu et al., 2021; Zhou, 2020), but may exhibit larger complexity and generalization gaps (Bartlett et al., 2017; Mohri et al., 2018; Bartlett et al., 2019), and can be more challenging to optimize (Glorot et al., 2011). However, some aspects of deep learning initially appeared to contradict common sense: overparameterized networks tend to be easier to train (Frankle and Carbin, 2019; Allen-Zhu et al., 2019; Du et al., 2019) and exhibit better generalization (Belkin et al., 2019; Neyshabur et al., 2019; Novak et al., 2018). Although the model class's capacity was immense, deep networks did not tend to overfit (Zhang et al., 2017).
Recent studies have highlighted that the functional form of neural networks may be less complex than their parametric form (Bui Thi Mai and Lampert, 2020; Stock and Gribonval, 2022; Grigsby et al., 2022). In other words, the parameterization of neural networks is redundant, as networks with different parameters may implement the same function. This insight provides us with a fresh perspective for reconsidering how overparameterization truly affects generalization and optimization.
In this work, we quantitatively characterize the redundancy in the parameterization of feedforward neural networks and derive a complexity measure for these networks based on functional equivalence. We analyze the results to gain insights into generalization and optimization in deep learning.
### Related work
The issue of redundancy or identification of the parameterization of neural networks has been noted since 1990 in Hecht-Nielsen (1990). Subsequent studies for neural networks with Tanh and sigmoid activation functions (Chen et al., 1993; Fefferman and Markel, 1993; Kurková and Kainen, 1994) have proved that, given the input-output mapping of a Tanh neural network, its architecture can be determined and its weights are identified up to permutations and sign flips. In recent years, the identifiability of parameterization in deep neural networks, particularly ReLU neural networks, has received considerable attention (Elbrachter et al., 2019; Bui Thi Mai and Lampert, 2020; Bona-Pellissier et al., 2021; Dereich and Kassing, 2022; Stock and Gribonval, 2022; Grigsby et al., 2022a, 2022b). Most recently, Bui Thi Mai and Lampert (2020) demonstrated that ReLU networks with non-increasing widths are identifiable up to permutation and scaling of weight matrices. With redundant parameterization, the weight space of deep neural networks can exhibit symmetric structures, which has implications for optimization (Neyshabur et al., 2015; Badrinarayanan et al., 2015; Stock et al., 2019). These studies suggest that the naive loss gradient is sensitive to reparameterization by scaling, and they proposed alternative, scaling-invariant optimization procedures. In addition, the redundancy or identification properties of neural networks are also closely related to the study of inverse stability (Elbrachter et al., 2019; Rolnick and Kording, 2020; Bona-Pellissier et al., 2021; Petersen et al., 2021; Stock and Gribonval, 2022), which investigates the possibility of recovering the parameters (weights and biases) of a neural network.
The complexity of neural networks as input-output functions, viewed through their parameterization redundancy, has received relatively little attention. To the best of our knowledge, the most relevant studies are Grigsby et al. (2022a, 2022b). Grigsby et al. (2022a) studied their newly defined local and global notions of topological complexity for fully-connected feedforward ReLU neural network functions. Grigsby et al. (2022b) defined the functional dimension of ReLU neural networks based on perturbations in parameter space, and studied their functional redundancy and when the functional dimension achieves its theoretical maximum. However, these results on functional dimension cannot be converted into generalization error bounds for deep learning algorithms.
The complexity of a class of functions characterizes how rich it is and is a core quantity closely related to generalization error: larger complexities lead to larger generalization errors (Bartlett et al., 2017; Mohri et al., 2018). Complexity upper bounds for deep neural networks under different measurements have been studied, including Rademacher complexity (Neyshabur et al., 2015; Golowich et al., 2018; Li et al., 2018), VC-dimension and Pseudo dimension (Baum and Haussler, 1988; Goldberg and Jerrum, 1993; Anthony et al., 1999; Bartlett et al., 2019), and covering number (Anthony et al., 1999; Neyshabur et al., 2017; Bartlett et al., 2017; Lin and Zhang, 2019). These measurements characterize how complex the class of neural networks (as functions) is, and they are expressed in terms of, and increase with, the hyperparameters of the neural architecture, such as network depth, width, number of weights and bias vectors, and the corresponding norm bounds. These bounds are not directly comparable to each other in magnitude, though they are closely related and can be converted to make a comparison (Anthony et al., 1999; Mohri et al., 2018).
### Our contributions
In this work, we quantitatively characterize the redundancy in the parameterization of feedforward neural networks, and derive new covering number upper bounds for these networks based on functional equivalence. We summarize our contributions in the following aspects:
1. We make use of the permutation equivalence property to obtain, for the first time, a tighter upper bound on the covering number. Surprisingly, we found that bias vectors are "for free" in the sense that the increased complexity that comes with adding bias vectors to neural networks is canceled by taking permutation invariance into account.
2. We improve existing covering number bounds in the sense that our results hold for neural networks with bias vectors and general activation functions. Since bias terms are indispensable for the approximation power of neural networks, our results are useful in both theory
and practice. Additionally, we express our bound explicitly in terms of the width, depth, size of the network, and the norm of the parameters, instead of the spectral norm of the weight matrices, which is usually unknown in practice.
3. We discuss the implications of our findings for understanding generalization and optimization. In particular, we find that using the same strategy of random initialization for the weights and biases within a layer can reduce the complexity of optimization, and that overparameterized networks tend to be easier to train in the sense that increasing the width of neural networks leads to a vanishing volume of the effective parameter space.
The remainder of the paper is organized as follows. In section 2, we introduce the concept of functional equivalence and investigate the permutation invariance property of general feedforward neural networks. In section 3, we derive novel covering number bounds for shallow and deep feedforward neural networks by exploiting the permutation invariance property and compare our results with existing ones. In section 4, we demonstrate that permutation invariance can help theoretically reduce the optimization complexity in deep learning and discuss the implications of our results on generalization and optimization under the framework of empirical risk minimization. Finally, we discuss the limitations of this study and future research directions in section 5. All technical proofs are included in the supplementary materials.
## 2 Functionally Equivalent Feedforward Neural Networks
A feedforward neural network is a fully connected artificial neural network consisting of multiple layers of interconnected neurons. The network's architecture can be expressed as a composition of linear transformations and activations. The functional form of an \(L\)-layer feedforward neural network is determined by its weight matrices, bias vectors, and activation functions:
\[f(x;\theta)=\mathcal{A}_{L+1}\circ\sigma_{L}\circ\mathcal{A}_{L}\circ\cdots \circ\sigma_{2}\circ\mathcal{A}_{2}\circ\sigma_{1}\circ\mathcal{A}_{1}(x). \tag{1}\]
Here, \(\mathcal{A}_{l}(x)=W^{(l)}x+b^{(l)}\) is the linear transformation for layer \(l\), where \(W^{(l)}\) and \(b^{(l)}\) are the weight matrix and bias vector respectively. The activation function \(\sigma_{l}\) is applied element-wise to the output of \(\mathcal{A}_{l}\), and can be different across layers. The collection of weight matrices and bias vectors is denoted by \(\theta=(W^{(1)},b^{(1)},\ldots,W^{(L+1)},b^{(L+1)})\). The input \(x\) is propagated through each layer of the network to produce the output \(f(x;\theta)\).
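For concreteness, the composition in (1) can be written as a short forward pass; this is an illustrative NumPy sketch, not tied to any particular framework.

```python
import numpy as np

def forward(x, weights, biases, activations):
    # f(x; theta) from (1): alternate affine maps A_l with activations sigma_l;
    # weights and biases hold L+1 entries, activations holds L entries
    for W, b, sigma in zip(weights[:-1], biases[:-1], activations):
        x = sigma(W @ x + b)
    return weights[-1] @ x + biases[-1]   # final affine map A_{L+1}
```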
The parameterization of a feedforward neural network can be redundant, with different parameter sets producing identical function outputs. This redundancy arises due to the non-identifiability of certain weight matrices or activation functions.
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
Perspective & Estimation Error & Optimization Error & Approximation Error \\
\hline
Improve & ✓ & ✓ & **?** \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Suggested improvements by this study in understanding generalization errors from different perspectives under the empirical risk minimization framework.
\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Paper & Complexity & Explicit & Allow Bias Vectors & General Activations & Permutation Invariance \\
\hline
Bartlett et al. (2017) & \(B_{x}^{2}(\bar{\rho}\bar{s})^{2}\mathcal{U}\log(W)/\epsilon^{2}\) & & & & \\
Neyshabur et al. (2017b) & \(B_{x}^{2}(\bar{\rho}\bar{s})^{2}\mathcal{S}^{2}\log(WL)/\epsilon^{2}\) & & & & \\
Lin and Zhang (2019) & \(B_{x}(\bar{\rho}\bar{s})\mathcal{S}^{2}L/\epsilon\) & & & & \\
Bartlett et al. (2019) & \(L\mathcal{S}\log(\mathcal{S})\log(B_{x}/\epsilon)\) & & & & \\
This paper & \(L\mathcal{S}\log(\bar{\rho}\bar{s}B_{x}^{1/L}/(d_{1}!\cdots d_{L}!\epsilon)^{1/L})\) & ✓ & ✓ & ✓ & ✓ \\
\hline \hline
\end{tabular}
* Notations: \(\mathcal{S}\), total number of parameters; \(\mathcal{U}\), number of hidden neurons; \(L\), number of hidden layers; \(W\), maximum width of hidden layers; \(B_{x}\), L2 norm of input; \(\bar{\rho}=\Pi_{j=1}^{L}\rho_{j}\), product of Lipschitz constants of activation functions; \(\bar{s}=\Pi_{j=1}^{L}s_{j}\), product of spectral norms of hidden-layer weight matrices; \(\epsilon\), radius for covering number.
\end{table}
Table 2: A comparison of recent results on the complexity of feedforward neural networks.
**Definition 1** (Functionally-Equivalent Neural Networks).: _Two neural networks \(f(x;\theta_{1})\) and \(f(x;\theta_{2})\) are said to be functionally-equivalent if they produce the same input-output function for all possible inputs, i.e.,_
\[f(x;\theta_{1})=f(x;\theta_{2})\quad\forall x\in\mathcal{X}, \tag{2}\]
_where \(\mathcal{X}\) is the input space and \(\theta_{1}\) and \(\theta_{2}\) denote the sets of parameters of the two networks, respectively._
Neural networks with a fixed architecture can have functionally-equivalent versions through weight scaling, sign flips, and permutations. This can even occur across networks with different architectures. In this paper, we focus on the complexity of a specific class of neural networks with fixed architecture but varying parameterizations. We provide examples of functionally-equivalent shallow neural networks to illustrate this concept.
**Example 1** (Scaling).: _Consider two shallow neural networks parameterized by \(\theta_{1}=(W_{1}^{(1)},b_{1}^{(1)},W_{1}^{(2)},b_{1}^{(2)})\) and \(\theta_{2}=(W_{2}^{(1)},b_{2}^{(1)},W_{2}^{(2)},b_{2}^{(2)})\), defined as:_
\[f(x;\theta_{1})=W_{1}^{(2)}\sigma(W_{1}^{(1)}x+b_{1}^{(1)})+b_{1}^{(2)}\quad \mathrm{and}\quad f(x;\theta_{2})=W_{2}^{(2)}\sigma(W_{2}^{(1)}x+b_{2}^{(1)}) +b_{2}^{(2)}\]
_respectively, where \(x\in\mathbb{R}^{n}\) is the input to the network and \(\sigma\) satisfies_
\[\sigma(\lambda x)=\lambda\sigma(x)\]
_for all \(x\in\mathbb{R}^{n}\) and \(\lambda>0\). If there exists a scalar value \(\alpha>0\) such that:_
\[(W_{2}^{(1)},b_{2}^{(1)})=(\alpha W_{1}^{(1)},\alpha b_{1}^{(1)})\qquad \mathrm{and}\qquad W_{2}^{(2)}=\frac{1}{\alpha}W_{1}^{(2)},\]
_then the neural networks \(f(\cdot;\theta_{1})\) and \(f(\cdot;\theta_{2})\) are functionally equivalent._
The scaling invariance property applies to neural networks activated by ReLU, Leaky ReLU, and other piecewise-linear functions. Specifically, for all \(x\in\mathbb{R}^{n}\) and \(\lambda\geq 0\), we have \(\sigma(\lambda x)=\lambda\sigma(x)\) for \(\sigma\) being the ReLU or Leaky ReLU function. It is worth noting that the above example is presented for shallow neural networks, but the scaling invariance property extends to deep neural networks across any two consecutive layers.
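Example 1 can be checked numerically with ReLU; the following is a minimal sketch with arbitrary random parameters.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)   # positively homogeneous: relu(a z) = a relu(z) for a > 0
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=(4, 1))
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=(1, 1))
x, alpha = rng.normal(size=(3, 1)), 2.5

f1 = W2 @ relu(W1 @ x + b1) + b2
f2 = (W2 / alpha) @ relu(alpha * W1 @ x + alpha * b1) + b2
assert np.allclose(f1, f2)            # the two parameterizations realize the same function
```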
**Example 2** (Sign Flipping).: _Consider two shallow neural networks \(f(\cdot;\theta_{1})\) and \(f(\cdot;\theta_{2})\) defined in Example 1 with \(\sigma\) being an odd function, that is \(\sigma(-x)=-\sigma(x)\) for all \(x\in\mathbb{R}^{n}\). If_
\[(W_{2}^{(1)},b_{2}^{(1)})=(-W_{1}^{(1)},-b_{1}^{(1)})\qquad\mathrm{and} \qquad W_{2}^{(2)}=-W_{1}^{(2)},\]
_then the neural networks \(f(x;\theta_{1})\) and \(f(x;\theta_{2})\) are functionally equivalent._
The sign-flipping invariance property holds for neural networks activated by Tanh, Sin, and other odd functions. It can also be generalized to deep neural networks across any two consecutive layers.
**Example 3** (Permutation).: _Consider two shallow neural networks \(f(\cdot;\theta_{1})\) and \(f(\cdot;\theta_{2})\) defined in Example 1 with \(\sigma\) being a general activation function. Let the dimension of the hidden layer of \(f(x;\theta_{1})\) and \(f(x;\theta_{2})\) be denoted by \(m\). If there exists an \(m\times m\) permutation matrix \(P\) such that_
\[(PW_{2}^{(1)},Pb_{2}^{(1)})=(W_{1}^{(1)},b_{1}^{(1)})\qquad\mathrm{and}\qquad W _{2}^{(2)}P=W_{1}^{(2)},\]
_then the neural networks \(f(x;\theta_{1})\) and \(f(x;\theta_{2})\) are functionally equivalent._
Feedforward neural networks are built from linear transformations and activations, so it is intuitive that simply re-indexing the neurons in a hidden layer, together with the corresponding rows of the weight matrix and bias vector, leads to a functionally equivalent neural network. Permutation invariance is the most basic type of equivalence for feedforward neural networks since it does not rely on any specific properties of the activation functions, whereas scaling and sign-flipping invariance are activation-dependent properties. A comparison of the functional equivalence properties of networks with commonly used activation functions is presented in Table 3.
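Example 3 can likewise be verified numerically for a general activation; the following is a minimal sketch using tanh and random parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d0, d1 = 3, 4
W1, b1 = rng.normal(size=(d1, d0)), rng.normal(size=(d1, 1))
W2, b2 = rng.normal(size=(1, d1)), rng.normal(size=(1, 1))
P = np.eye(d1)[rng.permutation(d1)]   # random permutation matrix

f = lambda x, W1, b1, W2, b2: W2 @ np.tanh(W1 @ x + b1) + b2
x = rng.normal(size=(d0, 1))
assert np.allclose(f(x, W1, b1, W2, b2),
                   f(x, P @ W1, P @ b1, W2 @ P.T, b2))   # same input-output function
```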
Next, we derive sufficient conditions for deep feed-forward neural networks (FNNs) to be permutation-equivalent.
**Theorem 1** (Permutation equivalence for deep FNNs).: _Consider two neural networks \(f(x;\theta_{1})\) and \(f(x;\theta_{2})\) with the same activations \(\sigma_{1},\ldots,\sigma_{L}\) and architecture_
\[f(x;\theta)=W^{(L+1)}\sigma_{L}(W^{(L)}\cdots\sigma_{1}(W^{(1)}x+b^{(1)})\cdots+b^{(L)})+b^{(L+1)}\]
_but parameterized by different parameters_
\[\theta_{1}=(W^{(1)}_{1},b^{(1)}_{1},\ldots,W^{(L+1)}_{1},b^{(L+1)}_{1}),\qquad \theta_{2}=(W^{(1)}_{2},b^{(1)}_{2},\ldots,W^{(L+1)}_{2},b^{(L+1)}_{2})\]
_respectively, where \(x\in\mathbb{R}^{n}\) is the input to the network. Let \(P^{\top}\) denote the transpose of matrix \(P\). If there exists permutation matrices \(P_{1},\ldots,P_{L}\) such that_
\[W^{(1)}_{1}=P_{1}W^{(1)}_{2},\qquad b^{(1)}_{1}=P_{1}b^{(1)}_{2},\]
\[W^{(l)}_{1}=P_{l}W^{(l)}_{2}P^{\top}_{l-1},\qquad b^{(l)}_{1}=P_{l}b^{(l)}_{2},\qquad l=2,\ldots,L,\]
\[W^{(L+1)}_{1}=W^{(L+1)}_{2}P^{\top}_{L},\qquad b^{(L+1)}_{1}=b^{(L+1)}_{2},\]
_then the neural networks \(f(x;\theta_{1})\) and \(f(x;\theta_{2})\) are functionally equivalent._
Theorem 1 describes the relationship between the parameters of two permutation-equivalent deep feedforward neural networks. This relationship can be used to create functionally equivalent networks with fixed architectures. It is important to note that although permutation invariance is sufficient for the functional equivalence of feedforward neural networks with general activation functions, it is not always necessary. To fully characterize the necessary and sufficient conditions for functional equivalence, certain restrictions on the network's architecture or activation function are required (Kurková and Kainen, 1994; Bui Thi Mai and Lampert, 2020). However, this study focuses only on utilizing permutation invariance to investigate neural network complexity.
## 3 Complexity of Feedforward Neural Networks
In this section, we analyze the complexity of a class of feedforward neural networks by examining the redundancy that arises from permutation invariance. Specifically, we study the covering number of real-valued, deep feedforward neural networks that share the same architecture but have different parameterization.
Let the vector \((d_{0},d_{1},\ldots,d_{L})\) represent the dimensions of the layers of the neural network \(f(x;\theta)\) defined in (1), where \(d_{L+1}=1\) as the output is real-valued. Note that the bias vectors in the hidden layers contain \(\mathcal{U}:=\sum_{i=1}^{L}d_{i}\) entries, and the weight matrices together with the bias vectors contain \(\mathcal{S}:=\sum_{i=0}^{L}d_{i}\times d_{i+1}+d_{i+1}\) entries in total. We define the parameter space of \(\theta\) as \(\Theta=[-B,B]^{\mathcal{S}}\) for some \(B\geq 1\), which is closed under permutation operations and ensures that the absolute values of the entries of the weight matrices and bias vectors are bounded by \(B\). This setting is in line with complexity studies with norm controls as in (Neyshabur et al., 2015; Bartlett et al., 2017; Golowich et al., 2018). It also takes into account the implicit regularization phenomenon in optimization (Gunasekar et al., 2018a, 2018b). We do not specify the activation functions \(\sigma_{1},\ldots,\sigma_{L}\) since we consider general deep feedforward neural networks. Finally, the class of feedforward neural networks we consider is denoted as
\[\mathcal{F}(L,d_{0},d_{1},\ldots,d_{L},B)=\{f(\cdot;\theta):\mathbb{R}^{d_{0}}\rightarrow\mathbb{R}\text{ is defined in }(1):\theta\in[-B,B]^{\mathcal{S}}\}. \tag{3}\]
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Activations & Formula & Sign flipping & Scaling & Permutation \\
\hline
Sigmoid & \([1+\exp(-x)]^{-1}\) & ✗ & ✗ & ✓ \\
Tanh & \([1-\exp(-2x)]/[1+\exp(-2x)]\) & ✓ & ✗ & ✓ \\
ReLU & \(\max\{0,x\}\) & ✗ & ✓ & ✓ \\
Leaky ReLU & \(\max\{ax,x\}\) for \(a>0\) & ✗ & ✓ & ✓ \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Functional equivalence properties for networks with different activation functions.
### Shallow Feed-Forward Neural Networks
It is well-known that a shallow neural network with a single hidden layer has universal approximation properties (Cybenko, 1989; Hornik, 1991) and is sufficient for many learning tasks (Hanin and Rolnick, 2019; Hertrich et al., 2021). We begin our investigation by considering shallow neural networks of the form
\[\mathcal{F}(1,d_{0},d_{1},B)=\{f(x;\theta)=W^{(2)}\sigma_{1}(W^{(1)}x+b^{(1)})+ b^{(2)}:\theta\in[-B,B]^{\mathcal{S}}\} \tag{4}\]
where the total number of parameters is given by \(\mathcal{S}=(d_{0}+2)\times d_{1}+1\). By Theorem 1, for any \(\theta=(W^{(1)},b^{(1)},W^{(2)},b^{(2)})\in\Theta\) and any permutation matrix \(P\), the parameterization \(\hat{\theta}=(PW^{(1)},Pb^{(1)},W^{(2)}P^{\top},b^{(2)})\in\Theta\) produces the same input-output function. In effect, permutation invariance leads to equivalence classes of parameters that yield the same realization, and we can obtain a set of representatives from these equivalence classes. A canonical choice is
\[\Theta_{0}:=\{\theta\in[-B,B]^{\mathcal{S}}:b_{1}^{(1)}\geq b_{2}^{(1)}\geq \cdots\geq b_{d_{1}}^{(1)}\},\]
where the set of representatives is obtained by restricting the bias vector \(b^{(1)}=(b_{1}^{(1)},\ldots,b_{d_{1}}^{(1)})^{\top}\) to have descending components. Alternatively, we can sort the first components of the rows of \(W^{(1)}\) to obtain a set of representatives. The set of representatives \(\Theta_{0}\) has two important properties:
* The class of neural networks \(\{f(\cdot;\theta):\theta\in\Theta_{0}\}\) parameterized by \(\Theta_{0}\) contains all the functions in \(\{f(\cdot;\theta):\theta\in\Theta\}\), which we denote as \[\{f(\cdot;\theta):\theta\in\Theta_{0}\}=\{f(\cdot;\theta):\theta\in\Theta\}.\]
* The volume of the set of representatives \(\Theta_{0}\) is \((1/d_{1}!)\) times smaller that that of the parameter space \(\Theta\), which we express as \[\mathrm{Volume}(\Theta)=d_{1}!\times\mathrm{Volume}(\Theta_{0}).\]
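The map from an arbitrary parameterization to its representative in \(\Theta_{0}\) can be sketched as follows (assuming NumPy; ties among equal biases are resolved arbitrarily by the sort).

```python
import numpy as np

def canonicalize(W1, b1, W2):
    # sort hidden-unit biases in descending order and apply the matching
    # permutation to the rows of W1 and the columns of W2; by permutation
    # invariance, the realized function f(.; theta) is unchanged
    perm = np.argsort(-b1)
    return W1[perm], b1[perm], W2[:, perm]
```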
These properties suggest that \(\Theta_{0}\) can be a light and effective parameterization of neural networks when they are viewed only as input-output functions. In light of these observations, we derive improved complexities of the class of neural networks in terms of its covering number.
**Definition 2** (Covering Number).: _Let \(\mathcal{F}=\{f:\mathcal{X}\to\mathbb{R}\}\) be a class of functions. We define the supremum norm of \(f\in\mathcal{F}\) as \(\|f\|_{\infty}:=\sup_{x\in\mathcal{X}}|f(x)|\). For a given \(\epsilon>0\), we define the covering number of \(\mathcal{F}\) with radius \(\epsilon\) under the norm \(\|\cdot\|_{\infty}\) as the least cardinality of a subset \(\mathcal{G}\subseteq\mathcal{F}\) satisfying_
\[\sup_{f\in\mathcal{F}}\min_{g\in\mathcal{G}}\|f-g\|_{\infty}\leq\epsilon.\]
_Denoted by \(\mathcal{N}(\mathcal{F},\epsilon,\|\cdot\|_{\infty})\), the covering number measures the minimum number of functions in \(\mathcal{F}\) needed to cover the set of functions within a distance of \(\epsilon\) under the supremum norm._
The covering number \(\mathcal{N}(\mathcal{F},\epsilon,\|\cdot\|_{\infty})\) provides a quantitative measure of the complexity of the class of functions \(\mathcal{F}\) under the supremum norm, with smaller values indicating simpler classes. Covering numbers, along with Rademacher complexity, VC dimension, and Pseudo dimension, are essential complexity measures in the analysis of learning theories and in estimating generalization errors. Although these measures are different, they are correlated with each other, and we detail these correlations in the Appendix.
**Remark 1**.: _We define the covering number of a class of functions in the uniform sense. This is an extension of the canonical definition of covering numbers, which was originally developed for subsets in Euclidean space. While most existing studies of covering numbers for function spaces consider the image of the functions on a finite sample (Anthony et al., 1999; Bartlett et al., 2017), our definition is formulated directly in terms of the function space itself, without requiring a finite sample or any other auxiliary construction._
**Theorem 2** (Covering number of shallow neural networks).: _Consider the class of single hidden layer neural networks \(\mathcal{F}:=\mathcal{F}(1,d_{0},d_{1},B)\) defined in (4) parameterized by \(\theta\in\Theta=[-B,B]^{\mathcal{S}}\) where \(\mathcal{S}\) is the total number of parameters in the network. Suppose the radius of the domain \(\mathcal{X}\) of \(f\in\mathcal{F}\) is bounded by some \(B_{x}>0\), and the activation \(\sigma_{1}\) is continuous. Then for any \(\epsilon>0\), the covering number_
\[\mathcal{N}(\mathcal{F},\epsilon,\|\cdot\|_{\infty})\leq(16B^{2}(B_{x}+1) \sqrt{d_{0}}d_{1}/\epsilon)^{d_{0}d_{1}+2d_{1}+1}\times\rho^{d_{0}d_{1}+d_{1} }/d_{1}!,\]
_or simply_
\[\mathcal{N}(\mathcal{F},\epsilon,\|\cdot\|_{\infty})\leq(16B^{2}(B_{x}+1) \sqrt{d_{0}}d_{1}/\epsilon)^{\mathcal{S}}\times\rho^{\mathcal{S}_{h}}/d_{1}!, \tag{5}\]
_where \(\rho\) denotes the Lipschitz constant of \(\sigma_{1}\) on the range of the hidden layer (i.e., \([-\sqrt{d_{0}}B(B_{x}+1),\sqrt{d_{0}}B(B_{x}+1)]\)), \(\mathcal{S}_{h}=d_{0}d_{1}+d_{1}\) is the total number of parameters in the linear transformation from the input to the hidden layer, and \(\mathcal{S}=d_{0}\times d_{1}+2d_{1}+1\)._
Our upper bound on the covering number takes advantage of permutation invariance and includes a factorial term \(d_{1}!\) in the denominator, resulting in reduced complexity compared to existing studies (Neyshabur et al., 2015b; Bartlett et al., 2017; Neyshabur et al., 2017b; Neyshabur, 2017; Lin and Zhang, 2019). Stirling's formula can be used to approximate the factorial term as \(\sqrt{2\pi d_{1}}(d_{1}/e)^{d_{1}}\exp(1/(12d_{1}+1))<d_{1}!<\sqrt{2\pi d_{1}}(d_{1}/e)^{d_{1}}\exp(1/12d_{1})\) when \(d_{1}\geq 1\). This reduces the covering number approximately by a factor of \((d_{1}/e)^{d_{1}}\). Notably, the bound in (5) reduces to \((C_{B,B_{x},d_{0},\rho}/\epsilon)^{\mathcal{S}}\times d_{1}^{\mathcal{S}-\mathcal{U}}\), where \(\mathcal{U}=d_{1}\) denotes the number of hidden neurons and \(C_{B,B_{x},d_{0},\rho}>0\) is a constant depending only on \(B,B_{x},d_{0}\) and \(\rho\). Surprisingly, \(\mathcal{S}-\mathcal{U}\) is exactly the number of weights, so the reduced bound is essentially that of a neural network without bias vectors once permutation invariance is taken into account. Bias vectors are "for free" in the sense that including them does not increase the covering number when permutation invariance is accounted for. It is worth noting that our bounds allow for bias vectors, which is an improvement over previous studies that did not consider bias vectors in neural networks (Neyshabur et al., 2015b; Bartlett et al., 2017; Neyshabur et al., 2017b; Neyshabur, 2017; Lin and Zhang, 2019), since bias terms are crucial for the approximation power of neural networks (Yarotsky, 2017; Lu et al., 2021b; Shen et al., 2022). Lastly, we note that increasing the number of neurons in a shallow neural network enlarges its approximation power (Lu et al., 2017; Ongie et al., 2019), but at a smaller increase in complexity according to our results.
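To see the size of this reduction, the metric entropy implied by (5) can be evaluated directly; the following sketch computes the bound as stated, with \(\log d_{1}!\) obtained from the log-gamma function.

```python
import math

def log_covering_shallow(eps, B, Bx, d0, d1, rho=1.0):
    # log N(F, eps) from Theorem 2:
    #   S log(16 B^2 (Bx + 1) sqrt(d0) d1 / eps) + S_h log(rho) - log(d1!)
    S, S_h = d0 * d1 + 2 * d1 + 1, d0 * d1 + d1
    return (S * math.log(16 * B**2 * (Bx + 1) * math.sqrt(d0) * d1 / eps)
            + S_h * math.log(rho) - math.lgamma(d1 + 1))

# the permutation discount log(d1!) grows like d1 * log(d1 / e)
print(log_covering_shallow(eps=0.1, B=1, Bx=1, d0=10, d1=100))
```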
**Remark 2**.: _Theorem 2 applies to any activation function that is continuous and Lipschitz on bounded sets, and does not require any specific choice of activation function, such as Hinge or ReLU as in Neyshabur et al. (2015b, 2017b), or activations that are universally Lipschitz with \(\sigma(0)=0\) as in Bartlett et al. (2017); Lin and Zhang (2019). Instead, we only require the activation function to be continuous, which implies that it is Lipschitz on bounded sets (the range of the hidden layer), a technical condition. In the case of the ReLU or Leaky ReLU activation, our bound holds with \(\rho=1\) without any condition, so the term \(\rho^{\mathcal{S}_{h}}\) disappears from our bound._
### Deep Feed-Forward Neural Networks
For deep neural networks, we can also analyze their effective parameter space based on permutation invariance properties. By Theorem 1, a set of representatives \(\Theta_{0}=\Theta_{0}^{(1)}\times\Theta_{0}^{(2)}\times\cdots\times\Theta_{0}^{(L)}\times\Theta_{0}^{(L+1)}\) can be constructed, where
\[\Theta_{0}^{(l)}=\{(W^{(l)},b^{(l)})\in[-B,B]^{\mathcal{S}_{l}}:b_{1}^{(l)}\geq b_{2}^{(l)}\geq\cdots\geq b_{d_{l}}^{(l)}\}\text{ for }l=1,\ldots,L,\]
\[\Theta_{0}^{(L+1)}=\{(W^{(L+1)},b^{(L+1)})\in[-B,B]^{\mathcal{S}_{L+1}}\}.\]
Then we can obtain an upper bound of the covering number of deep feedforward neural networks.
**Theorem 3** (Covering number of deep neural networks).: _Consider the class of deep neural networks \(\mathcal{F}:=\mathcal{F}(L,d_{0},d_{1},\ldots,d_{L},B)\) defined in (3) parameterized by \(\theta\in\Theta=[-B,B]^{\mathcal{S}}\) where \(\mathcal{S}\) is the total number of parameters in the neural network. Suppose the radius of the domain \(\mathcal{X}\) of \(f\in\mathcal{F}\) is bounded by \(B_{x}\) for some \(B_{x}>0\), and the activations \(\sigma_{1},\ldots,\sigma_{L}\) are continuous. Then for any \(\epsilon>0\), the covering number_
\[\mathcal{N}(\mathcal{F},\epsilon,\|\cdot\|_{\infty})\leq\frac{ \left(4(L+1)(B_{x}+1)(2B)^{L+2}(\Pi_{j=1}^{L}\rho_{j})(\Pi_{j=0}^{L}d_{j}) \cdot\epsilon^{-1}\right)^{\mathcal{S}}}{d_{1}!\times d_{2}!\times\cdots \times d_{L}!}, \tag{6}\]
_where \(\mathcal{S}=\sum_{i=0}^{L}d_{i}d_{i+1}+d_{i+1}\) and \(\rho_{i}\) denotes the Lipschitz constant of \(\sigma_{i}\) on the range of the \((i-1)\)-th hidden layer; in particular, the range of the \((i-1)\)-th hidden layer is bounded by \([-B^{(i)},B^{(i)}]\) with \(B^{(i)}\leq(2B)^{i}\Pi_{j=1}^{i-1}\rho_{j}d_{j}\) for \(i=1,\ldots,L\)._
Theorem 3 provides a novel upper bound for the covering number of deep neural networks based on permutation invariance, which reduces the complexity compared to previous results (Neyshabur et al., 2015, 2017; Bartlett et al., 2017; Lin and Zhang, 2019) by approximately a factor of \(d_{1}!d_{2}!\cdots d_{L}!\). According to Theorem 3, increasing the depth of a neural network does increase its complexity; interestingly, however, each added hidden layer \(l\) contributes a discount factor of \(d_{l}!\) to the bound. If the hidden layers have equal width (\(d=d_{1}=\cdots=d_{L}\)), the bound in (6) reduces to \((C_{B,B_{x},d_{0},\rho}/\epsilon)^{\mathcal{S}}\times d^{\mathcal{S}-\mathcal{ U}}\), where \(\mathcal{U}=Ld\) denotes the number of hidden neurons and \(C_{B,B_{x},d_{0},\rho}>0\) is a constant depending only on \(B,B_{x},d_{0}\) and \(\rho_{i},i=1,\ldots,L\). As with shallow neural networks, bias vectors here are also "for free" when taking permutation invariance into account.
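To give a sense of the magnitude of this factorial discount, the following illustrative snippet (ours; the widths are hypothetical) evaluates the reduction \(\sum_{l=1}^{L}\log(d_{l}!)\) in metric entropy implied by the denominator of (6):

```python
import math

# Reduction in log covering number (metric entropy) due to the factorial
# terms d_1! ... d_L! in the denominator of the bound (6).
widths = [128, 128, 128]  # hypothetical hidden-layer widths d_1, ..., d_L
discount = sum(math.lgamma(d + 1) for d in widths)  # lgamma(d + 1) = log(d!)
print(discount)  # roughly 1.49e3 nats removed from log N for this configuration
```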
**Remark 3**.: _As discussed in Remark 2, our results take into account permutation invariance and impose weaker requirements on activation functions than existing results (Neyshabur et al., 2015, 2017; Bartlett et al., 2017; Lin and Zhang, 2019). In addition, our upper bound is expressed explicitly in quantities that are known and can be specified in practice, e.g., network depth \(L\), width \((d_{0},d_{1},\ldots,d_{L})\), size \(\mathcal{S}\) and the uniform bound \(B\) on weights and biases. In contrast, most existing bounds in (Neyshabur et al., 2015, 2017; Bartlett et al., 2017; Lin and Zhang, 2019) are stated in terms of the spectral norms of the weight matrices and measurements on externally introduced reference matrices, which are usually unknown in practice._
### Comparing to existing results
Complexity upper bounds in terms of the covering number for classes of deep neural networks have been studied in Anthony et al. (1999); Neyshabur et al. (2017); Bartlett et al. (2017); Lin and Zhang (2019). These results are proved using similar mathematical-induction arguments (e.g., Lemma A.7 in Bartlett et al. (2017), Lemma 2 in Neyshabur et al. (2017) and Lemma 14 of Lin and Zhang (2019)). We improve upon these results in three ways.
* First, we consider generally defined neural networks in which bias vectors are allowed. Bias terms are indispensable for the approximation power of neural networks (Yarotsky, 2017; Lu et al., 2021; Shen et al., 2022); neural networks without bias vectors may fall short in both theory and practice.
* Second, we express our bound in terms of the width, depth, and size of the neural network, as well as the infinity norm of the parameters, instead of the spectral norm of the weight matrices, which is usually unknown in practice.
* Third, we make use of the permutation invariance property to obtain a tighter upper bound on the covering number. We find that bias vectors are "for free" in the sense that the increased complexity that comes with adding bias vectors to neural networks is canceled by taking permutation invariance into account.
Various complexity upper bounds for deep neural networks under other measurements have also been studied, including Rademacher complexity (Neyshabur et al., 2015; Golowich et al., 2018; Li et al., 2018), VC-dimension, and pseudo-dimension (Baum and Haussler, 1988; Goldberg and Jerrum, 1993; Anthony et al., 1999; Bartlett et al., 2019). Our bounds in terms of the covering number are not directly comparable with these measurements. To make a comparison, we convert the upper bounds into metric entropy, which equals the logarithm of the covering number \(\log(\mathcal{N}(\mathcal{F},\epsilon,\|\cdot\|_{\infty}))\). Specifically, let \(\bar{\rho}=\Pi_{j=1}^{L}\rho_{j}\) denote the product of the Lipschitz constants of the activation functions and \(\bar{s}=\Pi_{j=1}^{L}s_{j}\) the product of the spectral norms of the hidden-layer weight matrices. Bartlett et al. (2017) derived a spectral-norm-based bound on the metric entropy of \(B_{x}^{2}(\bar{\rho}\bar{s})^{2}\mathcal{U}\log(W)/\epsilon^{2}\), following which Neyshabur et al. (2017) obtained \(B_{x}^{2}(\bar{\rho}\bar{s})^{2}\mathcal{S}L^{2}\log(WL)/\epsilon^{2}\) and Lin and Zhang (2019) obtained \(B_{x}(\bar{\rho}\bar{s})\mathcal{S}^{2}L/\epsilon\). Based on Theorem 12.2 in Anthony et al. (1999), the pseudo-dimension bound in Bartlett et al. (2017) leads to \(L\mathcal{S}\log(\mathcal{S})\log(B_{x}/\epsilon)\). Lastly, our result in Theorem 3 translates to \(L\mathcal{S}\log(\bar{\rho}\bar{s}B_{x}^{1/L}/(d_{1}!\cdots d_{L}!\epsilon)^{1/L})\) by taking \(s_{i}=B\sqrt{d_{i}d_{i-1}}\) in our setting. We present a detailed comparison of these results in Table 2.
## 4 Applications to generalization and optimization
In this section, we introduce the empirical risk minimization (ERM) framework to highlight the relevance of our study to both generalization and optimization.
The goal is to find the target function \(f_{0}\), which represents the true underlying relationship between the inputs and outputs and is typically defined as the minimizer of the risk \(\mathcal{R}(\cdot)\), i.e.,
\[f_{0}:=\arg\min_{f}\mathcal{R}(f).\]
However, since the target function is unknown, we can only approximate it within a predefined hypothesis space \(\mathcal{F}\), such as the class of neural networks parameterized by \(\theta\) in deep learning, i.e., \(\mathcal{F}=\mathcal{F}_{\Theta}=\{f_{\theta}(\cdot)=f(\cdot;\theta):\theta\in\Theta\}\). The "best in class" estimator is then defined by
\[f_{\theta^{*}}=\arg\min_{f\in\mathcal{F}_{\Theta}}\mathcal{R}(f).\]
It is worth noting that the risk function \(\mathcal{R}\) is defined with respect to the distribution of the data, which is unknown in practice. Instead, only a sample of size \(n\) is available, and the empirical risk \(\mathcal{R}_{n}\) can be defined and minimized to obtain an empirical risk minimizer, i.e.,
\[f_{\theta_{n}}\in\arg\min_{f\in\mathcal{F}_{\Theta}}\mathcal{R}_{n}(f).\]
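As a concrete toy illustration of these definitions (ours, not part of the original analysis), the following snippet computes an empirical risk minimizer over a simple hypothesis class and contrasts its empirical risk with a Monte-Carlo estimate of its population risk:

```python
import numpy as np

# Toy ERM: hypothesis class = degree-3 polynomials, squared loss.
rng = np.random.default_rng(0)
f0 = np.sin                                   # the (here known) target function
x = rng.uniform(-np.pi, np.pi, 50)            # training sample of size n = 50
y = f0(x) + 0.1 * rng.normal(size=x.size)

coef = np.polyfit(x, y, deg=3)                # empirical risk minimizer (least squares)
emp_risk = np.mean((np.polyval(coef, x) - y) ** 2)

# Monte-Carlo estimate of the population risk R(f_theta_n)
xt = rng.uniform(-np.pi, np.pi, 200_000)
yt = f0(xt) + 0.1 * rng.normal(size=xt.size)
risk = np.mean((np.polyval(coef, xt) - yt) ** 2)
print(emp_risk, risk)                         # the gap illustrates the estimation error
```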
Finally, optimization algorithms such as SGD and Adam lead us to the estimator obtained in practice, i.e., \(f_{\hat{\theta}_{n}}\). The generalization error of \(f_{\hat{\theta}_{n}}\) is defined as \(\mathcal{R}(f_{\hat{\theta}_{n}})-\mathcal{R}(f_{0})\), which can be decomposed (Mohri et al., 2018) as follows:
\[\underbrace{\mathcal{R}(f_{\hat{\theta}_{n}})-\mathcal{R}(f_{0})}_{\text {generalization error}}=\underbrace{\mathcal{R}(f_{\hat{\theta}_{n}})-\mathcal{R}(f_ {\theta_{n}})}_{\text{optimization error}}+\underbrace{\mathcal{R}(f_{\theta_{n}})- \mathcal{R}(f_{\theta^{*}})}_{\text{estimation error}}+\underbrace{\mathcal{R}(f_ {\theta^{*}})-\mathcal{R}(f_{0})}_{\text{approximation error}}.\]
The estimation error is closely related to the complexity of the function class \(\mathcal{F}_{\Theta}\) and the sample size \(n\). For a wide range of problems, the estimation error is of the order \(\mathcal{O}((\log\mathcal{N}(\mathcal{F}_{\Theta},1/n,\|\cdot\|_{\infty})/n)^{1/2})\) (Bartlett et al., 2019; Kohler and Langer, 2021). The approximation error generally depends on the expressive power of the neural networks \(\mathcal{F}_{\Theta}\) and the features of the target function \(f_{0}\), such as its input dimension \(d\) and smoothness \(\beta\). Typical bounds on the approximation error are of the order \(\mathcal{O}((L\mathcal{W})^{-\beta/d})\) (Yarotsky, 2017, 2018; Petersen and Voigtlaender, 2018; Lu et al., 2021b). Due to the high non-convexity and complexity of deep learning problems, the optimization error remains far from understood by existing optimization theories; even proving convergence (to stationary points) of existing methods is a difficult task (Sun, 2020).
In this paper, we investigate functional equivalence, particularly the permutation invariance of the parameterization of feedforward neural networks, and demonstrate that it can lead to a reduction in estimation error and a better understanding of optimization error; see Table 1. In Section 3, we derived a reduced complexity for feedforward neural networks, which helps lower the theoretical bounds on the generalization error of the empirical risk minimizer \(f_{\theta_{n}}\). The improvement is due to accounting for permutation invariance when bounding the complexity and estimation error. We also find that the symmetric structure of the parameter space can make optimization easier. For a feedforward neural network in (3), we denote the weight matrix and bias vector in the \(l\)th hidden layer as \(\theta^{(l)}:=(W^{(l)},b^{(l)})\) for \(l=1,\dots,L\). We say two rows in \(\theta^{(l)}\) are identical if the corresponding two rows of the concatenated matrix \((W^{(l)};b^{(l)})\) are identical, and we let \(d^{*}_{l}\) denote the number of distinct permutations of rows in \(\theta^{(l)}\). We use a vector \((d^{*}_{1},\dots,d^{*}_{L})\) to collect the numbers of distinct permutations in the hidden layers of the network parameterization \(\theta=(\theta^{(1)},\dots,\theta^{(L)})\). Let \(\Delta_{min}(\theta)\) denote the minimum of the \(L_{\infty}\) norm of the difference between distinct rows in \(\theta^{(l)}\) over \(l\in\{1,\dots,L\}\), and let \(\|\theta\|_{\infty}\) denote the maximum absolute value of the entries in the weight matrices and bias vectors of \(\theta\). Then we have the following result.
**Theorem 4**.: _Suppose we have an empirical risk minimizer \(f_{\theta_{n}}(\cdot)=f(\cdot;\theta_{n})\) with parameter \(\theta_{n}\) having \((d^{*}_{1},\dots,d^{*}_{L})\) distinct permutations and \(\Delta_{min}(\theta_{n})=\delta\). For any optimization algorithm \(\mathcal{A}\), if it guarantees producing a convergent solution of \(\theta_{n}\) when its initialization \(\theta^{(0)}_{n}\) satisfies \(\|\theta^{(0)}_{n}-\theta_{n}\|_{\infty}\leq\delta/2\), then any initialization scheme that uses identical random distributions for the entries of weights and biases within a layer will produce a convergent solution with probability at least \(d^{*}_{1}\times\dots\times d^{*}_{L}\times\mathbb{P}(\|\theta^{(0)}-\theta_{n}\| _{\infty}\leq\delta/2)\). Here, \(\theta^{(0)}\) denotes the random initialization, and \(\mathbb{P}(\cdot)\) denotes the probability with respect to the randomness from initialization._
Theorem 4 can be understood straightforwardly. By Theorem 1, if \(f_{\theta_{n}}\) parameterized by \(\theta_{n}\) is a solution, then \(f_{\tilde{\theta}_{n}}\) is also a solution, where \(\tilde{\theta}_{n}\) is a permutation-implemented version of \(\theta_{n}\). The conditions regarding \(\delta\) in Theorem 4 ensure that the convergence regions for permutation-implemented solutions are disjoint. Initialization schemes that use identical random distributions for the weights and biases within a layer preserve the distribution of the random initialization \(\theta^{(0)}\), regardless of its permutations. Therefore, if the random initialization \(\theta^{(0)}\) is in the vicinity of any \(\tilde{\theta}_{n}\), this suffices for the algorithm \(\mathcal{A}\) to converge. Thus, the probability of \(\mathcal{A}\) producing a convergent solution is the probability of the random initialization \(\theta^{(0)}\) falling in the union of these disjoint convergence regions.
It is worth noting that when the parameter space is \(\Theta=[-B,B]^{\mathcal{S}}\), by Theorem 1 the optimization problem is equivalent when restricted to the effective parameter space \(\Theta_{0}\), as defined in Section 3.2. The volume of \(\Theta_{0}\) is \((2B)^{\mathcal{S}}/(d_{1}!\cdots d_{L}!)\). In particular, when \(B\) is fixed, \((2B)^{\mathcal{S}}/(d_{1}!\cdots d_{L}!)\) approaches zero as \(d_{l}\to\infty\) for any \(l=1,\ldots,L\). This is counter-intuitive: increasing the width of a neural network drives the volume of the effective parameter space towards zero. This may explain the observations in Frankle and Carbin (2019); Allen-Zhu et al. (2019); Du et al. (2019) that overparameterized networks tend to be easier to train.
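The following illustrative snippet (ours; a one-hidden-layer network with hypothetical sizes) traces the log-volume \(\mathcal{S}\log(2B)-\log(d!)\) of the effective parameter space as the width \(d\) grows:

```python
import math

# Log-volume of the effective parameter space (2B)^S / d! for a network with
# one hidden layer of width d, input dimension d0 and a single output.
def log_volume(d, d0=10, B=1.0):
    S = d0 * d + d + d + 1                     # weights + biases: (d0*d + d) + (d + 1)
    return S * math.log(2 * B) - math.lgamma(d + 1)

for d in (10, 1_000, 100_000):
    print(d, log_volume(d))   # grows at first, then decreases without bound as d grows
```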
**Remark 4**.: _The popular initialization schemes, including the Xavier and He initialization methods, use normal random numbers to initialize the entries of weight matrices and bias vectors identically within a layer (Glorot and Bengio, 2010; He et al., 2015; Shang et al., 2016; Reddi et al., 2019). By Theorem 4, these initialization schemes reduce the optimization difficulty due to the permutation invariance property._
## 5 Conclusion
In this work, we quantitatively characterized the redundancy in the parameterization of feedforward neural networks and derived a tighter complexity upper bound in terms of the covering number for these networks based on functional equivalence. We found that bias vectors are "for free" in the sense that the increased complexity that comes with adding bias vectors to neural networks is canceled by taking permutation invariance into account. Our improved covering number bound is explicit and holds for neural networks with bias vectors and general activation functions. We discussed the implications of our findings for understanding generalization and optimization. Specifically, we found that using the same strategy of random initialization for weights and biases can reduce the theoretical complexity of optimizations in deep learning. We also observe that increasing the width of neural networks counter-intuitively leads to the effective parameter space's volume tending towards zero, and as a result, the optimization problem seems to become easier.
An important limitation of our work is that we only considered permutation invariance in deriving our complexity bounds for fully-connected feedforward neural networks. It would be interesting to take scale invariance into account and extend the results to other neural networks, such as convolutional neural networks with sparse weight matrices. Additionally, we only considered exactly equivalent functions, while in practice, functional equivalence may be limited to a finite sample, which could lead to further reduced complexity. Lastly, our results only partially explain the benefits of the symmetric structure of neural network parameters for optimization. Further studies may inspire advanced optimization algorithms or designs for deep learning. We believe that addressing the above questions carefully can improve both the theoretical bounds and practical algorithms, and we hope to study them in the future.
|
2310.14901 | Series of Hessian-Vector Products for Tractable Saddle-Free Newton
Optimisation of Neural Networks | Despite their popularity in the field of continuous optimisation,
second-order quasi-Newton methods are challenging to apply in machine learning,
as the Hessian matrix is intractably large. This computational burden is
exacerbated by the need to address non-convexity, for instance by modifying the
Hessian's eigenvalues as in Saddle-Free Newton methods. We propose an
optimisation algorithm which addresses both of these concerns - to our
knowledge, the first efficiently-scalable optimisation algorithm to
asymptotically use the exact inverse Hessian with absolute-value eigenvalues.
Our method frames the problem as a series which principally square-roots and
inverts the squared Hessian, then uses it to precondition a gradient vector,
all without explicitly computing or eigendecomposing the Hessian. A truncation
of this infinite series provides a new optimisation algorithm which is scalable
and comparable to other first- and second-order optimisation methods in both
runtime and optimisation performance. We demonstrate this in a variety of
settings, including a ResNet-18 trained on CIFAR-10. | Elre T. Oldewage, Ross M. Clarke, José Miguel Hernández-Lobato | 2023-10-23T13:11:30Z | http://arxiv.org/abs/2310.14901v2 | # Series of Hessian-Vector Products for Tractable Saddle-Free Newton Optimisation of Neural Networks
###### Abstract
Despite their popularity in the field of continuous optimisation, second-order quasi-Newton methods are challenging to apply in machine learning, as the Hessian matrix is intractably large. This computational burden is exacerbated by the need to address non-convexity, for instance by modifying the Hessian's eigenvalues as in Saddle-Free Newton methods. We propose an optimisation algorithm which addresses both of these concerns -- to our knowledge, the first efficiently-scalable optimisation algorithm to asymptotically use the exact (eigenvalue-modified) inverse Hessian. Our method frames the problem as a series which principally square-roots and inverts the squared Hessian, then uses it to precondition a gradient vector, all without explicitly computing or eigendecomposing the Hessian. A truncation of this infinite series provides a new optimisation algorithm which is scalable and comparable to other first- and second-order optimisation methods in both runtime and optimisation performance. We demonstrate this in a variety of settings, including a ResNet-18 trained on CIFAR-10.
## 1 Introduction
At the heart of many machine learning systems is an optimisation problem over some loss surface. In the field of continuous optimisation, second-order Newton methods are often preferred for their rapid convergence and curvature-aware updates. However, their implicit assumption of a (locally) convex space restricts their usability, requiring the use of mechanisms like damping (Martens, 2010; Dauphin et al., 2014; O'Leary-Roseberry et al., 2021) to avoid degenerate behaviour. In machine learning applications, which are invariably non-convex, high dimensionality further plagues this class of optimiser by creating intractably large Hessian (second-derivative) matrices and a proliferation of saddle points in the search space (Pascanu et al., 2014). These difficulties constrain most practical systems to first-order optimisation methods, such as stochastic gradient descent (SGD) and Adam.
Pascanu et al. (2014) and Dauphin et al. (2014) tackled some of these challenges by proposing Saddle-Free Newton (SFN) methods. In essence, they transform the Hessian by taking absolute values of each eigenvalue, which makes non-degenerate saddle points repel second-order optimisers where typically they would be attractive. Because this transformation would otherwise require an intractable eigendecomposition of the Hessian, they work with a low-rank Hessian approximation, on which this process is achievable, albeit at the cost of introducing an additional source of error.
In this paper, we propose a new route towards SFN optimisation which exploits Hessian-vector products to avoid explicitly handling the Hessian. We use a squaring and square-rooting procedure to take the absolute
value of the eigenvalues without eigendecomposing the Hessian and deploy an infinite series to tractably approximate the expensive square-root and inverse operations. The resulting algorithm is comparable to existing methods in both runtime and optimisation performance, while tractably scaling to larger problems, even though it does not consistently outperform the widely-known Adam (Kingma and Ba, 2015) and KFAC (Martens and Grosse, 2015). To our knowledge, this is the first approximate second-order approach to (implicitly) edit the full Hessian matrix's eigenvalues and be exact in its untruncated form. After summarising previous work in Section 2, we mathematically justify the asymptotic exactness of our algorithm in Section 3 and show its practical use in a range of applications in Section 4. Section 5 concludes the paper.
## 2 Related Work
Although stochastic first-order optimisation methods are the bread and butter of deep learning optimisation, considerable effort has been dedicated to _preconditioned_ gradient methods - methods that compute a matrix which scales the gradient before performing an update step. Newton's method and quasi-Newton methods, which multiply the gradient by the Hessian or an approximation thereof, fall into this category. Other examples include AdaGrad (Duchi et al., 2011) which calculates a preconditioner using the outer product of accumulated gradients, and SHAMPOO (Gupta et al., 2018) which is similar to Adagrad but maintains a separate, full preconditioner matrix for each dimension of the gradient tensor.
Martens (2010) proposes using _Hessian-Free_ (HF) or truncated Newton (Nocedal and Wright, 2006) optimisation for deep learning. The algorithm uses finite differences to approximate the Hessian in combination with the linear conjugate gradient algorithm (CG) to compute the search direction. Like our method, HF implicitly works with the full Hessian matrix and is exact when CG converges.
Pascanu et al. (2014) and Dauphin et al. (2014) present the proliferation of saddle points in high-dimensional optimisation spaces as an explanation for poor convergence of first-order optimisation methods. Various approaches to escaping these saddle points have been proposed. Jin et al. (2017) observe that saddle points are easy to escape by adding noise to the gradient step when near a saddle point, as indicated by a small gradient. Another idea is to normalise the gradient so that progress is not inhibited near critical points due to diminishing gradients (Levy, 2016; Murray et al., 2019).
Saddle points also present a hurdle to second order optimisation, since they become attractive when applying Newton's method. Nevertheless, some work leverages second order information in sophisticated ways to avoid saddle points. For example, Curtis and Robinson (2019) exploit negative curvature information by alternating between classical gradient descent steps and steps in the most extreme direction of negative curvature. Adolphs (2018) builds on this to propose "extreme curvature exploitation", where the eigenvectors corresponding to the most extreme positive and negative eigenvalues are added to the vanilla gradient update step. Anandkumar and Ge (2016) develop an algorithm which finds stationary points with first, second _and third_ derivatives equal to zero, and show that progressing to a fourth-order optimality condition is NP-hard. Truong et al. (2021) project the Newton update step onto subspaces constructed using the positive- and negative-curvature components of the Hessian, allowing them to negate the updates proposed by the latter.
Pascanu et al. (2014) propose the Nonconvex Newton Method, which constructs a preconditioner by decomposing the Hessian and altering it so that all eigenvalues are replaced with their absolute values and very small eigenvalues are replaced by a constant. Unfortunately, explicit decomposition of the Hessian is expensive and does not scale well to machine learning applications. Dauphin et al. (2014) extend this work by proposing the Saddle-Free Newton (SFN) method, which avoids computing and decomposing the exact Hessian by an approach similar to Krylov subspace descent (Vinyals and Povey, 2012), which finds \(k\) vectors spanning the \(k\) most dominant eigenvectors of the Hessian. However, this approach relies on the Lanczos algorithm, which is known to be unstable (Cahill et al., 2000; Scott, 1979). O'Leary-Roseberry et al. (2021) instead invert a low-rank approximation to the Hessian for improved stability. However, their method is susceptible to poor conditioning at initialisation and is limited to very small step sizes in settings with high stochasticity. Consequently, it is unclear how well the algorithm extends beyond the transfer learning settings illustrated.
Instead, our work writes the inverse of the squared and principal square-rooted Hessian as a series, of which we can compute a truncation without explicitly computing or eigendecomposing the Hessian, thereby avoiding instabilities faced by Dauphin et al. (2014) and O'Leary-Roseberry et al. (2021).
There are other examples in machine learning where infinite series are used to motivate approximations to the inverse Hessian (Lorraine et al., 2020; Clarke et al., 2022); we exploit the same construction as Song et al. (2021) to compute the square root of a matrix.
An alternative approach is to precondition the gradient with a curvature matrix that is positive semi-definite by definition, thereby circumventing concerns surrounding saddle points. Notably, the natural gradient method (Amari, 1998) preconditions the gradient with the inverse Fisher information matrix, rather than the inverse Hessian. Whereas the Hessian measures curvature in the model parameters, the Fisher quantifies curvature in terms of the KL-divergence between model and data probability distributions. The natural gradient can be approximated by methods like Factorized Natural Gradient (Grosse and Salakhutdinov, 2015) and Kronecker-Factored Approximate Curvature (KFAC) (Martens and Grosse, 2015). In particular, KFAC approximates the Fisher with a block diagonal matrix, which significantly reduces the memory footprint and reduces the cost of inversion. KFAC also leverages several other "tricks", which are relevant for later discussion. We provide a brief overview below and further details in Appendix A.4:
**Moving average of curvature matrix**: KFAC maintains an online, exponentially-decaying average of the approximate curvature matrix, which improves its approximation thereof and makes the method more robust to stochasticity in mini-batches.
**Adaptive learning rate and momentum factor**: KFAC's update rule incorporates a learning rate and a momentum factor which are both computed adaptively by assuming a locally quadratic model and solving for the local model's optimal learning rate and momentum factor at every iteration.
**Tikhonov damping with Levenberg-Marquardt style adaptation**: KFAC incorporates two damping terms: \(\eta\) for weight regularisation, and \(\lambda\) which is adapted throughout training using Levenberg-Marquardt style updates (Moré, 1978). The damping constant \(\lambda\) can be interpreted as defining a trust region for the update step. When the curvature matrix matches the observed landscape, the trust region is grown by shrinking \(\lambda\); otherwise damping is increased so that optimisation becomes more SGD-like.
KFAC is arguably the most popular second-order method enjoying widespread use in practice, so we include it as an important baseline in Section 4. Section 4.1 describes our studies incorporating similar adaptive mechanisms into our method, which we now proceed to derive.
## 3 Derivations
Suppose we wish to minimise some scalar function \(f(\mathbf{x})\) over the vector quantities \(\mathbf{x}\), which have some optimal value \(\mathbf{x}^{*}\). Denote by \(\mathbf{g}=\nabla_{\mathbf{x}}f=\nabla f(\mathbf{x})\) and \(\mathbf{H}=\nabla_{\mathbf{x}}(\nabla_{\mathbf{x}}f)^{\mathsf{T}}\) the gradient vector and Hessian matrix of \(f\), respectively, with both quantities evaluated at the present solution \(\mathbf{x}\). We make no assumptions about the convexity of \(f(\mathbf{x})\).
### Preliminaries
Under a classical Newton framework, we can approximate a stationary point \(\mathbf{x}^{*}\) by writing a second-order Taylor series for perturbations around some \(\mathbf{x}\). Assuming \(\mathbf{H}\) is invertible, this recovers
\[\mathbf{x}^{*}\approx\mathbf{x}-\mathbf{H}^{-1}\mathbf{g}, \tag{1}\]
where the RHS is the Newton update to \(\mathbf{x}\). In effect, we have locally approximated \(f\) about \(\mathbf{x}\) by a quadratic function, then set \(\mathbf{x}\) to the stationary point of this quadratic. The invertibility of \(\mathbf{H}\) guarantees that this stationary point is unique. However, if \(\mathbf{H}\) is not positive definite -- for instance, if the function is locally non-convex -- that stationary point may be a maximum or saddle point of the approximated space, rather than a minimum.
To address this limitation, we might consider the eigendecomposition of \(\mathbf{H}\). Since \(\mathbf{H}\) is real and symmetric for non-degenerate loss functions, its eigenvalues are real and its eigenvectors may be chosen to be orthonormal. We can interpret the eigenvectors as the 'principal directions of convexity', and the eigenvalues as the corresponding magnitudes of convexity in each direction (where negative eigenvalues encode concavity). As \(\mathbf{H}^{-1}\) has the same eigenvectors as \(\mathbf{H}\) and reciprocal eigenvalues, we may interpret the product \(\mathbf{H}^{-1}\mathbf{g}\) as a transformation of the gradient vector, with anisotropic scaling governed by the directions and magnitudes of convexity in \(\mathbf{H}\). Moreover, this product gives _exactly_ the updates necessary to move along each principal direction of convexity to the stationary value in that direction, according to the locally quadratic approximation implied by \(\mathbf{H}\). This is illustrated in Figure 1.
As positive eigenvalues are associated with directions of convex curvature, \(\mathbf{H}^{-1}\) selects updates in these directions which _decrease_ the loss function. Conversely, \(\mathbf{H}^{-1}\) selects updates which _increase_ the loss function in the directions associated with negative eigenvalues -- directly opposing our goal of minimising \(f\). Intuitively, we would like to reverse the direction of the latter updates, such that they are decreasing \(f\). This is equivalent to changing the sign of the corresponding eigenvalues.
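A two-dimensional sketch (ours, mirroring the geometry of Figure 1) makes this concrete: on a quadratic with one convex and one concave direction, the Newton step lands exactly on the saddle, while flipping the sign of the negative eigenvalue yields a step that decreases \(f\):

```python
import numpy as np

# Quadratic saddle f(x) = 0.5 x^T H x with eigenvalues 2 (convex) and -1 (concave).
H = np.diag([2.0, -1.0])
f = lambda x: 0.5 * x @ H @ x
x = np.array([1.0, 1.0])
g = H @ x                                      # gradient of f at x

newton = x - np.linalg.solve(H, g)             # lands exactly on the saddle (0, 0)
H_sfn = np.diag(np.abs(np.diag(H)))            # |H|: absolute-value eigenvalues
sfn = x - np.linalg.solve(H_sfn, g)            # concave component is negated

print(f(x), f(newton), f(sfn))                 # 0.5 -> 0.0 (saddle) vs -2.0 (descends)
```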
This intuitive idea was presented by Pascanu et al. (2014), and Dauphin et al. (2014) establish a more direct derivation using a trust region framework, which motivates taking the absolute value of every eigenvalue in a _Saddle-Free Newton_ (SFN) method (Figure 1). However, its implementation in deep learning is challenged by the intractably large Hessian matrices of non-trivial neural networks. Previous work (Dauphin et al., 2014; O'Leary-Roseberry et al., 2021) tackles this by computing a low-rank approximate Hessian, whose eigendecomposition may be calculated directly, changed as required and approximately inverted. While such an approach secures tractability, the cascade of approximations threatens overall accuracy.
### Absolute Values as Square-Rooted Squares
Our proposed method seeks to transform the eigenvalues of \(\mathbf{H}\) without computing its full eigendecomposition. This approach is inspired by the observation that, for scalar \(x\), \(|x|=+\sqrt{x^{2}}\), where we specifically take the
Figure 1: Motivation for Saddle-Free Newton methods. This locally quadratic surface has a saddle point, and its Hessian gives two principal directions of curvature. From any initial point, SGD gives an update neglecting curvature, and Newton's method converges immediately to the saddle point. Exact Saddle-Free Newton takes absolute values of the Hessian eigenvalues, negating the components of the Newton update in concave directions and thus changing the saddle point from an attractor to a repeller. Our series-based method is an approximate Saddle-Free Newton algorithm which converges to the exact Saddle-Free Newton result.
positive square root. In the matrix case, we may define \(\mathbf{S}\) as a square root of a square matrix \(\mathbf{A}\) iff \(\mathbf{A}=\mathbf{S}\mathbf{S}\). For a square, positive semi-definite \(\mathbf{A}\), there is a unique positive semi-definite square root \(\mathbf{B}\), which we term the _principal_ square root of \(\mathbf{A}\); we will write \(\mathbf{B}=\sqrt[+]{\mathbf{A}}\).
If \(\mathbf{A}\) is real and symmetric, we may eigendecompose it as \(\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{\mathsf{T}}\), where \(\mathbf{Q}\) is the orthonormal matrix whose columns are the eigenvectors of \(\mathbf{A}\) and \(\mathbf{\Lambda}\) the diagonal matrix whose elements are the corresponding eigenvalues of \(\mathbf{A}\). Then, we have \(\mathbf{B}=\mathbf{Q}\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{Q}^{\mathsf{T}}\). Since raising the diagonal matrix \(\mathbf{\Lambda}\) to the \(k\)th power is equivalent to raising each diagonal element to the \(k\)th power, \(\mathbf{B}\) has the same eigenvectors as \(\mathbf{A}\), but the eigenvalues of \(\mathbf{B}\) are the square roots of those of \(\mathbf{A}\). By taking the principal square root, we guarantee that all the eigenvalues of \(\mathbf{B}\) are non-negative, hence we have taken the positive square root of each eigenvalue in turn.
This reveals a route to transforming our Hessian \(\mathbf{H}\) by taking the absolute value of its eigenvalues. Consider the eigendecomposition \(\mathbf{H}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{\mathsf{T}}\), noting that \(\mathbf{H}^{2}=\mathbf{Q}\mathbf{\Lambda}^{2}\mathbf{Q}^{\mathsf{T}}\) is positive semi-definite by construction, as its eigenvalues are squares of real numbers. But then \(\sqrt[+]{\mathbf{H}^{2}}\) is the unique positive semi-definite square root of \(\mathbf{H}^{2}\), and each eigenvalue of \(\sqrt[+]{\mathbf{H}^{2}}\) is the positive square root of the square of the corresponding eigenvalue of \(\mathbf{H}\) -- equivalently, its absolute value. Thus, we may take the absolute value of each eigenvalue of \(\mathbf{H}\) by squaring \(\mathbf{H}\) and then taking the principal square root, i.e. by computing \(\sqrt[+]{\mathbf{H}^{2}}\).
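This identity is straightforward to verify numerically; the following short check (ours) uses SciPy's matrix square root:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2                              # a random symmetric "Hessian"
w, Q = np.linalg.eigh(H)

lhs = sqrtm(H @ H).real                        # principal square root of H^2
rhs = Q @ np.diag(np.abs(w)) @ Q.T             # eigenvalues replaced by |eigenvalues|
print(np.allclose(lhs, rhs))                   # True (up to numerical error)
```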
### Inverse Square Root Series
To use this transformed \(\mathbf{H}\) as a second-order preconditioner, we must also invert it, so the matrix of interest is \(\left(\sqrt[n]{\mathbf{H}^{2}}\right)^{-1}\). We now develop a series approximation to this quantity. For scalars \(z\), we may exploit the generalised binomial theorem to write
\[(1-z)^{-\frac{1}{2}}=\sum_{k=0}^{\infty}\frac{1}{2^{2k}}\binom{2k}{k}z^{k} \tag{2}\]
Applying the root test for convergence, a sufficient condition for the convergence of this series is \(\limsup_{n\to\infty}|z^{n}|^{\frac{1}{n}}<1\). We generalise this series to the matrix case by replacing the absolute value \(|\cdot|\) with any compatible sub-multiplicative matrix norm \(\|\cdot\|\) and writing \(\mathbf{I}-\mathbf{Z}\) in place of \(1-z\). Ideally, we would set \(\mathbf{Z}=\mathbf{I}-\mathbf{H}^{2}\) and recover a power series directly, but to ensure convergence we will require a scaling factor \(V\) such that \(\mathbf{Z}=\mathbf{I}-\frac{1}{V}\mathbf{H}^{2}\). With this addition, we have
\[(\mathbf{H}^{2})^{-\frac{1}{2}}=\frac{1}{\sqrt{V}}\sum_{k=0}^{\infty}\frac{1} {2^{2k}}\binom{2k}{k}\left(\mathbf{I}-\frac{1}{V}\mathbf{H}^{2}\right)^{k}. \tag{3}\]
For this matrix series to converge, we require \(\limsup_{n\to\infty}\left\|\left(\mathbf{I}-\frac{1}{V}\mathbf{H}^{2}\right)^{n}\right\|^{\frac{1}{n}}<1\). By Gelfand's formula, this limit superior is simply the spectral radius of \(\mathbf{I}-\frac{1}{V}\mathbf{H}^{2}\) which, this being a real symmetric matrix, is exactly the largest absolute value of its eigenvalues. Denoting the largest-magnitude eigenvalue of \(\mathbf{H}^{2}\) by \(\lambda_{\max}\), our convergence condition is thus equivalent to \(V>\frac{1}{2}\lambda_{\max}\). Further, if we strengthen the bound to \(V>\lambda_{\max}\), then \(\left(\mathbf{I}-\frac{1}{V}\mathbf{H}^{2}\right)^{k}\) is positive semi-definite for \(k=0,1,2,\cdots\), so our series, regardless of where it is truncated, produces a positive semi-definite matrix. We are thus guaranteed to be asymptotically targeting the principal square root. Since \(\left\|\mathbf{H}^{2}\right\|\geq\lambda_{\max}\) for any sub-multiplicative norm \(\|\cdot\|\), a more practical bound is \(V>\left\|\mathbf{H}^{2}\right\|\). See Appendix D for further analysis of the correctness, convergence and behaviour around critical points of this series.
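To check (3) numerically, the following sketch (ours) sums the series for a small symmetric matrix with a mixed-sign spectrum and compares the result against the exact \(|\mathbf{H}|^{-1}\) obtained by eigendecomposition; with \(V>\lambda_{\max}\), the truncation error decays geometrically:

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))   # random orthonormal eigenvectors
lam = np.array([2.0, -1.0, 0.5, -0.8, 1.2])    # mixed-sign spectrum (saddle region)
H = Q @ np.diag(lam) @ Q.T

V = 1.1 * np.max(lam ** 2)                     # V > lambda_max(H^2)
Z = np.eye(5) - (H @ H) / V

S, term, coeff = np.eye(5), np.eye(5), 1.0
for k in range(1, 500):
    coeff *= (2 * k) * (2 * k - 1) / (4.0 * k * k)  # ratio of successive binomial terms
    term = term @ Z
    S = S + coeff * term
S /= np.sqrt(V)

exact = Q @ np.diag(1.0 / np.abs(lam)) @ Q.T   # (H^2)^(-1/2) = |H|^(-1)
print(np.abs(S - exact).max())                 # small: the truncated series has converged
```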
### Hessian Products, Choice of \(V\) and Series Acceleration
Although we have avoided directly inverting or square-rooting a Hessian-sized matrix, explicitly computing this series remains intractable. Instead, recall that our quantity of interest for second-order optimisation is
\(\left(\sqrt[+]{\mathbf{H}^{2}}\right)^{-1}\mathbf{g}\), and consider the series obtained by multiplying (3) by \(\mathbf{g}\):
\[(\mathbf{H}^{2})^{-\frac{1}{2}}\mathbf{g}=\frac{1}{\sqrt{V}}\sum_{k=0}^{\infty} \frac{1}{2^{2k}}\binom{2k}{k}\left(\mathbf{I}-\frac{1}{V}\mathbf{H}^{2}\right)^ {k}\mathbf{g}. \tag{4}\]
Denoting by \(\mathbf{a}_{k}\) the \(k\)th term of this summation, we have \(\mathbf{a}_{0}=\frac{1}{\sqrt{V}}\mathbf{g}\) and \(\mathbf{a}_{k}=\frac{2k(2k-1)}{4k^{2}}\left(\mathbf{a}_{k-1}-\frac{1}{V} \mathbf{H}\mathbf{H}\mathbf{a}_{k-1}\right)\). With two applications of the Hessian-vector product trick (Pearlmutter, 1994), we can compute \(\mathbf{H}\mathbf{H}\mathbf{a}_{k-1}\) at the cost of two additional forward and backward passes through the model -- a cost vastly smaller than that of storing, manipulating and inverting the full Hessian. By unrolling this recursion, we can thus efficiently compute the summation of a finite number of the \(\mathbf{a}_{k}\).
Under this framework, we have ready access to the product \(\mathbf{H}^{2}\mathbf{g}\), so can use the loose adaptive heuristic \(V\geq\frac{\left\|\mathbf{H}^{2}\mathbf{g}\right\|}{\|\mathbf{g}\|}\), which we found to be the most performant strategy for adapting \(V\).
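A minimal PyTorch sketch of this recursion (ours; the loss function and constants below are placeholders, not the paper's implementation) uses the double-backward trick for Hessian-vector products:

```python
import torch

def sfn_direction(f, x, K=8, V=100.0):
    """Truncated series (4): approximate (H^2)^(-1/2) g using only
    Hessian-vector products. f maps the parameter vector x to a scalar loss."""
    g = torch.autograd.grad(f(x), x, create_graph=True)[0]

    def hvp(v):
        # Pearlmutter's trick: grad_x <g, v> = H v for a constant vector v
        return torch.autograd.grad(g @ v, x, retain_graph=True)[0]

    gd = g.detach()
    V = max(V, (hvp(hvp(gd)).norm() / gd.norm()).item())  # loose bound V >= ||H^2 g|| / ||g||
    a, s = gd.clone(), gd.clone()                         # a_0 = s_0 = g, as in Algorithm 1
    for k in range(1, K):
        a = (2 * k) * (2 * k - 1) / (4.0 * k * k) * (a - hvp(hvp(a)) / V)
        s = s + a
    return s / V ** 0.5   # approximates (H^2)^(-1/2) g; the step is then x <- x - lr * this

# Example usage on a small non-convex function of a parameter vector:
x = torch.randn(20, requires_grad=True)
direction = sfn_direction(lambda p: (p ** 4 - p ** 2).sum(), x)
```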
In practice, we found (4) to converge slowly, and thus benefit from series acceleration. From a variety of strategies, we found the most successful to be a modification due to Sablonnière (1991) of Wynn's \(\epsilon\)-algorithm (Wynn, 1956a). Letting \(\mathbf{s}_{m}\) be the \(m\)th partial sum of (4), the algorithm defines the following recursion:
\[\boldsymbol{\epsilon}_{m}^{(-1)}=0,\qquad\boldsymbol{\epsilon}_{m}^{(0)}= \mathbf{s}_{m},\qquad\boldsymbol{\epsilon}_{m}^{(c)}=\boldsymbol{\epsilon}_{ m+1}^{(c-2)}+\left(\left\lfloor\frac{c}{2}\right\rfloor+1\right)\left( \boldsymbol{\epsilon}_{m+1}^{(c-1)}-\boldsymbol{\epsilon}_{m}^{(c-1)}\right)^ {-1}. \tag{5}\]
We employ the Samelson vector inverse \(\mathbf{a}^{-1}=\frac{\mathbf{a}}{\mathbf{a}^{\dagger}\mathbf{a}}\) as suggested by Wynn (1962). Using these definitions, the sequence \(\boldsymbol{\epsilon}_{m}^{(2l)}\) for \(m=0,1,2,\cdots\) is the sequence of partial sums of the series \(\mathbf{a}_{k}\) accelerated \(l\) times. Thus, we expect the most accurate approximation of (4) to be given by maximising \(l\) and \(m\), acknowledging there is a corresponding increase in computational cost. Pseudo-code for series acceleration is provided in Appendix A.5.
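A compact sketch of this acceleration scheme (ours, not the authors' implementation), operating on a list of vector-valued partial sums with the Samelson inverse, is:

```python
import numpy as np

def samelson_inv(a):
    return a / np.dot(a, a)          # Samelson vector inverse: a / <a, a>

def accelerate(partial_sums, n_accel):
    """Sablonniere-modified Wynn epsilon algorithm (5) on vector partial sums.
    Requires len(partial_sums) >= 2 * n_accel + 1; returns eps_0^(2 * n_accel)."""
    eps_prev = [np.zeros_like(s) for s in partial_sums]   # column c = -1
    eps_curr = list(partial_sums)                         # column c = 0
    for c in range(1, 2 * n_accel + 1):
        factor = c // 2 + 1                               # the (floor(c/2) + 1) factor in (5)
        eps_next = [eps_prev[m + 1] + factor * samelson_inv(eps_curr[m + 1] - eps_curr[m])
                    for m in range(len(eps_curr) - 1)]
        eps_prev, eps_curr = eps_curr, eps_next
    return eps_curr[0]                                    # most-accelerated estimate
```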
Algorithm 1 incorporates all these elements to form a complete neural network optimisation algorithm. While expanding the series of (4) to a large number of terms may be arbitrarily expensive, we show in the next Section that useful progress can be made on tractable timescales.
```
while training continues do
    Compute training loss, gradient \(\mathbf{g}\), Hessian \(\mathbf{H}\)
    \(V\leftarrow\max\left\{V,\frac{\left\|\mathbf{H}^{2}\mathbf{g}\right\|}{\|\mathbf{g}\|}\right\}\)
    \(\mathbf{a}_{0},\mathbf{s}_{0}\leftarrow\mathbf{g}\)
    for \(k\gets 1\) to \(K-1\) do
        \(\mathbf{a}_{k}\leftarrow\frac{2k(2k-1)}{4k^{2}}\left(\mathbf{a}_{k-1}-\frac{1}{V}\mathbf{H}\mathbf{H}\mathbf{a}_{k-1}\right)\)
        \(\mathbf{s}_{k}\leftarrow\mathbf{s}_{k-1}+\mathbf{a}_{k}\)
    end for
    Compute final term \(\hat{\mathbf{s}}_{\infty}\) after \(N\) accelerations of the series \(\mathbf{s}_{K-1-2N},\mathbf{s}_{K-2N},\cdots,\mathbf{s}_{K-1}\) (see Algorithm 2 in Appendix A.5)
    \(\mathbf{w}\leftarrow\mathbf{w}-\frac{\eta}{\sqrt{V}}\hat{\mathbf{s}}_{\infty}\)
end while
```
**Algorithm 1** Series of Hessian-Vector Products for Tractable Saddle-Free Newton Optimisation
## 4 Experiments
We now move on to empirical evaluation of our algorithm. For all experiments, we use ASHA (Li et al., 2020) to tune each algorithm and dataset combination on the validation loss, sampling 100 random hyperparameter configurations and setting the maximum available budget based on the model and data combination. Further experimental details and the final hyperparameter settings for all experiments can be found in Appendix A.3, with code available at [http://github.com/rmclarke/SeriesOfHessianVectorProducts](http://github.com/rmclarke/SeriesOfHessianVectorProducts).
We will begin by considering UCI Energy (Tsanas and Xifara, 2012), which is small enough to allow an exact implementation of our algorithm (using eigendecompositions instead of the Neumann series approximation) as a proof of concept, and lends itself to the full-batch setting -- the best case scenario for second-order methods. We then move to a setting without these conveniences, namely Fashion-MNIST (Xiao et al., 2017), which is large enough to require mini-batching and has too many parameters to allow for exact computation of the Hessian. We go on to increasingly difficult scenarios, in terms of both model and dataset size, by considering SVHN (Netzer et al., 2011) and CIFAR-10 (Krizhevsky, 2009) using ResNet-18 architectures.
For UCI Energy, we generate a random dataset split using the same sizes as Gal and Ghahramani (2016); for Fashion-MNIST, SVHN and CIFAR-10 we separate the standard test set and randomly choose \(\frac{1}{6}\), \(\frac{1}{6}\) and \(\frac{1}{10}\) (respectively) of the remaining data to form the validation set. The numerical data for all experiments can be found in Appendix B.1. While we will usually present wall-clock time on the \(x\)-axis, plots with iteration steps on the \(x\)-axis are available in Appendix B.2.
For all experiments, we present both training and test loss. The optimisation literature often focuses only on the objective function at hand, i.e. the training loss, since a strong optimiser should be able to solve the function it is given. However, in machine learning the target is always generalisation, i.e. performing well on the unseen test set, and the training loss is only a means toward this end. Since we hope to apply our method to deep learning problems, we consider it important to present both metrics together.
### UCI Energy
We begin with a small-scale experiment on UCI Energy as a proof of concept, training for \(6\,000\) full-batch training epochs. We compare our algorithm to a number of baselines1:
Footnote 1: We also include an L-BFGS (Liu and Nocedal, 1989) baseline in Appendix B.3
**Exact SFN**: Full-Hessian implementation of the absolute-value eigenvalue strategy of Pascanu et al. (2014), where we compute the eigenvalue decomposition and take the absolute value of the eigenvalues. We additionally replace eigenvalues near zero with a small constant and then compute the exact inverse of the resulting saddle-free Hessian. For this method, we tune the learning rate, momentum, the threshold for replacing small eigenvalues, and the constant that replaces them.
**Ours**: Our implementation of Algorithm 1, using tuned learning rate, momentum, series length \(K\) and order of acceleration \(N\). As described in Section 3.4, we adapt \(V\) using the loose bound \(V\geq\frac{\|\mathbf{H}^{2}\mathbf{g}\|}{\|\mathbf{g}\|}\) starting with an initial value of \(100\), as we found minimal benefit to explicitly tuning \(V\). We also considered more accurate approximations to \(V\) that would attain a tighter bound (such as computing the largest eigenvalue using power iteration), but found these held little to no benefit.
**SGD**: Classical stochastic gradient descent, with a tuned learning rate.
**Adam**: (Kingma and Ba, 2015) We tune all the parameters, i.e. learning rate, \(\epsilon\), \(\beta_{1}\) and \(\beta_{2}\).
**KFAC (DeepMind)**: (Martens and Grosse, 2015) We use the implementation of Botev and Martens (2022) which includes adaptive learning rates, momentum, and damping; we tune the initial damping.
The first algorithm above is an exact version of our algorithm, which is tractable in this particular setting. We also considered including an exact implementation of the Newton second-order update but this diverged rapidly, presumably due to the non-convexity of the optimisation task, so we do not include it here.
Figure 2 shows the training and test losses both in terms of wall-clock time and as a function of the number of optimisation steps. Exact SFN achieves the best training loss, as we may hope from it being an exact SFN method. This is encouraging, since it provides proof of concept that our approach is generally sensible in the exact setting. However, it does not converge as quickly as may be desired -- in comparison, KFAC (DeepMind) and Adam make much faster progress, even when considering the change in loss per iteration, rather than wall-time. Our algorithm deflects from the Exact SFN trend, as its approximate nature would suggest, but does not approach the performance exhibited by KFAC (DeepMind).
KFAC (DeepMind) includes clever adaptation mechanisms and smoothing of the curvature matrix which may give it an advantage over the other algorithms. To investigate this, we include additional variants on the baselines:
**Exact SFN (Adaptive)**: Same as Exact SFN, but with adaptive learning rate, momentum and damping strategies as used by KFAC (DeepMind) (see Section 2 and Appendix A.4 for details), as well as an exponential moving average of the curvature matrix. We tune only the initial damping, which subsumes the need for manually replacing small eigenvalues with a constant.
**Ours (Adaptive)**: Our implementation of Algorithm 1, incorporating the adaptive learning rate, momentum and damping used by KFAC (DeepMind). We tune the initial damping, number of update steps and order of acceleration.
**KFAC (Kazuki)**: (Martens and Grosse, 2015) This corresponds to the default settings for KFAC in Osawa. This version of KFAC is not adaptive and does not smooth the curvature matrix by means of averaging. We tune the damping, learning rate and momentum.
Figure 3 shows the training and test loss profiles in wall-clock time. The best test and training losses are now achieved by Exact SFN (Adaptive). We note that this adaptive version of Exact SFN converges considerably faster than the non-adaptive version, reinforcing our and Martens and Grosse's views on the importance of adapting the learning rate, momentum and damping.
Figure 2: Median training (left) and test (right) MSEs achieved over wall-clock time (top) and training iterations (bottom) on UCI Energy by various optimisers in the full-batch setting, bootstrap-sampled from 50 random seeds. Optimal hyperparameters were tuned with ASHA. Note the logarithmic horizontal axes.
In all cases (KFAC, Exact SFN and Ours), the adaptive version of the algorithm performs significantly better than the non-adaptive version. Although our adaptive algorithm matches Exact SFN and beats SGD and both KFAC versions in terms of final test loss, it is still surpassed by Adam and Exact SFN (Adaptive). Notably, our non-adaptive algorithm does not match the performance of Exact SFN, nor does our adaptive algorithm match Exact SFN (Adaptive). Clearly, we sacrifice training performance by using an approximation to Exact SFN and by not smoothing the curvature matrix.
KFAC (DeepMind) achieves the second best training loss, though not test loss. KFAC (Kazuki) diverges quickly at the start of training, which is also unexpected given that its hyperparameters were tuned and that it behaves reasonably on the later, more difficult problems. We hypothesise that adaptive parameters are an important component of its behaviour and that this setting does not lend itself well to fixed parameters (which is supported by the observation that all the adaptive versions performed better than their non-adaptive counterparts).
In this setting, it seems that short of using exact Hessians, Adam is the best choice of optimiser, displaying the second-best training and test losses and completing faster (in wall-clock time) than the second-order methods. However, we are encouraged that our adaptive algorithm's performance is not far off the exact version and continue to more realistic settings in the sections that follow.
### Larger Scale Experiments
Most practical applications are too large to permit full-batch training and so the remainder of our experiments incorporate mini-batching. Since second-order methods may benefit from larger batch sizes, we tune for batch size, choosing from the set \(\{50,100,200,400,800,1600,3200\}\).
We show the best (lowest) losses achieved by each algorithm in each problem setting in Figure 5 as well as the training and test loss profiles in Figure 4. Although KFAC (DeepMind) usually attains the best training loss, there is no clear consistent winner in terms of the best test loss achieved across all problems, despite each algorithm having been tuned specifically for each problem.
Surprisingly, KFAC (DeepMind) performs poorly on Fashion-MNIST, where KFAC (Kazuki) and Adam perform well. First-order optimisers seem well-suited to SVHN, where SGD and Adam achieve the best test losses. On CIFAR-10, Adam and the two KFAC variants perform about the same in terms of training loss, but KFAC (DeepMind) performs significantly better in terms of test loss.
Figure 3: Median training (left) and test (right) MSEs plotted against the log of wall-clock time, with additional optimisers included. Results are on UCI Energy in the full-batch setting and are bootstrap-sampled from 50 random seeds. Optimal hyperparameters were tuned with ASHA.
Figure 4: Median training (left) and test (right) loss achieved on Fashion-MNIST (top), SVHN (centre) and CIFAR-10 (bottom) by various optimisers using the optimal hyperparameters chosen by ASHA. Values are bootstrap-sampled from 50 random seeds.
Our findings seem to validate the widespread use of Adam in practice, given its simplicity as compared to KFAC (DeepMind). However, the performance of KFAC (DeepMind) on CIFAR-10 does indicate that there may be benefit to considering second-order optimisers more seriously.
Although our method is not the best on any of the datasets, its performance is not far from that of the other methods. Where the KFAC variants and Adam occasionally diverge during training (see Figure 4), our method is reasonably stable. Moreover, by leveraging large batch sizes, we converge in fewer epochs and less time than the first-order methods in some settings (e.g. Ours (Adaptive) on Fashion-MNIST is faster than both SGD and Adam), despite the additional complexity of our algorithm.
### Discussion
We posit that the gap between our performance and expected gains is due to error in our series approximation, of which there are two sources. The first is truncation error, which can be reduced to some extent by increasing the number of terms in the series, though the potency of this will depend on how slow the series is to converge. The second is numerical error: if the Hessian is poorly conditioned, then the repeated multiplications required to compute more terms may cause the series to diverge -- even if we have chosen \(V\) appropriately, so that the series should converge in theory.
By increasing the number of terms in the series, we can test whether the error is due to truncating the series or numerical error. From our experiment in Appendix B.4 examining the effect of truncation length, we find that for UCI Energy, increasing the number of terms in the series improves performance. However, for the larger-scale problems, we found that increasing the number of steps in the series to be arbitrarily large did not necessarily lead to improved performance. There is thus a trade-off between choosing a sufficiently high number of steps to approximate the desired matrix and choosing sufficiently few to avoid numerical issues. Strategies to improve conditioning of the Hessian may also help to improve this trade-off.
We consider KFAC, which also approximates the curvature, yet proves quite successful on the benchmark suite.2 KFAC's Kronecker factorisation supports smoothing the curvature estimate with a moving average, which reduces the impact of occasional, poor-quality approximations. Unfortunately, our full-Hessian approximation cannot support such smoothing due to storage requirements. Moreover, KFAC's approximation (which discards the off-diagonal blocks) can be understood intuitively as ignoring the correlations between weights of different layers. In contrast, the rate of convergence of our series varies throughout optimisation, and the impact of truncating the series on the resulting curvature matrix is more difficult to intuit. It may prove fruitful to leverage the same block-diagonal approximation in our method, but with smaller matrices at less risk of ill-conditioning. This would also allow the use of smoothing, which may further improve performance.
Footnote 2: In fact, based on KFAC’s performance in all our experiments, we found it surprising that KFAC is not more widely utilised in practice. That said, we also found its performance to vary widely between implementations, which may explain this observation.
There are links between our series approximation and the conjugate gradient (CG) method (Hestenes and Stiefel, 1952). CG solves a linear system of the form \(\mathbf{Ax}=\mathbf{b}\) iteratively. At the \(k\)-th iteration, CG finds the best \(\mathbf{x}\) in the \(k\)-th Krylov subspace (where the \(k\)-th Krylov subspace is the subspace generated by repeated applications of \(\mathbf{A}\) to the residual \(\mathbf{r}\), i.e. \(\mathcal{K}_{k}=\text{span}\{\mathbf{r},\mathbf{Ar},...,\mathbf{A}^{k-1} \mathbf{r}\}\)) where \(\mathbf{r}=\mathbf{b}-\mathbf{Ax}_{0}\). The inverse Neumann approximation truncated at the \(k\)-th term also finds a vector in the \(k\)-th Krylov subspace, but it is not guaranteed to be the optimal one, and so we may expect the Neumann approximation to be worse than CG.3 Although the series we present in (3) is slightly different, since it is computing the square and square-root at the same time as the inverse, we note that this may provide a clue as to the poor convergence behaviour of the series in general. Future work may consider leveraging insights from the conjugate gradient method to better approximate the inverted saddle-free Hessian.
Footnote 3: However, there is literature showing that Neumann series are more stable than CG in neural networks (Shaban et al., 2019; Liao et al., 2018)
## 5 Conclusions
In this work, we have motivated, derived and justified an approach to implementing Saddle-Free Newton optimisation of neural networks. By development of an infinite series, we are able to take the absolute
Figure 5: Ranking of optimisers according to lowest training (left) and test (right) losses achieved on Fashion-MNIST (top), SVHN (centre) and CIFAR-10 (bottom). Error bars show standard error in the mean. Values are the minimum of the loss profile across time, generated by bootstrap sampling from 50 random seeds.
values of Hessian eigenvalues without any explicit decomposition. With the additional aid of Hessian-vector products, we further avoid any explicit representation of the Hessian. To our knowledge, this is the first approximate second-order method to alter Hessian eigenvalues with an asymptotic exactness guarantee, and whose convergence is limited by compute time rather than available memory. Our algorithm tractably scales to larger networks and datasets, and although it does not consistently outperform Adam or a well-engineered KFAC implementation, its behaviour is comparable to these baselines, in terms of test loss and run time.
Improvements to the inverse approximation such as leveraging Kronecker factorisation or ideas from the conjugate gradient method may provide fruitful avenues of research for future saddle-free Hessian-based optimisation algorithms such as ours. Strategies to reduce numerical error, such as methods to improve the condition number of the Hessian, should also be investigated. Our findings generally support the widespread use of Adam, which performed well on most benchmarks, often beating KFAC despite being a much simpler algorithm. However, the strong performance of KFAC on CIFAR-10, our most complex benchmark, indicates that there may yet be significant gains by applying second-order methods to deep learning.
## Acknowledgements
We acknowledge computation provided by the CSD3 operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk).
Ross Clarke acknowledges funding from the Engineering and Physical Sciences Research Council (project reference 2107369, grant EP/S515334/1).
|
2307.14464 | Single Channel Speech Enhancement Using U-Net Spiking Neural Networks | Speech enhancement (SE) is crucial for reliable communication devices or
robust speech recognition systems. Although conventional artificial neural
networks (ANN) have demonstrated remarkable performance in SE, they require
significant computational power, along with high energy costs. In this paper,
we propose a novel approach to SE using a spiking neural network (SNN) based on
a U-Net architecture. SNNs are suitable for processing data with a temporal
dimension, such as speech, and are known for their energy-efficient
implementation on neuromorphic hardware. As such, SNNs are thus interesting
candidates for real-time applications on devices with limited resources. The
primary objective of the current work is to develop an SNN-based model with
comparable performance to a state-of-the-art ANN model for SE. We train a deep
SNN using surrogate-gradient-based optimization and evaluate its performance
using perceptual objective tests under different signal-to-noise ratios and
real-world noise conditions. Our results demonstrate that the proposed
energy-efficient SNN model outperforms the Intel Neuromorphic Deep Noise
Suppression Challenge (Intel N-DNS Challenge) baseline solution and achieves
acceptable performance compared to an equivalent ANN model. | Abir Riahi, Éric Plourde | 2023-07-26T19:10:29Z | http://arxiv.org/abs/2307.14464v1 | # Single Channel Speech Enhancement Using U-Net Spiking Neural Networks
###### Abstract
Speech enhancement (SE) is crucial for reliable communication devices or robust speech recognition systems. Although conventional artificial neural networks (ANN) have demonstrated remarkable performance in SE, they require significant computational power, along with high energy costs. In this paper, we propose a novel approach to SE using a spiking neural network (SNN) based on a U-Net architecture. SNNs are suitable for processing data with a temporal dimension, such as speech, and are known for their energy-efficient implementation on neuromorphic hardware. As such, SNNs are interesting candidates for real-time applications on devices with limited resources. The primary objective of the current work is to develop an SNN-based model with comparable performance to a state-of-the-art ANN model for SE. We train a deep SNN using surrogate-gradient-based optimization and evaluate its performance using perceptual objective tests under different signal-to-noise ratios and real-world noise conditions. Our results demonstrate that the proposed energy-efficient SNN model outperforms the Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge) baseline solution and achieves acceptable performance compared to an equivalent ANN model.
speech enhancement, spiking neural network, surrogate gradient
## I Introduction
Speech enhancement (SE) is an important signal processing task that improves speech quality by removing additive background noise while maintaining speech intelligibility. SE is essential for many applications such as speech recognition, communication devices, video conferencing systems and hearing aids.
Classic SE techniques such as spectral subtraction [1], Wiener filtering [2], minimum mean square error estimation [3], and subspace methods [4] have been widely used and have been shown to be effective in certain situations. However, they have limitations when dealing with non-stationary noise and low Signal-to-Noise Ratio (SNR) conditions. Deep learning models, on the other hand, have shown promising results in addressing these challenges by leveraging large amounts of training data to learn complex mappings between noisy and clean speech signals [5].
In fact, in recent years, deep learning architectures have demonstrated great potential for deploying robust SE systems in real-world environments [6]. The success of conventional artificial neural networks (ANN) can be attributed to the availability of large datasets and high-performance computational resources. However, most SE applications require real-time computation on limited-resource devices such as cellphones, which poses a significant challenge to deploying ANNs. Indeed, these networks generally require significant computational resources and consume considerable energy.
Spiking neural networks (SNN), which are low-power deep neural networks, have emerged as a promising alternative to ANNs. SNNs offer comparable computational capabilities to ANNs while consuming significantly less energy. The asynchronous, binary, and sparse event-driven processing of SNNs is one of the primary reasons behind their energy efficiency on neuromorphic hardware. SNN training poses a significant challenge due to the non-differentiable nature of their spiking function. Consequently, extensive research efforts have been directed towards exploring novel SNN learning methods. Recently, a technique known as surrogate gradient optimization [9] has demonstrated remarkable robustness in training deep SNN architectures using backpropagation on a wide range of tasks, including classification [10], speech command recognition [11], depth estimation [12], among others.
However, a very limited number of research works have been done for SE using SNNs [13, 14, 15]. Among these, Wall _et al._[13] propose a three-layer SNN architecture that employs lateral inhibition. To generate the training dataset, the authors introduce three levels of additive Gaussian white noise and subsequently compute the Short Time Fourier Transform (STFT) of the signal. The log scaled STFT magnitude is then encoded into discrete spike timing using the Bens Spiker Algorithm (BSA) [16]. The proposed SNN model then processes the encoded input spikes using a masking approach to eliminate uncorrelated spikes. In other words, the enhanced STFT is obtained by performing an element-wise multiplication of the complex noisy STFT with the SNN's output spike train. The results of their experiments demonstrate favorable performance in terms of SNR. However, the proposed approach has some limitations. Specifically, the SNN utilized in this study does not incorporate any form of learning, and the architecture is relatively shallow. Furthermore, the proposed model could have been subjected to more extensive testing using a diverse array of noise types to validate its generalizability. Xing _et
al._[14] propose a similar approach by leveraging a three-layer SNN architecture that incorporates lateral inhibition. The authors utilize a log scaled STFT magnitude as the input current for the SNN model, which then generates a binary mask. The enhanced STFT is obtained by computing an element-wise multiplication with the binary mask. Their experimental design includes the incorporation of five distinct real-world noise types, and the resulting SE model demonstrates good performance with respect to SNR. However, the SE process relies on the elimination of uncorrelated noise components, and the SNN architecture does not integrate any learning strategies. A more recent work by Intel developed a simple SNN-based baseline solution in the context of the Intel Neuromorphic Deep Noise Suppression Challenge (Intel N-DNS Challenge) [15]. The proposed method utilizes a three-layer feedforward sigma-delta neural network (SDNN) to mask the STFT Magnitude. The delta-encoded STFT magnitude is employed as the input to the SNN, which generates a multiplicative mask for computing the enhanced STFT. The baseline SNN is trained using the surrogate gradient method.
We propose a novel approach for single-channel SE in real-world noise conditions using a supervised SNN framework and a U-Net inspired architecture with direct input mapping. The current work presents several important contributions toward the development of SNNs for SE that address limitations of prior studies.
Firstly, we propose an SNN-based model for SE, which learns to map the logarithmic power spectrum (LPS) of noisy speech to that of clean speech. This represents a significant advancement in the field, as to the best of our knowledge, this is the first attempt to apply an SNN architecture to the SE task using a direct mapping strategy instead of masking. Secondly, we employ a direct encoding approach that allows the SNN to simultaneously encode acoustic features of speech and suppress noise, which has not been explored in previous SNN-based SE models. Thirdly, we use trainable neuron parameters (decay strength and membrane threshold) for enhanced learning. Finally, the performance of the proposed SNN model is compared to an ANN model with a similar architecture, demonstrating slightly lower but still comparable performance.
This work further highlights the potential of SNNs in the SE task and supports the notion that SNNs can provide a viable energy-efficient alternative to conventional ANN-based models. Overall, it contributes towards advancing the development of SNNs for SE applications, with potential implications for the broader field of speech processing and communication technologies.
## II Background
### _Spiking neural network_
SNNs are a type of neural network that model the behavior of biological neurons [17]. Unlike traditional ANNs that use continuous valued activation functions, SNNs use discrete spikes to communicate information between neurons. The dynamics of SNNs are governed by the spiking behavior of individual neurons, which emit a spike when their membrane potential reaches a threshold. The spike trains then propagate to post-synaptic neurons, where they contribute to the computation of post-synaptic currents.
The Leaky Integrate-and-Fire (LIF) neuron model with current-based synapses [18] is a mathematical model widely used in SNNs due to its simplicity and computational efficiency. The dynamics of a LIF neuron \(i\) in the layer \(l\) can be described as:
\[I_{i}^{(l)}(t+1)=\alpha I_{i}^{(l)}(t)+\sum_{j}W_{ij}^{(l)}S_{j}^{(l-1)}(t)+ \sum_{j}V_{ij}^{(l)}S_{j}^{(l)}(t) \tag{1}\]
\[U_{i}^{(l)}(t+1)=\beta U_{i}^{(l)}(t)+I_{i}^{(l)}(t)-U_{th}S_{i}^{(l)}(t) \tag{2}\]
\[S_{i}^{(l)}(t)=\Theta(U_{i}^{(l)}(t)-U_{th}) \tag{3}\]
where \(I_{i}^{(l)}(t)\) represents the input current at \(t\), \(U_{i}^{(l)}(t)\) represents the membrane potential at \(t\), \(S_{i}^{(l)}(t)\) represents the spike train at \(t\), \(W_{ij}^{(l)}\) and \(V_{ij}^{(l)}\) are synaptic weight matrices for the feedforward and recurrent connections, respectively. \(\alpha=\exp(-\frac{\Delta_{t}}{\tau_{syn}})\) and \(\beta=\exp(-\frac{\Delta_{t}}{\tau_{mem}})\) are the decay strengths for the input current and the membrane potential, respectively.
The spike behavior of individual neurons is described by Equation 3, where \(\Theta\) is the Heaviside step function that outputs a spike when the membrane potential of the neuron exceeds a threshold \(U_{th}\).
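As a minimal illustration of Eqs. (1)-(3), the following NumPy sketch simulates one recurrent LIF layer; the layer sizes, time constants and Poisson-like input spikes are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of one recurrent LIF layer following Eqs. (1)-(3).
import numpy as np

rng = np.random.default_rng(0)
T, n_in, n_out = 100, 30, 10
dt, tau_syn, tau_mem, U_th = 1e-3, 5e-3, 10e-3, 1.0
alpha, beta = np.exp(-dt / tau_syn), np.exp(-dt / tau_mem)

W = 0.3 * rng.standard_normal((n_out, n_in))         # feedforward weights W
V = 0.1 * rng.standard_normal((n_out, n_out))        # recurrent weights V
S_in = (rng.random((T, n_in)) < 0.05).astype(float)  # Poisson-like input spikes

I, U, S = np.zeros(n_out), np.zeros(n_out), np.zeros(n_out)
rates = []
for t in range(T):
    I_next = alpha * I + W @ S_in[t] + V @ S          # Eq. (1)
    U_next = beta * U + I - U_th * S                  # Eq. (2), soft reset
    S = (U_next > U_th).astype(float)                 # Eq. (3), Heaviside
    I, U = I_next, U_next
    rates.append(S.mean())
print(np.mean(rates))                                 # average firing rate
```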
### _Surrogate Gradient_
Surrogate gradients have recently emerged as a powerful tool for training SNNs directly using backpropagation [9, 10]. In traditional ANNs, backpropagation computes the gradient of the loss function with respect to the weights of the network using the chain rule. However, this method cannot be directly applied to SNNs because the spiking behavior of neurons is non-differentiable, making it impossible to compute gradients through the spiking activation function. Surrogate gradients provide an alternative way to compute gradients in SNNs by approximating the non-differentiable spiking activation function with a differentiable function. This allows for the use of backpropagation to update the weights of the network during training.
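A minimal PyTorch sketch of this idea is given below: the forward pass keeps the Heaviside step, while the backward pass substitutes an ArcTan-shaped surrogate derivative (the surrogate family used later in Section III-C); the sharpness constant \(k\) is an illustrative assumption.

```python
# Minimal sketch of surrogate-gradient spiking: Heaviside forward,
# arctan-derivative-shaped gradient backward.
import torch

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, u_minus_th):
        ctx.save_for_backward(u_minus_th)
        return (u_minus_th > 0).float()           # Heaviside step

    @staticmethod
    def backward(ctx, grad_output):
        (u_minus_th,) = ctx.saved_tensors
        k = 10.0                                   # surrogate sharpness (assumed)
        surrogate = k / (1.0 + (k * u_minus_th) ** 2)  # d/dx arctan(k x)
        return grad_output * surrogate

u = torch.randn(5, requires_grad=True)
s = SpikeFn.apply(u - 1.0)                         # threshold U_th = 1.0
s.sum().backward()
print(s, u.grad)                                   # binary spikes, smooth grads
```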
## III Method
### _System overview_
In this work, we propose an SNN-based approach for SE as depicted in Figure 1. The proposed system operates in the time-frequency domain, where the STFT is used to decompose the input speech signal into its constituent spectral components. Subsequently, the magnitude of the STFT is computed and transformed into LPS using the logarithmic and power functions. The SNN model is trained to learn a non-linear mapping from the noisy input LPS to an enhanced LPS, leveraging the inherent non-linear properties of the underlying speech signal. The inverse transform of the SNN output is computed using the exponential and square root operations, yielding the enhanced STFT magnitude. Finally, the enhanced
STFT magnitude is combined with the noisy STFT phase to reconstruct the enhanced speech signal.
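A minimal sketch of this time-frequency pipeline is shown below, with the STFT parameters taken from Section IV-B (16 kHz sampling, 32 ms frames, 16 ms hop, Hann window, i.e. 512/256 samples); the input file name and the `enhance` placeholder standing in for the trained SNN are hypothetical.

```python
# Minimal sketch of the SE pipeline: STFT -> LPS -> SNN -> inverse -> ISTFT.
import numpy as np
import librosa

def enhance(lps):                 # placeholder for the trained SNN mapping
    return lps

y, sr = librosa.load("noisy.wav", sr=16000)            # hypothetical input
stft = librosa.stft(y, n_fft=512, hop_length=256, window="hann")
lps = np.log(np.abs(stft) ** 2 + 1e-10)                # log power spectrum

lps_hat = enhance(lps)                                  # SNN forward pass
mag_hat = np.sqrt(np.exp(lps_hat))                      # exp and square root
stft_hat = mag_hat * np.exp(1j * np.angle(stft))        # reuse noisy phase
y_hat = librosa.istft(stft_hat, hop_length=256, window="hann")
```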
### _Direct input encoding_
In this work, we adopt the direct encoding approach [19, 20, 21]. We apply the noisy LPS directly as input to the SNN, which is then directly encoded into binary spikes using the LIF neurons within the first layer of the network. The direct encoding method has been shown to improve activation sparsity and energy efficiency of SNNs, as reported in [21]. It provides better control of input information flow by attenuating irrelevant inputs and increasing activation sparsity.
### _Model architecture_
Recently, the U-Net architecture, which is widely employed in image segmentation [22], has been adapted for the task of SE using ANNs [23, 24, 25]. The proposed SNN architecture (shown in Figure 2) is composed of layers arranged in a U-shaped configuration, with skip connections between the encoder and decoder. The encoder component of the network extracts relevant features from the input LPS, whereas the decoder generates the enhanced LPS.
The SNN takes as input the LPS and consists of eight encoder layers, followed by seven decoder layers, and a final readout layer. Downsampling within the encoder is accomplished via strided convolutions. The decoder employs the neuromorphic-friendly nearest neighbor method for upsampling. The readout layer is composed of non-spiking neurons.
The neurons in this SNN are modeled using LIF neurons, which emulate the fundamental behavior of biological neurons. The LIF neurons accumulate input signals over time and emit a spike when the membrane potential reaches a predefined threshold. To enable efficient training of our SNN using backpropagation, we use a differentiable approximation of the spiking activation function (the Heaviside step function). In this work, we use the derivative of the ArcTan function as a surrogate function. This approach enables us to efficiently optimize the parameters of our SNN using gradient-based methods while preserving the spiking behavior of the LIF neurons.
### _Loss function_
We use the log-spectral distance (LSD) loss:
\[L_{LSD}=\frac{1}{M}\sum_{m=0}^{M-1}\sqrt{\frac{1}{K}\sum_{k=0}^{K-1}(X[m,k]- \hat{X}[m,k])^{2}} \tag{4}\]
where \(X[m,k]\) and \(\hat{X}[m,k]\) represent, respectively, the clean and estimated LPS for the \(m^{th}\) time frame and \(k^{th}\) frequency bin.
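A direct implementation of Eq. (4) is straightforward; the following minimal PyTorch sketch assumes LPS tensors of shape (frames, bins), with illustrative dimensions.

```python
# Minimal sketch of the LSD loss in Eq. (4): an RMS over frequency bins,
# averaged over the M time frames.
import torch

def lsd_loss(lps_clean: torch.Tensor, lps_est: torch.Tensor) -> torch.Tensor:
    per_frame = torch.sqrt(torch.mean((lps_clean - lps_est) ** 2, dim=-1))
    return per_frame.mean()

x = torch.randn(100, 257)       # clean LPS, illustrative shape (M, K)
x_hat = torch.randn(100, 257)   # estimated LPS
print(lsd_loss(x, x_hat))
```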
## IV Experiments
### _Dataset_
In this study, we utilized a publicly available dataset [26, 27], which comprises recordings of both clean speech and noisy speech, with various types of noise added at different SNRs. The speech data is characterized by a gender-balanced composition of English speakers, with the clean speech recordings of sentences extracted from the voice bank corpus (VCTK) [28] with a sampling frequency of 48 kHz.
The training set of the dataset consists of approximately \(10\) hours of speech signals from \(28\) speakers, with \(10\) different types of noise added at SNR values of \(15\), \(10\), \(5\), and \(0\) dB. The types of noise include two artificial and eight real-world noise recordings obtained from the Demand dataset [29]. On the other hand, the test set comprises \(30\) minutes of speech signals from two speakers, with five types of real-world noise from the Demand dataset added at SNR values of \(17.5\), \(12.5\), \(7.5\), and \(2.5\) dB. We randomly split the training set into training and validation sets.
### _Data preprocessing_
In this study, a standard preprocessing procedure was applied, which included downsampling of input speech signals in the temporal domain to \(16\) kHz. STFT was computed on the speech signals using a frame length of \(32\) ms with a hop length of \(16\) ms and Hann window.
### _Training setup_
To train the model, we use the Adam optimizer [30] with a learning rate of \(0.002\), decay rates \(\beta_{1}=0.5\) and \(\beta_{2}=0.9\), and a batch size of \(32\) for \(60\) epochs. Weights of the convolutional layers are initialized from a normal distribution with zero mean and \(0.2\) standard deviation. Decay strengths and membrane threshold of the LIF neurons are initialized from a normal distribution with mean values of \(0.05\) and \(1.0\), respectively, and both with a standard deviation of \(0.01\).
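The following PyTorch sketch illustrates this configuration on a small stand-in module; `TinyBlock` and its parameter names (`decay`, `threshold`) are illustrative assumptions, not the actual U-Net.

```python
# Minimal sketch of the training setup described above.
import torch
from torch import nn

class TinyBlock(nn.Module):
    """Stand-in for one encoder block: strided conv + learnable LIF params."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, stride=2)
        self.decay = nn.Parameter(torch.empty(8))      # LIF decay strength
        self.threshold = nn.Parameter(torch.empty(8))  # LIF membrane threshold

model = TinyBlock()
for name, p in model.named_parameters():
    if "conv.weight" in name:
        nn.init.normal_(p, mean=0.0, std=0.2)          # conv weights ~ N(0, 0.2)
    elif "decay" in name:
        nn.init.normal_(p, mean=0.05, std=0.01)        # decay ~ N(0.05, 0.01)
    elif "threshold" in name:
        nn.init.normal_(p, mean=1.0, std=0.01)         # threshold ~ N(1.0, 0.01)

optimizer = torch.optim.Adam(model.parameters(), lr=0.002, betas=(0.5, 0.9))
```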
### _Evaluation metrics_
Evaluating the performance of an SE system is a critical aspect of determining its efficacy in real-world scenarios. In this work, we use three widely adopted objective metrics to evaluate the performance of the proposed SNN-based SE system.

The first metric used is the Perceptual Evaluation of Speech Quality (PESQ) [31], which has been widely adopted in the SE literature to evaluate the similarity between the enhanced speech signal and the original speech signal. PESQ is a perceptual metric that is intended to correlate well with human perception of speech quality. It ranges from \(-0.5\) to \(4.5\). The second metric used is the Short-Time Objective Intelligibility (STOI) [32]. STOI is a measure of the intelligibility of the speech signal and has been shown to be highly correlated with human speech intelligibility judgments. It ranges from \(0\) to \(1\). Lastly, we used the Deep Noise Suppression Mean Opinion Score (DNSMOS) ANN-based estimator [33]. DNSMOS comprises three scores, namely, the quality of the speech signal (SIG), the background noise (BAK), and the overall quality of the enhanced speech (OVRL). These scores range from \(1\) to \(5\). For all the metrics used, a higher score indicates better performance of the algorithm.

Fig. 1: Speech enhancement system overview.
## V Results
In this section, we present the results of the proposed SNN-based speech enhancement approach and compare it against classical and state-of-the-art approaches. Table I summarizes the numerical results obtained from our experiments using the aforementioned metrics.
First, the proposed approach demonstrates significant improvements across most of the evaluation metrics when compared to the unprocessed noisy speech signals, indicating its effectiveness in SE. Furthermore, the outcomes of the performance evaluation demonstrate that the proposed SNN approach surpasses several state-of-the-art techniques in terms of PESQ and has a comparable STOI value.
Moreover, we present an evaluation of the proposed SNN model in comparison with a recently published SNN-based approach, specifically the baseline SDNN model introduced in [15] for the Intel N-DNS Challenge. To ensure benchmarking consistency, we train and evaluate the SDNN model using the same dataset as the proposed model. The proposed U-Net based SNN significantly outperforms the SDNN baseline across all evaluation metrics.
It is worth noting that the authors of [15] report much better results in their original paper; however, the SDNN model was trained in [15] on a large corpus of \(500\) hours of audio data, whereas the dataset utilized in our study comprises only around \(10\) hours of audio. Therefore, our proposed approach seems to overcome the limitations of dataset size and variability. These findings attest to the efficacy of the proposed method for SE and suggest its potential for real-world applications.
Additionally, we compare the proposed SNN architecture with an ANN having an equivalent architecture. Our findings indicate that the proposed SNN approach achieves slightly lower but still comparable performance in terms of DNSMOS, demonstrating its efficacy in enhancing speech signals while employing fewer computational resources than the ANN-based approach.
It is important to note that while the proposed SNN approach has exhibited competitive performance for speech enhancement, it is essential to conduct further investigations to explore its performance under diverse frameworks, such as different speech encoders, neuron models, loss functions, or masking-based methods. Such investigations would help to elucidate the effectiveness of the proposed approach across a broader range of scenarios and enable a more thorough understanding of its capabilities and limitations.
## VI Conclusion
In this paper, we proposed a novel single-channel speech enhancement approach based on a U-Net SNN architecture. We show that SNNs can perform large-scale regression tasks such as speech enhancement. The objective evaluation results show that the proposed approach outperforms many state-of-the-art ANN-based methods. It also outperforms the Intel N-DNS Challenge SDNN baseline solution. Furthermore, we achieved competitive results in comparison with an equivalent ANN architecture, indicating the promising potential of SNNs for speech enhancement applications. Overall, the present study highlights the potential of SNNs in the domain of speech enhancement and provides a valuable contribution to the ongoing research efforts in this area.
Fig. 2: Proposed SNN architecture. |
2305.08807 | Smoothness and monotonicity constraints for neural networks using ICEnet | Deep neural networks have become an important tool for use in actuarial
tasks, due to the significant gains in accuracy provided by these techniques
compared to traditional methods, but also due to the close connection of these
models to the Generalized Linear Models (GLMs) currently used in industry.
Whereas constraining GLM parameters relating to insurance risk factors to be
smooth or exhibit monotonicity is trivial, methods to incorporate such
constraints into deep neural networks have not yet been developed. This is a
barrier for the adoption of neural networks in insurance practice since
actuaries often impose these constraints for commercial or statistical reasons.
In this work, we present a novel method for enforcing constraints within deep
neural network models, and we show how these models can be trained. Moreover,
we provide example applications using real-world datasets. We call our proposed
method ICEnet to emphasize the close link of our proposal to the individual
conditional expectation (ICE) model interpretability technique. | Ronald Richman, Mario Wüthrich | 2023-05-15T17:14:52Z | http://arxiv.org/abs/2305.08807v1 | # Smoothness and monotonicity constraints for neural networks using ICEnet
###### Abstract
Deep neural networks have become an important tool for use in actuarial tasks, due to the significant gains in accuracy provided by these techniques compared to traditional methods, but also due to the close connection of these models to the Generalized Linear Models (GLMs) currently used in industry. Whereas constraining GLM parameters relating to insurance risk factors to be smooth or exhibit monotonicity is trivial, methods to incorporate such constraints into deep neural networks have not yet been developed. This is a barrier for the adoption of neural networks in insurance practice since actuaries often impose these constraints for commercial or statistical reasons. In this work, we present a novel method for enforcing constraints within deep neural network models, and we show how these models can be trained. Moreover, we provide example applications using real-world datasets. We call our proposed method _ICEnet_ to emphasize the close link of our proposal to the individual conditional expectation (ICE) model interpretability technique.
**Keywords.** Smoothing, Whittaker-Henderson Smoothing, Graduation, Monotonicity, Deep Neural Networks, Constrained Likelihood, Individual Conditional Expectation
## 1 Introduction
Deep neural networks have recently emerged as a promising technique for use in tasks across the various traditional disciplines of actuarial science, including pricing, reserving, experience analysis and mortality forecasting. Moreover, deep learning has been applied in emerging areas of actuarial practice, such as analysis of telematics data, natural language processing and image recognition. These techniques provide significant gains in accuracy compared to traditional methods, while the close connection of these models to the Generalized Linear Models (GLMs) currently used in industry enables easier understanding of these models for classically trained actuaries than other machine learning paradigms such as boosted trees.
On the other hand, one seeming disadvantage of neural network models is that the output from these models may exhibit undesirable characteristics for actuarial purposes. A first issue is that predictions may vary in a rough manner with changes in insurance risk factors. In some contexts, such as general insurance pricing, this may be problematic to explain to customers,
intermediaries or other stakeholders, or may indicate problems with data credibility. For example, consider a general insurance pricing model that uses driver age as a risk factor. Usually, from a commercial perspective, as a customer ages, it is expected that her insurance rates would change smoothly, however, unconstrained output from neural networks may produce rates that vary roughly with age. It is difficult to explain to customers why rates might increase one year, then decrease the next, and, moreover, extra costs might arise from needing to address these types of queries. A second issue is that actuaries often wish to constrain predictions from models to increase (or decrease) in a monotonic manner with some risk factors, for example, increasing sums insured should lead, other things being equal, to higher average costs per claim and worsening bonus-malus scores should imply higher expected frequencies of claims. Whereas constraining GLM parameters relating to insurance risk factors to be smooth or exhibit monotonicity is trivial, since the coefficients of GLMs can be modified directly, methods to introduce these constraints into deep neural networks have not yet been developed. Thus, a significant barrier for the adoption of neural networks in practice exists.
Typical practice when manually building actuarial models is that actuaries attempt to enforce smoothness or monotonicity constraints within models by modifying model structure or coefficients manually, by applying suitable functional forms within models (such as parametric functions or regularization), or by applying post-hoc smoothing techniques; we mention, e.g., Whittaker-Henderson smoothing (Whittaker, 1922; Henderson, 1924), which adds regularization to parameters. In the case of GLMs for general insurance pricing, the simple linear structure of these models is often exploited by first fitting an unconstrained GLM and then specifying some simplifications whereby coefficients are forced to follow the desired functional form or meet a smoothness criterion. In more complex models, such as neural networks, operating directly on model coefficients or structure is less feasible.
To overcome this challenge, here we present methods for enforcing smoothness and monotonicity constraints within deep neural network models. The key idea can be summarized as follows: as a first step insurance risk factors that should be constrained are identified, then, the datasets used for model fitting are augmented to produce pseudo-data that reflect the structure of the identified variables. In the next step, we design a multi-input/multi-output neural network that can process jointly the original observation as well as the pseudo-data. Finally, a joint loss function is used to train the network to make accurate predictions while enforcing the desired constraints on the pseudo-data. We show how these models can be trained and provide example applications using a real-world general insurance pricing dataset.
The method we propose can be related to the Individual Conditional Expectation (ICE) model interpretability technique of Goldstein et al. (2015), therefore, we call our proposal the _ICEnet_, to emphasize this connection. To enable the training of neural networks with constraints, the ICEnet structures networks to output a vector of predictions derived from the pseudo-data input to the network; these predictions derived from the network for the key variables of interest on the pseudo-data are exactly equivalent to the outputs used to derive an ICE plot. The selected constraints then constrain these ICEnet outputs from the network to be as smooth or monotonic as required. For this purpose, we will use Fully Connected Networks (FCNs) with embedding layers for categorical variables to perform supervised learning, and, in the process, create the ICEnet outputs for the variables of interest.
**Literature review.** Practicing actuaries have often smoothed results of experience analyses in
both general and life insurance, usually on the assumption that outputs of models that do not exhibit smoothness are anomalous. For example, Goldburd et al. (2016) and Anderson et al. (2007) discuss manual smoothing methods in the context of general insurance pricing with GLMs. In life insurance, two main types of techniques are used to graduate (i.e. smooth) mortality tables; in some jurisdictions, such as the United Kingdom and South Africa, combinations of polynomial functions have often been used to fit mortality data directly, see, for example, Forfar et al. (1987), whereas post-hoc smoothing methods such as Whittaker-Henderson smoothing (Whittaker, 1922; Henderson, 1924) have often been used in the United States. The use of splines and Generalized Additive Models (GAMs) have also been considered for these purposes, see Goldburd et al. (2016) in the context of general insurance and Debon et al. (2006) in the context of mortality data.
More recently, penalized regression techniques have been utilized for variable selection and categorical level fusion for general insurance pricing. These are often based on the Least Absolute Shrinkage and Selection Operator (LASSO) regression of Tibshirani (1996) and its extensions (Hastie et al., 2015); here we mention the fused LASSO which produces model coefficients that vary smoothly, see Tibshirani et al. (2005), which is particularly useful for deriving smoothly varying models for ordinal categorical data. In the actuarial literature, example applications of the constrained regression approach are Devriendt et al. (2021), who propose an algorithm for incorporating multiple penalties for complex general insurance pricing data and Henckaerts et al. (2018), who provide a data-driven binning approach designed to produce GLMs which closely mirror smooth GAMs.
Within the machine and deep learning literature, various methods for enforcing monotonicity constraints within Gradient Boosting Machines (GBM) and neural network models have been proposed. Monotonicity constraints are usually added to GBMs by modifying the process used to fit decision trees when producing these models; an example of this can be found in the well-known XGBoost library of Chen and Guestrin (2016). Since these constraints are implemented specifically by modifying how the decision trees underlying the GBM are fit to the data, the same process cannot be applied for neural network models. Within the deep learning literature, monotonicity constraints have been addressed by constraining the weights of neural networks to be positive, see, for example, Sill (1997) and Daniels and Velikova (2010), who generalize the earlier methods of Sill (1997), or by using specially designed networks, such as the lattice networks of You et al. (2017). In the finance literature, Kellner et al. (2022) proposes a method to ensure monotonicity of multiple quantiles output from a neural network, by adding a monotonicity constraint to multiple outputs from the network; this idea is similar to what we implement below. Another approach to enforcing monotonicity involves post-processing the outputs of machine learning models with a different algorithm, such as isotonic regression; see Wuthrich and Ziegel (2023) for a recent application of this to ensure that outputs of general insurance pricing models are autocalibrated.
On the other hand, the machine and deep learning literature seemingly has not addressed the issue of ensuring that predictions made with these models vary smoothly, and, moreover, the methods proposed within the literature for monotonicity constraints cannot be directly applied to enforce smoothness constraints. Thus, the ICEnet proposal of this work, which adds flexible penalties to a specially designed neural network, fills a gap in the literature by showing how both monotonicity and smoothness constraints can be enforced with the same method in deep
learning models.
**Structure of the manuscript.** The rest of the manuscript is structured as follows. Section 2 provides notation and discusses neural networks and machine learning interpretability. Section 3 defines the ICEnet, which is applied to the French Motor Third Party Liability data in Section 4. A local approximation to the ICEnet is presented in Section 5. Discussion of the results and conclusions are given in Section 6. The supplementary provides the code and further numerical analysis of the ICEnet.
## 2 Neural networks and Individual Conditional Expectations
We begin by briefly introducing supervised learning with neural networks, expand these definitions to FCNs using embedding layers and discuss their training process. With these building blocks, we then present the ICEnet proposal in the next section.
### Supervised learning and neural networks
We work in the usual setup of supervised learning. Independent observations \(y_{n}\in\mathbb{R}\) of a variable of interest have been made for instances (insurance policies) \(1\leq n\leq N\). In addition, covariates \(\mathbf{x}_{n}\) have been collected for all instances \(1\leq n\leq N\). These covariates can be used to create predictions \(\widehat{y}_{n}\in\mathbb{R}\) of the variable of interest. In what follows, note that we drop the subscript from \(\mathbf{x}_{n}\) for notational convenience. The covariates in \(\mathbf{x}\) are usually of two main types in actuarial tasks: the first of these are real-valued covariates \(\mathbf{x}^{[r]}\in\mathbb{R}^{q_{r}}\), where the superscript \([\cdot]\) represents the subset of the vector \(\mathbf{x}\), \(r\) is the set of real-valued covariates and where there are \(q_{r}\) real-valued variables. The second type of covariates are categorical, which we assume have been coded as positive integers; we represent these as \(\mathbf{x}^{[c]}\in\mathbb{N}^{q_{c}}\), where the set of categorical covariates is \(c\). Thus, \(\mathbf{x}=(\mathbf{x}^{[r]},\mathbf{x}^{[c]})\). In actuarial applications, often predictions are proportional to a scalar unit of exposure \(v_{n}\in\mathbb{R}^{+}\), thus, each observation is a tensor \((y,\mathbf{x}^{[r]},\mathbf{x}^{[c]},v)\in\mathbb{R}\times\mathbb{R}^{q_{r}}\times \mathbb{N}^{q_{c}}\times\mathbb{R}^{+}\). In this work, we will use deep neural networks, which are efficient function approximators, for predicting \(y\). We represent the general class of neural networks as \(\Psi_{W}(\mathbf{x})\), where \(W\) are the network parameters (weights and biases). Using these, we aim to study the regression function
\[\Psi_{W}:\mathbb{R}^{q_{r}}\times\mathbb{N}^{q_{c}}\rightarrow\mathbb{R}, \qquad\mathbf{x}\ \mapsto\ \widehat{y}=\Psi_{W}(\mathbf{x})\,v.\]
We follow Chapter 7 of Wuthrich and Merz (2023) for the notation defining neural networks. Neural networks are machine learning models constructed by composing non-linear functions (called layers) operating on the vector \(\mathbf{x}\), which is the input to the network. A network consisting of only a single layer of non-linear functions has depth \(d=1\), and is called a shallow network. More complex networks consisting of multiple layers with depth \(d\geq 2\) are called deep networks. We denote the \(i\)-th layer by \(\mathbf{z}^{(i)}\). These non-linear functions (layers) transform the input variables \(\mathbf{x}\) into new representations, which are optimized to perform well on the supervised learning task, using a process called representation learning. Representation learning can be denoted as the composition
\[\mathbf{x}\ \mapsto\ \mathbf{z}^{(d:1)}(\mathbf{x})\ \stackrel{{\text{\tiny def }}}{{=}}\ \left(\mathbf{z}^{(d)}\circ\cdots\circ\mathbf{z}^{(1)}\right)(\mathbf{x})\ \in\ \mathbb{R}^{q_{d}},\]
where \(d\in\mathbb{N}\) is the number of layers \(\mathbf{z}^{(i)}\) of the network and \(q_{i}\in\mathbb{N}\) are the dimensions of these layers for \(1\leq i\leq d\). Thus, each layer \(\mathbf{z}^{(i)}:\mathbb{R}^{q_{i-1}}\to\mathbb{R}^{q_{i}}\) transforms the representation at the previous stage to a new, modified representation.
FCNs define the \(j\)-th component (called neuron or unit) of each layer \(\mathbf{z}^{(i)}\), \(1\leq j\leq q_{i}\), as the mapping
\[\mathbf{z}=(z_{1},\ldots,z_{q_{i-1}})^{\top}\in\mathbb{R}^{q_{i-1}}\quad\mapsto \quad z_{j}^{(i)}(\mathbf{z})=\phi\left(\sum_{k=1}^{q_{i-1}}w_{j,k}^{(i)}z_{k}+b_{ j}^{(i)}\right), \tag{2.1}\]
for a non-linear activation function \(\phi:\mathbb{R}\to\mathbb{R}\), and where \(z_{j}^{(i)}(\cdot)\) is the \(j\)-th component of layer \(\mathbf{z}^{(i)}(\cdot)\), \(w_{j,k}^{(i)}\in\mathbb{R}\) is the regression weight of this \(j\)-th neuron connecting to the \(k\)-th neuron of the previous layer, \(z_{k}=z_{k}^{(i-1)}\), and \(b_{j}^{(i)}\in\mathbb{R}\) is the intercept or bias for the \(j\)-th neuron in layer \(\mathbf{z}^{(i)}\). It can be seen from (2.1) that the neurons \(z_{j}^{(i)}(\cdot)\) of a FCN connect to all of the neurons \(z_{k}^{(i-1)}(\cdot)\) in the previous layer \(\mathbf{z}^{(i-1)}\) through the weights \(w_{j,k}^{(i)}\), explaining the description of these networks as "fully-connected".
Combining these layers, a generic FCN regression function can be defined as follows
\[\mathbf{x}\ \mapsto\ \Psi_{W}(\mathbf{x})\ \overset{\text{\tiny def}}{=}\ g^{-1} \left(\sum_{k=1}^{q_{d}}w_{k}^{(d+1)}z_{k}^{(d:1)}(\mathbf{x})+b^{(d+1)}\right), \tag{2.2}\]
where \(g^{-1}(\cdot)\) is a suitably chosen inverse link function that transforms the outputs of the network to the scale of the observations \(y\). The notation \(W\) in \(\Psi_{W}(\mathbf{x})\) indicates that we collect all the weights \(w_{j,k}^{(i)}\) and biases \(b_{j}^{(i)}\) in \(W\), giving us a network parameter of dimension \((q_{d}+1)+\sum_{i=1}^{d}(q_{i-1}+1)q_{i}\), where \(q_{0}\) is the dimension of the input \(\mathbf{x}\).
For most supervised learning applications in actuarial work, a neural network of the form (2.1)-(2.2) is applied to the covariate \(\mathbf{x}\) to create a single prediction \(\widehat{y}=\Psi_{W}(\mathbf{x})v\). Below, we will define how the same network \(\Psi_{W}(\cdot)\) can be applied to a vector of observations to produce multiple predictions, which will be used to constrain the network.
State-of-the-art neural network calibration is performed using a Stochastic Gradient Descent (SGD) algorithm, performed on mini-batches of observations. To calibrate the network, an appropriate loss function \(L(\cdot,\cdot)\) must be selected. For general insurance pricing, the loss function is often the deviance loss function of an exponential-family distribution, such as the Poisson distribution for frequency modelling or the Gamma distribution for severity modelling. For more details on the SGD procedure, we refer to Goodfellow et al. (2016), and for a detailed explanation of exponential-family modelling for actuarial purposes see Chapters 2 and 4 of Wuthrich and Merz (2023).
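For concreteness, a minimal PyTorch sketch of an FCN of the form (2.1)-(2.2) with a log link, together with the Poisson deviance loss used for frequency modelling, is given below; the layer sizes and activation are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch: FCN regression function Psi_W with log link g^{-1} = exp,
# plus the Poisson deviance loss used as L^D for frequency modelling.
import torch
from torch import nn

class FCN(nn.Module):
    def __init__(self, q0, sizes=(64, 32, 16)):
        super().__init__()
        layers, prev = [], q0
        for q in sizes:                        # layers z^(1), ..., z^(d)
            layers += [nn.Linear(prev, q), nn.Tanh()]
            prev = q
        self.body = nn.Sequential(*layers)
        self.head = nn.Linear(prev, 1)         # weights w^(d+1), bias b^(d+1)

    def forward(self, x, v):
        # prediction y_hat = Psi_W(x) * v, with exposure v
        return torch.exp(self.head(self.body(x))).squeeze(-1) * v

def poisson_deviance(y, y_hat, eps=1e-8):
    # 2 * (y log(y / y_hat) - y + y_hat), with y log y := 0 at y = 0
    return 2 * (y * torch.log((y + eps) / (y_hat + eps)) - y + y_hat)
```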
### Pre-processing covariates for FCNs
For the following, we assume that the \(N\) instances of \(\mathbf{x}_{n}\), \(1\leq n\leq N\), have been collected into a matrix \(X=[X^{[r]},X^{[c]}]\in\mathbb{R}^{N\times(q_{r}+q_{c})}\), where \(X^{[\cdot]}\) represents a subset of the columns of \(X\). Thus, to select the \(j\)-th column of \(X\), we write \(X^{[j]}\) and, furthermore, to represent the \(n\)-th row of \(X\), we will write \(X_{n}=\mathbf{x}_{n}\). We also assume that all of the observations of \(y_{n}\) have been collected into a vector \(\mathbf{y}=(y_{1},\ldots,y_{N})^{\top}\in\mathbb{R}^{N}\).
#### 2.2.1 Categorical covariates
The types of categorical data comprising \(X^{[c]}\) are usually either qualitative (nominal) data with no inherent ordering, such as type of motor vehicle, or ordinal data with an inherent ordering, such as bad-average-good driver. Different methods for pre-processing categorical data for use within machine learning methods have been developed, with one-hot encoding being a popular choice for traditional machine learning methods. Here, we focus on the categorical embedding technique (entity embedding) of Guo and Berkhahn (2016); see Richman (2021) and Delong and Kozak (2023) for a brief overview of other options and an introduction to embeddings. Assume that the \(t\)-th categorical variable, corresponding to column \(X^{[c_{t}]}\), for \(1\leq t\leq q_{c}\), can take one of \(K_{t}\geq 2\) values in the set of levels \(\{a_{1}^{t},\ldots,a_{K_{t}}^{t}\}\). An embedding layer for this categorical variable maps each member of the set to a low-dimensional vector representation of dimension \(b_{t}<K_{t}\), i.e.,
\[\mathbf{e}:\{a_{1}^{t},\ldots,a_{K_{t}}^{t}\}\rightarrow\mathbb{R}^{b_{t}},\qquad a _{k}^{t}\ \mapsto\ \mathbf{e}(a_{k}^{t})\stackrel{{\text{\tiny def}}}{{=}}\mathbf{e}^{t(k)}, \tag{2.3}\]
meaning to say that the \(k\)-th level \(a_{k}^{t}\) of the \(t\)-th categorical variable receives a low dimensional vector representation \(\mathbf{e}^{t(k)}\in\mathbb{R}^{b_{t}}\). When utilizing an embedding layer within a neural network, we calibrate the \(K_{t}b_{t}\) parameters of the embedding layer as part of fitting the weights \(W\) of the network \(\Psi_{W}(\mathbf{x})\). Practically, when inputting a categorical covariate to a neural network, each level of the covariate is mapped to a unique natural number, thus, we have represented these covariates as \(\mathbf{x}^{[c]}\in\mathbb{N}^{q_{c}}\), and these representations are then embedded according to (2.3), which can be interpreted as an additional layer of the network; this is graphically illustrated in Figure 7.9 of Wuthrich and Merz (2023).
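A minimal PyTorch sketch of such an embedding layer is shown below; the values \(K_{t}=11\) and \(b_{t}=2\) are illustrative assumptions.

```python
# Minimal sketch of the entity embedding in Eq. (2.3): the t-th categorical
# covariate with K_t levels (coded as integers 0..K_t-1) is mapped to a
# b_t-dimensional vector; the embedding weights are trained as part of W.
import torch
from torch import nn

K_t, b_t = 11, 2
embed = nn.Embedding(num_embeddings=K_t, embedding_dim=b_t)

x_cat = torch.tensor([0, 3, 10])     # levels a_1^t, a_4^t, a_11^t
print(embed(x_cat).shape)            # torch.Size([3, 2])
```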
#### 2.2.2 Numerical covariates
To enable the easy calibration of neural networks, numerical covariates must be scaled to be of similar magnitudes. In what follows, we will use the min-max normalization, where each raw numerical covariate \(\dot{X}^{[r_{t}]}\) is scaled to
\[X^{[r_{t}]}=\frac{\dot{X}^{[r_{t}]}-\min(\dot{X}^{[r_{t}]})}{\max(\dot{X}^{[r_ {t}]})-\min(\dot{X}^{[r_{t}]})},\]
where \(\dot{X}^{[r_{t}]}\) is the \(t\)-th raw (i.e., unscaled) continuous covariate and the operation is performed for each element of column \(\dot{X}^{[r_{t}]}\) of the matrix of raw continuous covariates, \(\dot{X}^{[r]}\).
We note that the continuous data comprising \(\dot{X}^{[r]}\) can also easily be converted to ordinal data through binning, for example, by mapping each observed covariate in the data to one of the quantiles of that covariate. We do not consider this option for processing continuous variables in this work, however, we will use quantile binning to discretize the continuous covariates \(X^{[r]}\) for the purpose of estimating the ICEnets used here.
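The following NumPy sketch illustrates both the min-max scaling above and a quantile-based grid of the kind used below to build the ICE evaluation points for a continuous covariate; the distribution and the number of quantiles are illustrative assumptions.

```python
# Minimal sketch: min-max scaling of one raw column and a quantile grid
# {a_1^j, ..., a_K^j} for the ICE computations, with K = 21 assumed.
import numpy as np

x_raw = np.random.default_rng(0).lognormal(size=1000)   # one raw covariate
x_scaled = (x_raw - x_raw.min()) / (x_raw.max() - x_raw.min())

grid = np.quantile(x_scaled, np.linspace(0.0, 1.0, 21)) # K equally spaced quantiles
```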
### Individual Conditional Expectations and Partial Dependence Plots
Machine learning interpretability methods are used to explain how a machine learning model, such as a neural network, has estimated the relationship between the covariates \(\mathbf{x}\) and the predictions \(\widehat{y}\); for an overview of these, see Biecek and Burzykowski (2021). Two related methods for interpreting machine learning models are the Partial Dependence Plot (PDP) of Friedman (2001) and the Individual Conditional Expectations (ICE) method of Goldstein et al. (2015).
The ICE method estimates how the predictions \(\widehat{y}_{n}\) for each instance \(1\leq n\leq N\) change as a single component of the covariates \(\mathbf{x}_{n}\) is varied over its observed range of possible values, while holding all of the other components of \(\mathbf{x}_{n}\) constant. The PDP method is simply the average over all of the individual ICE outputs of the instances \(1\leq n\leq N\) derived in the previous step. By inspecting the resultant ICE and PDP outputs, the relationship of the predictions with a single component of \(\mathbf{x}\) can be observed; by performing the same process for each component of \(\mathbf{x}\), the relationships with all of the covariates can be shown.
To estimate the ICE for a neural network, we now consider the creation of pseudo-data that will be used to create the ICE output of the network. We consider one column of \(X\), denoted as \(X^{[j]}\), \(1\leq j\leq q_{r}+q_{c}\). If the selected column \(X^{[j]}\) is categorical, then, as above, the range of values which can be taken by each element of \(X^{[j]}\) are simply the values in the set of levels \(\{a_{1}^{j},\ldots,a_{K_{j}}^{j}\}\).
If the selected column \(X^{[j]}\) is continuous, then we assume that a range of values for the column has been selected using quantile binning or another procedure, and we denote the set of these values using the same notation, i.e., \(\{a_{1}^{j},\ldots,a_{K_{j}}^{j}\}\), where \(K_{j}\in\mathbb{N}\) is the number of selected values \(a_{u}^{j}\in\mathbb{R}\) and where we typically assume that the continuous variables in \(\{a_{1}^{j},\ldots,a_{K_{j}}^{j}\}\) are ordered in increasing order.
We define \(\tilde{X}^{[j]}(u)\) as a copy of \(X\), where all of the components of \(X\) remain the same, except for the \(j\)-th column of \(X\), which is set to the \(u\)-th value of the set \(\{a_{1}^{j},\ldots,a_{K_{j}}^{j}\}\), \(1\leq u\leq K_{j}\). By sequentially creating predictions using a (calibrated) neural network applied to the pseudo-data, \(\Psi_{W}(\tilde{X}^{[j]}(u))\) (where the network is applied in a row-wise manner), we are able to derive the ICE outputs for each variable of interest, as we vary the value of \(1\leq u\leq K_{j}\). In particular, by allowing \(a_{u}^{j}\) to take each value in the set \(\{a_{1}^{j},\ldots,a_{K_{j}}^{j}\}\), for each instance \(1\leq n\leq N\) separately, we define the ICE for covariate \(j\) as a vector of predictions on the artificial data, \(\widetilde{\mathbf{y}}_{n}^{[j]}\), as
\[\widetilde{\mathbf{y}}_{n}^{[j]}=\left[\Psi_{W}(\tilde{\mathbf{x}}_{n}^{[j]}(1))v_{n},\ldots,\Psi_{W}(\tilde{\mathbf{x}}_{n}^{[j]}(K_{j}))v_{n}\right], \tag{2.4}\]
where \(\tilde{\mathbf{x}}_{n}^{[j]}(u)\) represents the vector of covariates for instance \(n\), where the \(j\)-th entry has been set equal to the \(u\)-th value \(a_{u}^{j}\) of that covariate, which is contained in the appropriate set \(\{a_{1}^{j},\ldots,a_{K_{j}}^{j}\}\).
The PDP can then be derived from (2.4) by averaging the ICE outputs over all instances. We set
\[\widehat{\mathbb{E}}[\widetilde{\mathbf{y}}_{n}^{[j]}]=\left[\frac{\sum_{n=1}^{N} \Psi_{W}(\tilde{\mathbf{x}}_{n}^{[j]}(1))v_{n}}{N},\ldots,\frac{\sum_{n=1}^{N} \Psi_{W}(\tilde{\mathbf{x}}_{n}^{[j]}(K_{j}))v_{n}}{N}\right]. \tag{2.5}\]
This can be interpreted as an empirical average, averaging over all instances \(1\leq n\leq N\).
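A minimal sketch of the ICE and PDP computations (2.4)-(2.5) for a generic fitted model is given below; `model` stands for any fitted regression function taking covariates and exposures.

```python
# Minimal sketch of Eqs. (2.4)-(2.5): ICE profiles are predictions on
# pseudo-data where column j sweeps its grid, and the PDP is their average.
import numpy as np

def ice_and_pdp(model, X, v, j, grid):
    ice = np.empty((X.shape[0], len(grid)))
    for u, a in enumerate(grid):
        X_tilde = X.copy()
        X_tilde[:, j] = a                 # set column j to the u-th grid value
        ice[:, u] = model(X_tilde, v)     # predictions \tilde{y}_n^{[j]}(u)
    pdp = ice.mean(axis=0)                # empirical average, Eq. (2.5)
    return ice, pdp
```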
## 3 ICEnet
### Description of the ICEnet
We start with a colloquial description of the ICEnet proposal before defining it rigorously. The main idea is to augment the observed data \((y_{n},\mathbf{x}_{n},v_{n})_{n=1}^{N}\) with pseudo-data, so that output equivalent to that used for the ICE interpretability technique (2.4) is produced by the network, for each variable that requires a smoothness or monotonicity constraint. This is done be creating pseudo-data that varies for each possible value of the variables to be constrained. For continuous variables, we use quantiles of the observed values of the continuous variables to produce the ICE
output while, for categorical variables, we use each level of the categories. The same neural network is then applied to the observed data, as well as each of the pseudo-data. Applying the network to the actual observed data produces a prediction that can be used for pricing, whereas applying the network to each of the pseudo-data produces outputs which vary with each of the covariates which need smoothing or require monotonicity to be enforced. The parameters of the network are trained (or fine-tuned) using a compound loss function: the first component of the loss function measures how well the network predicts the observations \(y_{n}\), \(1\leq n\leq N\). The other components ensure that the desired constraints are enforced on the ICEs for the constrained variables. After training, the progression of predictions of the neural network will be smooth or monotonically increasing with changes in the constrained variables. A diagram of the ICEnet is shown in Figure 1.
### Definition of the ICEnet
To define the ICEnet, we assume that some of the variables comprising the covariates \(X\in\mathbb{R}^{N\times(q_{r}+q_{c})}\) have been selected as requiring smoothness and monotonicity constraints. We collect all those variables requiring smoothness constraints into a set \(\mathcal{S}\subset\{1,\ldots,q_{r}+q_{c}\}\) with \(S\) members of the set and those variables requiring monotonicity constraints into another set \(\mathcal{M}\subset\{1,\ldots,q_{r}+q_{c}\}\), with \(M\) members of the set. As we have discussed above, we will rely on FCNs with appropriate pre-processing of the input data \(\mathbf{x}\) for predicting the response \(y\) with \(\widehat{y}\). For each instance \(n\in\{1,\ldots,N\}\), the ICEnet comprises two main parts. The first of these is simply a prediction \(\widehat{y}_{n}=\Psi_{W}(\mathbf{x}_{n})v_{n}\) of the outcome \(y_{n}\) based on covariates \(\mathbf{x}_{n}\). For the second part of the ICEnet, we now create the pseudo-data using the definitions from Section 2.3 that will be used to create the ICE output of the network for each of the columns requiring constraints. Finally, we assume that the ICEnet will be trained with a compound loss function \(L\) that balances the good predictive performance of the predictions \(\widehat{y}_{n}\) together with satisfying constraints to enforce smoothness and monotonicity.
The compound loss function \(L\) of the ICEnet consists of three summands, i.e., \(L=L_{1}+L_{2}+L_{3}\). The first of these is set equal to the deviance loss function \(L^{D}\) that is relevant for the considered regression problem, i.e., \(L_{1}\stackrel{{\mbox{\tiny{set}}}}{{=}}L^{D}(y_{n},\widehat{y} _{n})\).
The second of these losses is a smoothing constraint applied to each covariate in the set \(\mathcal{S}\). For smoothing, actuaries have often used the Whittaker-Henderson smoother (Whittaker, 1922; Henderson, 1924), which we implement here as the square of the third difference of the predictions \(\widetilde{\mathbf{y}}_{n}^{[j]}\) given in (2.4). We define the difference operator of order 1 as
\[\Delta^{1}(\Psi_{W}(\tilde{\mathbf{x}}_{n}^{[j]}(u)))=\Psi_{W}(\tilde{\mathbf{x}}_{n} ^{[j]}(u))-\Psi_{W}(\tilde{\mathbf{x}}_{n}^{[j]}(u-1)),\]
and the difference operator of a higher order \(\tau\geq 1\) recursively as
\[\Delta^{\tau}(\Psi_{W}(\tilde{\mathbf{x}}_{n}^{[j]}(u)))=\Delta^{\tau-1}(\Psi_{W} (\tilde{\mathbf{x}}_{n}^{[j]}(u)))-\Delta^{\tau-1}(\Psi_{W}(\tilde{\mathbf{x}}_{n}^{[ j]}(u-1))).\]
Thus, the smoothing loss \(L_{2}\) for order 3 is defined as
\[L_{2}(n)\,\stackrel{{\mbox{\tiny{def}}}}{{=}}\sum_{j\in\mathcal{ S}}\sum_{u=4}^{K_{j}}\lambda_{s_{j}}\left[\Delta^{3}(\Psi_{W}(\tilde{\mathbf{x}}_{n}^ {[j]}(u)))\right]^{2}, \tag{3.1}\]
where \(\lambda_{s_{j}}\geq 0\) is the penalty parameter for the smoothing loss for the \(j\)-th member of the set \(\mathcal{S}\).
Finally, to enforce monotonicity, we add the absolute value of the negative components of the first difference of \(\widetilde{\mathbf{y}}_{n}^{[j]}\) to the loss, i.e., we define \(L_{3}\) as:
\[L_{3}(n)\,\stackrel{{\text{\tiny def}}}{{=}}\,\sum_{j\in\mathcal{M} }\sum_{u=2}^{K_{j}}\lambda_{m_{j}}\max\left[\delta_{j}\Delta^{1}(\Psi_{W}( \tilde{\mathbf{x}}_{n}^{[j]}(u))),0\right], \tag{3.2}\]
where \(\lambda_{m_{j}}\geq 0\) is the penalty parameter for the monotonicity loss for the \(j\)-th member of the set \(\mathcal{M}\), and where \(\delta_{j}=\pm 1\) depending on whether we want to have a monotone increase \((-1)\) or decrease \((+1)\) in the \(j\)-th variable of \(\mathcal{M}\).

Figure 1: Diagram explaining the ICEnet. The same neural network \(\Psi_{W}\) is used to produce both the predictions from the model, as well as to create predictions based on pseudo-data. These latter predictions are constrained, ensuring that the outputs of the ICEnet vary smoothly or monotonically with changes in the input variables \(\mathbf{x}\). In this graph, we are varying variable \(x_{1}\) to produce the ICEnet outputs which are \(\Psi_{W}(\tilde{\mathbf{x}}^{[1]}(\cdot))\).
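The two penalties translate directly into code; the following minimal PyTorch sketch applies them to an ICE profile `y_ice` of shape (batch, \(K_{j}\)) for one constrained variable.

```python
# Minimal sketch of the penalties in Eqs. (3.1)-(3.2) on an ICE profile.
import torch

def smooth_penalty(y_ice, lam_s):
    d3 = torch.diff(y_ice, n=3, dim=-1)            # third differences
    return lam_s * (d3 ** 2).sum(dim=-1)           # Eq. (3.1), per instance

def mono_penalty(y_ice, lam_m, delta=-1.0):
    # delta = -1 enforces a monotone increase, delta = +1 a decrease
    d1 = torch.diff(y_ice, n=1, dim=-1)            # first differences
    return lam_m * torch.clamp(delta * d1, min=0.0).sum(dim=-1)  # Eq. (3.2)
```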
**Assumptions 3.1** (ICEnet architecture): _Assume we have independent responses and covariates \((y_{n},\mathbf{x}_{n}^{[r]},\mathbf{x}_{n}^{[c]},v_{n})_{n=1}^{N}\) as defined in Section 2.1, and we have selected some covariates requiring constraints into sets \(\mathcal{S}\) and \(\mathcal{M}\). Assume we have a neural network architecture \(\Psi_{W}\) as defined in (2.2) having network weights \(W\). For the prediction part of the ICEnet, we use the following mapping provided by the network_
\[\mathbf{x}_{n}\ \mapsto\ \widehat{y}_{n}=\Psi_{W}(\mathbf{x}_{n})v_{n},\]
_which produces the predictions required for the regression task. In addition, the ICEnet also produces the following predictions made with the same network on pseudo-data defined by_
\[\tilde{\mathbf{x}}_{n}\ \mapsto\ \left[\widetilde{\mathbf{y}}_{n}^{[s_{1}]},\ldots, \widetilde{\mathbf{y}}_{n}^{[s_{S}]},\widetilde{\mathbf{y}}_{n}^{[m_{1}]},\ldots, \widetilde{\mathbf{y}}_{n}^{[m_{M}]}\right], \tag{3.3}\]
_where all definitions are the same as those defined in Section 2.3 for \(s_{l}\in\mathcal{S}\) and \(m_{l}\in\mathcal{M}\), respectively. Finally, to train the ICEnet, we assume that a compound loss function, applied to each observation \(n\in\{1,\ldots,N\}\) individually is specified as follows_
\[L(n)\ \overset{\text{\tiny def}}{=}\ L^{D}(y_{n},\widehat{y}_{n})+L_{2}(n)+L_{3 }(n), \tag{3.4}\]
_for smoothing loss \(L_{2}(n)\) and monotonicity loss \(L_{3}(n)\) given in (3.1) and (3.2), respectively, for non-negative penalty parameters collected into a vector \(\lambda=(\lambda_{s_{1}},\ldots,\lambda_{s_{S}},\lambda_{m_{1}},\ldots,\lambda_{m_{M}})^{\top}\)._
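A minimal PyTorch sketch of one training step under this compound loss is shown below, reusing `poisson_deviance`, `smooth_penalty` and `mono_penalty` from the sketches above; the `make_pseudo` helper and the dictionary-based penalty configuration are illustrative assumptions (shown here for continuous covariates).

```python
# Minimal sketch of one ICEnet training step with the compound loss (3.4).
import torch

def make_pseudo(x, j, a):
    x_t = x.clone()
    x_t[:, j] = a                      # set covariate j to grid value a
    return x_t

def icenet_step(net, optimizer, x, v, y, grids, lam_s, lam_m, delta):
    loss = poisson_deviance(y, net(x, v)).mean()                 # L_1
    for j, grid in grids.items():                                # constrained j
        # ICE profile of Eq. (2.4), one column per grid value
        ice = torch.stack([net(make_pseudo(x, j, a), v) for a in grid], -1)
        if j in lam_s:
            loss = loss + smooth_penalty(ice, lam_s[j]).mean()   # L_2
        if j in lam_m:
            loss = loss + mono_penalty(ice, lam_m[j], delta[j]).mean()  # L_3
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return float(loss)
```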
We briefly remark on the ICEnet architecture given in Assumptions 3.1.
**Remark 3.2**:
* To produce the ICE outputs from the network under Assumptions 3.1, we apply the same network \(\Psi_{W}(\cdot)\) multiple times to pseudo-data that have been modified to vary for each value that a particular variable can take, see (3.3). Applying the same network multiple times is called a point-wise neural network in Vaswani et al. (2017), and it is called a time-distributed network in the Keras package; see Chollet et al. (2017). This application of the same network multiple times is also called a one-dimensional convolutional neural network.
* Common actuarial practice when fitting general insurance pricing models is to consider PDPs to understand the structure of the fitted coefficients of a GLM and smooth these manually. Since we cannot smooth the neural network parameters \(W\) directly, we have rather enforced constraints (3.1) for the ICE produced by the network \(\Psi_{W}(\cdot)\); this in particular applies if we smooth ordered categorical variables.
* Enforcing the same constraints as those in (3.4) in a GLM will automatically smooth/constrain the GLM coefficients similar to LASSO and fused LASSO regularization; we refer to Hastie et al. (2015). We also mention that similar ideas have been used in enforcing monotonicity in multiple quantile estimation; see Kellner et al. (2022).
* Since the PDP of a model is nothing more than the average over the ICE outputs of the model for each instance \(n\), enforcing the constraints for each instance also enforces the constraints on the PDP. Moreover, the constraints in (3.4) could be applied in a slightly different manner, by first estimating a PDP using (2.5) for all of the observations in a batch, then applying the constraints to the estimated PDP.
* We have used a relatively simple neural network within the ICEnet; of course, more complex network architectures could be used.
## 4 Applying the ICEnet
### Introduction and exploratory analysis
In this section, we apply the ICEnet to the French Motor Third Party Liability (MTPL) dataset included in the CASdatasets package in the R language (Dutang and Charpentier, 2020), with the aim of predicting claims frequency. We follow the same data pre-processing, error correction steps and splitting of the data into learning and testing sets (in a 9:1 ratio), as those described in Appendix B of Wuthrich and Merz (2023), to which we also refer for an exploratory data analysis of this dataset. Table 1 shows the volumes of data in each of the learning and testing sets, illustrating that the observed claims rate is quite similar between the two sets.
The MTPL data contain both unordered and ordinal categorical covariates, as well as continuous covariates. To apply the ICEnet, we select all of the ordinal categorical and continuous covariates in the dataset for constraints; these are the Bonus-Malus Level, Density, Driver Age, Vehicle Age and Vehicle Power fields. Figure 2 shows the empirical claims frequency in the learning set, considering each field in turn, i.e., this is a univariate (marginal) analysis. Note that for the continuous density variable, we are showing the frequency and exposure calculated at the percentiles of this variable. It can be seen that the empirical frequency varies with each variable somewhat erratically, especially in those parts of the domain of a variable where there is small exposure. In particular, for the bonus-malus level and density variables, there is quite a significant vertical spread of empirical frequencies, indicating that the univariate analysis presented here may not suitably capture the relationships between the covariates and the response, i.e., we need a more complex model to make predictions on this data.
\begin{table}
\begin{tabular}{l r r r r} \hline \hline Set & \(N\) & Exposure & Claims & Frequency \\ \hline learn & \(610,206\) & \(322,392\) & \(23,737\) & \(0.0736\) \\ test & \(67,801\) & \(35,967\) & \(2,644\) & \(0.0735\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Number of records, total exposure and observed claims frequency in the French MTPL dataset, split into learning and testing sets

In what follows, we impose the following constraints on these variables. Smoothing constraints are applied to all five of the variables. Among these five smoothed variables, ensuring that smoothness is maintained for driver age, vehicle age and bonus-malus level will be the most important constraints needed in this model from a commercial perspective, since the values of these variables will likely be updated each year that a policy is in force. Since we expect that claims frequency will increase with increasing bonus-malus scores, density and vehicle power, we also apply constraints to ensure that the model predicts monotonically increasing claims frequency for each of these variables. Table 2 shows the penalty parameters for each variable. These constraint values were selected heuristically, by experimenting with different values until acceptably smooth and monotonic ICE outputs were produced; however, the impact on predictive performance was not considered when selecting these. Note that the table includes a direction column to show that we are enforcing monotonically increasing constraints by setting \(\delta_{j}=-1\); setting the direction parameter to \(\delta_{j}=1\) would produce monotonically decreasing constraints (see Listing 1 in Appendix A). Of course, these parameters could also be selected to produce optimal predictive performance using, for example, an out-of-sample validation set, or \(K\)-fold cross-validation.
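For later reference, the settings of Table 2 can be collected into a small configuration object. The dictionary below is illustrative only; the key names are ours, and this is not the paper's Listing 1.

```python
# Per-covariate penalty settings from Table 2; delta = -1 enforces
# monotonically increasing predictions, delta = +1 decreasing ones.
CONSTRAINTS = {
    "DriverAge":    {"lambda_s": 10.0, "lambda_m": 0.0,   "delta": -1},
    "VehicleAge":   {"lambda_s": 1.0,  "lambda_m": 0.0,   "delta": -1},
    "BonusMalus":   {"lambda_s": 1.0,  "lambda_m": 100.0, "delta": -1},
    "Density":      {"lambda_s": 1.0,  "lambda_m": 100.0, "delta": -1},
    "VehiclePower": {"lambda_s": 1.0,  "lambda_m": 100.0, "delta": -1},
}
```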
### Fitting the ICEnet
We select a \(d=3\) layer neural network for the FCN component of the ICEnet, with the following layer dimensions (\(q_{1}=32,q_{2}=16,q_{3}=8\)), and set the activation function \(\phi(\cdot)\) to the Rectified Linear Unit (ReLU) function. The link function \(g(\cdot)\) was chosen to be the exponential function (which is the canonical link of the Poisson model). To perform early-stopping to regularize the network, we further split the learning set into a new learning set \(\mathcal{L}\), containing 95% of the original learning set, and a small validation set \(\mathcal{V}\), containing 5% of the original learning set. We assign an embedding layer to each of the unordered categorical variables (Vehicle Gas, Vehicle Brand, Region and Area Code), and, for simplicity, select the dimension of the real-valued embedding to be \(b=5\) for each of these variables. For the rest of the variables, we apply min-max scaling (see Section 2.2) and then directly input these into the FCN; this explains how the vector of covariates \(\mathbf{x}\) has been constructed.

Figure 2: Empirical claims frequency (top panel) and observed exposures (bottom panel) in the French MTPL dataset for each of the Bonus-Malus Level, Density, Driver Age, Vehicle Age and Vehicle Power covariates (univariate analysis only), learning set only. Note that the \(y\)-scales for each variable are not comparable.
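A sketch of this architecture in the Python interface of Keras is given below (the paper's implementation uses the R interface); the categorical level counts and the continuous input width are placeholders supplied by the caller.

```python
import tensorflow as tf

def build_fcn(n_cont, cat_levels, emb_dim=5):
    # Continuous covariates are min-max scaled upstream; the unordered
    # categorical ones (Vehicle Gas, Vehicle Brand, Region, Area Code)
    # each receive a b = 5 dimensional embedding.
    cont_in = tf.keras.Input(shape=(n_cont,), name="continuous")
    cat_ins, cat_embs = [], []
    for name, levels in cat_levels.items():
        x_in = tf.keras.Input(shape=(1,), dtype="int32", name=name)
        emb = tf.keras.layers.Embedding(levels, emb_dim)(x_in)
        cat_ins.append(x_in)
        cat_embs.append(tf.keras.layers.Flatten()(emb))
    x = tf.keras.layers.Concatenate()([cont_in] + cat_embs)
    for q in (32, 16, 8):                                # q1, q2, q3
        x = tf.keras.layers.Dense(q, activation="relu")(x)
    rate = tf.keras.layers.Dense(1, activation="exponential")(x)
    expo_in = tf.keras.Input(shape=(1,), name="exposure")
    y_hat = tf.keras.layers.Multiply()([rate, expo_in])  # Psi_W(x) * v_n
    return tf.keras.Model(cat_ins + [cont_in, expo_in], y_hat)
```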
For comparison with the ICEnet, we begin by training several simple FCNs; 10 training runs of these were performed. These were fit using the Adam optimizer for 50 epochs, using a learning rate of 0.001 and a batch size of 1024 observations. The network training was early stopped by selecting the epoch with the best out-of-sample score on the validation set \(\mathcal{V}\). To train these FCN networks, we use only the deviance loss \(L^{D}\), i.e., the first component of the loss function (3.4). Since neural network training involves some randomness arising from the random initialization of the network weights \(W\) and the random selection of training data to comprise each mini-batch, the results of each training run will vary in terms of predictive performance; furthermore, the exact relationships learned between the covariates and the predictions will vary by training run. In the first two lines of Table 3, we show the average of the Poisson deviance loss and the Poisson deviance loss produced using the nagging predictor of Richman and Wuthrich (2020), which is the predictor produced by averaging over the outputs of several neural network training runs. The standard deviation of the Poisson deviance loss across training runs is shown in the table in parentheses.
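The nagging predictor used throughout is simply the average of the predictions over the individual training runs; a minimal sketch, assuming a list of fitted Keras models, follows.

```python
import numpy as np

def nagging_predictor(models, inputs):
    # Average the predictions of several independently trained networks,
    # following Richman and Wuthrich (2020).
    preds = np.stack([m.predict(inputs, verbose=0) for m in models], axis=0)
    return preds.mean(axis=0)
```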
The ICEnet was fit using the Keras package in the R programming language;1 for the Keras package see Chollet et al. (2017). The implementation of the ICEnet is explained in more detail in Appendix A in the supplementary material. Similar to the FCNs, 10 training runs of the ICEnet were performed. Since the ICEnet requires many outputs to be produced for each input to the network, the computational burden of fitting this model is quite heavy; nonetheless, this can be done easily using a Graphics Processing Unit (GPU) and the relevant GPU-optimized versions of deep learning software. The ICEnet results are on Lines 3-4 of Table 3, showing the average results and the nagging predictor, respectively. We note that training runs of the FCN and ICEnet take about 3 and 12 minutes to complete, respectively, the former running without a GPU and the latter running on a GPU.
\begin{table}
\begin{tabular}{l r r r} \hline \hline Covariate & Smoothing Constraint & Monotonicity Constraint & Direction \\ \hline Driver Age & 10 & 0 & \(-1\) \\ Vehicle Age & 1 & 0 & \(-1\) \\ Bonus Malus & 1 & 100 & \(-1\) \\ Density & 1 & 100 & \(-1\) \\ Vehicle Power & 1 & 100 & \(-1\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Smoothing and monotonicity constraints applied within the ICEnet
It can be observed that adding smoothing and monotonicity constraints to the ICEnet has resulted in only marginally worse performance of the model on both the training and testing set, compared to the FCN results. Interestingly, whereas the FCN has a relatively large performance gap between the training and testing sets (both on average and when comparing the nagging predictor), the ICEnet has a much smaller gap, implying that these constraints are regularizing the network, at least to some extent. Finally, while the nagging predictor improves the performance of both the FCN and the ICEnet, the performance improvement is somewhat larger for the former than the latter, which can be explained by the nagging predictor performing regularization over the different fitting runs.
We approximately decompose the reduction in performance between the smoothness constraints on the one hand, and the monotonicity constraints on the other, by refitting the ICEnet with the smoothing constraints only, and then with the monotonicity constraints only. These results are shown in Table 4.
The results show that the smoothing constraints are the primary cause of the decline in performance of the ICEnet, compared with the FCN. The monotonicity constraints, on the other hand, improve the out-of-sample performance of the ICEnet to surpass the FCN, whether on average, or when using the nagging predictor. Furthermore, the small gap between the training and testing results for the ICEnet with monotonicity constraints only shows that these successfully regularize the model. These results suggest that adding monotonicity constraints to actuarial deep learning models based on expert knowledge can lead to enhanced performance. Whether the somewhat decreased performance of a model that produces smoothed outputs is acceptable will depend on whether these constraints are necessary for commercial purposes. Moreover, the values of the constraint parameters could be (further) varied so that an acceptable trade-off of smoothness and performance is achieved; for an example of this, see Section 4.4, below.

\begin{table}
\begin{tabular}{l|r r|r r} \hline \hline Description & \multicolumn{2}{c|}{Learn} & \multicolumn{2}{c}{Test} \\ \hline FCN & 0.2381 & (0.000211) & 0.2387 & (0.000351) \\ FCN (nagging) & 0.2376 & & 0.2383 & \\ \hline ICEnet & 0.2386 & (0.000180) & 0.2388 & (0.000236) \\ ICEnet (nagging) & 0.2384 & & 0.2385 & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Poisson deviance loss for the FCN and the ICEnet, learning and testing losses. For multiple runs, the average and standard deviation (in parentheses) are reported.

\begin{table}
\begin{tabular}{l l|r r|r r} \hline \hline Description & Constraints & \multicolumn{2}{c|}{Learn} & \multicolumn{2}{c}{Test} \\ \hline ICEnet & Smoothing + Monotonicity & 0.2386 & (0.000180) & 0.2388 & (0.000236) \\ ICEnet & Smoothing & 0.2385 & (0.000423) & 0.2388 & (0.000385) \\ ICEnet & Monotonicity & 0.2384 & (0.000214) & 0.2385 & (0.000140) \\ \hline ICEnet (nagging) & Smoothing + Monotonicity & 0.2384 & & 0.2385 & \\ ICEnet (nagging) & Smoothing & 0.2382 & & 0.2385 & \\ ICEnet (nagging) & Monotonicity & 0.2381 & & 0.2382 & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Poisson deviance loss for the ICEnet, learning and testing sets. For multiple runs, the average and standard deviation (in parentheses) are reported. Constraints are varied as described in the relevant column.
### Exploring the ICEnet predictions
We start by considering the global impact of applying constraints within the ICEnet, examining the PDPs derived from the FCN and ICEnet models in each of the 10 training runs of these models. The ICE outputs were constrained when fitting the network by applying the loss function (3.4) to each of the observations in the training set. The PDPs are shown in Figure 3, with blue lines for the FCNs and red lines for the ICEnets.
Figure 3: PDPs for each of the Bonus-Malus Level, Density, Driver Age, Vehicle Age and Vehicle Power fields shown in separate panels, test set only. Blue lines are PDPs from the FCNs (unsmoothed) and red lines are PDPs from the ICEnet (smoothed). Bold lines relate to the PDPs from the first of 10 runs; the lighter lines relate to the remaining runs. Note that the scale of the \(y\)-axis varies between each panel.
The PDPs from the unconstrained FCN models exhibit several undesirable characteristics: for the bonus-malus level, density and vehicle age covariates, they exhibit significant roughness, which would translate into unacceptable changes in rates charged to policyholders as, for example, bonus-malus scores are updated, or as the policyholder's vehicle ages. Also, in some parts of the PDPs, it can be observed that monotonicity has not been maintained by the FCNs. Finally, the PDPs for the driver age and vehicle power covariates exhibit different shapes over the different runs, meaning that the FCNs have learned different relationships between the covariates and the predictions. On the other hand, the PDPs from the ICEnets are significantly smoother for all of the variables and runs, appear to be monotonically increasing in all cases and, finally, exhibit more similar shapes across the different runs. Interestingly, the relationships learned by the constrained ICEnets are quite different for the bonus-malus level, vehicle age and driver age variables than those learned by the FCN, with the PDPs of the ICEnet for the first two variables arguably more commercially reasonable than those from the FCN.
Focusing on the first training run of the FCNs and the ICEnets, we now show some examples of the effect of the constraints on individual observations. To estimate the monotonicity and smoothness of the ICE outputs of the FCN and ICEnets, these components of the ICEnet loss function (3.4) were evaluated for each observation in the test set. Figure 4 shows density plots of the difference (smoothed model minus unsmoothed model) between these scores for each of these models, for each variable separately.
All of these densities have a long left tail, meaning that the ICE outputs produced by the ICEnet are significantly more monotonic and smooth than those produced by the FCN. Also, the densities usually peak around zero, meaning that adding the constraints has generally not significantly "damaged" outputs from the FCN that already satisfied the monotonicity or smoothness constraints. For the densities of the monotonicity scores for the bonus-malus level and density variables, it can be seen that there is a short right tail as well, which we have observed to occur when the original FCN model produces some outputs that are already monotonic and these are altered somewhat by the ICEnet.
Figures 5 and 6 show ICE plots for several instances \(n\), which have been specially chosen as those that are the least monotonic and smooth ones, based on the monotonicity and smoothness scores evaluated for each observation in the test set on the outputs of the FCN only. These figures show that, in general, the ICE plots of predictions from the ICEnet are more reasonable. For example, for instance \(n=4282\) in Figure 5, it can be seen that the FCN produces predictions that decrease with density, whereas the opposite trend is produced by the ICEnet; another example is instance \(n=41516\) where the FCN predictions decrease with vehicle power and the ICEnet reverses this. A nice side effect of the constraints is that some of the exceptionally large predictions produced by the FCN, for example, for instance \(n=16760\) in Figure 6, are reduced significantly by the ICEnet; in general it would be highly unlikely to underwrite policies with such an elevated frequency of claim.
Figure 4: Density plots of the difference between the monotonicity and smoothness components of the ICEnet loss function (3.4) evaluated for each observation in the test set.

### Varying the strength of the constraints

We now examine how the strength of the constraints affects the ICEnet. A meta-model was first trained; this was done by fitting exactly the same FCN as before, with the same covariates, but with the aim of approximating the nagging predictor derived using the 10 FCN predictions. Then, the weights of the meta-network were used as a starting point for fitting an ICEnet, where the strength of the constraints was varied over a wide range from \(10^{-5}\) to \(10^{5}\). The results of fitting these models, as well as the PDPs from the meta-model and the ICEnets, are shown in Table 5 and Figure 7, respectively. The minimum value of the test set Poisson deviance loss is reached when setting the value of the constraints equal to \(10^{-3}\); however, the validation set minimum is reached when setting the value of the constraints to unity, so in this case the validation set identifies an almost optimal value of the constraints as measured by the test set performance. It can also be seen that the highest values of the constraints lead to models with a poor test and validation set performance. The PDPs show that the ICEnet with the optimal value of the constraints maintains many of the features of the PDPs of the meta-model, while being smoother and less extreme than the meta-model. The ICEnets with the most extreme values of the constraint parameters become much flatter for the bonus-malus level, vehicle age and density variables, whereas the PDPs for driver age and vehicle power maintain significant variation over the range of these variables.

\begin{table}
\begin{tabular}{l r r r} \hline \hline Regularization Parameter (\(\log_{10}\)) & Learn & Validation & Test \\ \hline meta-model & 0.23835 & 0.23232 & 0.23829 \\ -5 & 0.23838 & 0.23276 & 0.23856 \\ -4 & 0.23833 & 0.23245 & 0.23847 \\ -3 & 0.23791 & 0.23243 & 0.23837 \\ -2 & 0.23831 & 0.23263 & 0.23855 \\ -1 & 0.23816 & 0.23241 & 0.23856 \\ **0** & **0.23785** & **0.23236** & **0.23840** \\ 1 & 0.23924 & 0.23332 & 0.23947 \\ 2 & 0.24093 & 0.23484 & 0.24099 \\ 3 & 0.24341 & 0.23684 & 0.24375 \\ 4 & 0.24659 & 0.23965 & 0.24744 \\ 5 & 0.24903 & 0.24212 & 0.25018 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Poisson deviance loss for the ICEnet, learning, validation and testing sets. The magnitude of the constraints is varied as described in the first column; “meta-model” denotes the results of the meta-model with no constraints applied.
Here, we have considered quite simple variations of the constraint parameters by varying the constraints for all variables together; using other techniques, one could also attempt to find an optimal set of constraints where the values of these differed by variable and type of constraint.
Figure 5: ICE plots of the output of the FCN and the ICEnet for instances \(n\) chosen to be the least monotonic based on the monotonicity score evaluated for each instance in the test set on the outputs of the FCN. Note that the smoothed model is the ICEnet and the unsmoothed model is the FCN.

Figure 6: ICE plots of the output of the FCN and the ICEnet for instances \(n\) chosen to be the least smooth based on the smoothness score evaluated for each instance in the test set on the outputs of the FCN. Note that the smoothed model is the ICEnet and the unsmoothed model is the FCN.

Figure 7: PDPs of the meta-model and ICEnets with the constraint parameters varied from \(10^{-5}\) to \(10^{5}\). The PDPs of the meta-model are in black and the PDP of the optimal model - with smoothing parameters set equal to 1 - based on the validation set is shown in red.

## 5 Local approximations to the ICEnet

In the previous section, the constraints on the network have been enforced by creating ICE outputs from the FCN for each value that the covariates with constraints can take, see Assumptions 3.1. However, this creates quite some computational overhead when fitting the ICEnet. Thus, we now explore a local approximation to the ICEnet, where the ICE outputs from the FCN are created only for a small subset of the values that the covariates with constraints can take. In particular, we define a window size parameter \(\omega\) and only produce the ICE outputs in a window of \(\frac{\omega-1}{2}\) values on either side of the actual value of each covariate. To ensure that the third difference used for smoothing remains defined, we set the window parameter to the value \(\omega=5\) in what follows. To define the Local ICEnet approximation, all that is needed is to modify the ICEnet so that, for each observation, it returns outputs only in the window around the actual observed value. Furthermore, for instances that are at the extremes of the observed values of the \(j\)-th covariate, e.g., for the youngest drivers, we produce outputs from the ICEnet for only the first or last \(\omega\) observations of the set \(\{a_{1}^{j},\ldots,a_{K_{j}}^{j}\}\). Since the Local ICEnet needs to produce significantly fewer outputs than the global ICEnet presented earlier, the computational burden of fitting this model is dramatically decreased and the model can be fit without a GPU.
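The window selection underlying the Local ICEnet can be sketched as follows, assuming the levels \(\{a_{1}^{j},\ldots,a_{K_{j}}^{j}\}\) are stored as an ordered array indexed by position; the clamping at the boundaries implements the first/last-\(\omega\) rule described above.

```python
def local_window(level_index, n_levels, omega=5):
    # Indices of the omega pseudo-data levels around the observed level,
    # clamped so that instances at the extremes use the first/last omega levels.
    half = (omega - 1) // 2
    start = min(max(level_index - half, 0), n_levels - omega)
    return list(range(start, start + omega))

# e.g. a driver-age level near the lower edge of {a_1, ..., a_K}:
print(local_window(level_index=1, n_levels=100))  # -> [0, 1, 2, 3, 4]
```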
A Local ICEnet was fit to the same data as above. To enforce the constraints more strongly over the range of the constrained variables, slightly stronger penalty values were found to work well; these are shown in Table 7. A Local ICEnet takes about 12 minutes to run without a GPU; this is similar to the global ICEnet running on a GPU.
The results of fitting the Local ICEnet are shown in Table 6. In this case, the ICEnet outperforms the FCN, both on average and for the nagging predictor. Comparing these results to those in Table 3, it appears that the Local ICEnet allows constraints to be enforced with a smaller loss of predictive performance than the global ICEnet shown above.
\begin{table}
\begin{tabular}{l|r r|r r} \hline \hline Description & \multicolumn{2}{c|}{Learn} & \multicolumn{2}{c}{Test} \\ \hline FCN & 0.2381 & (0.000211) & 0.2387 & (0.000351) \\ FCN (nagging) & 0.2376 & & 0.2383 & \\ \hline ICEnet & 0.2385 & (0.000301) & 0.2386 & (0.000215) \\ ICEnet (nagging) & 0.2381 & & 0.2383 & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Poisson deviance loss for an FCN and the Local ICEnet, learning and testing sets. For multiple runs, the average and standard deviation (in parentheses) are reported.

We show plots of the outputs of the Local ICEnet in Appendix B of the supplementary material. From Figure 8 it can be seen that the global effect of applying the Local ICEnet is quite similar to that of the global ICEnet; however, the PDPs shown in this figure are a little less smooth than those of the global ICEnet. In Figure 9 it can be seen that the Local ICEnet approximation has been moderately successful at enforcing the required constraints, with most of the variables showing improved monotonicity and smoothness scores, except for the bonus-malus level and density covariates. The examples shown in Figures 10 and 11 illustrate that, for the most extreme examples of the non-monotonic and rough results produced by the FCN, the Local ICEnet appears to produce outputs that are similar to those of the global ICEnet.
## 6 Conclusions
We have presented a novel approach, the ICEnet, to constrain neural network predictions to be monotonic and smooth using the ICE model interpretability technique. We have shown how the global ICEnet can be approximated using a less computationally intensive version, the Local ICEnet, that enforces constraints on locally perturbed covariates. Fitting these models to a real-world open-source dataset has shown that the monotonicity constraints enforced when fitting the ICEnet can improve the out-of-sample performance of actuarial models, while fitting models with a combination of smoothing and monotonicity constraints allows the models to produce predicted frequencies of claim that accord with intuition and are commercially reasonable. In this work, we have focused on smoothing the predictions of networks with respect to individual observations; as we have noted above, it is straightforward to instead impose constraints on the global behaviour of networks. Moreover, other formulations of smoothing constraints could be imposed: for example, instead of squaring the third differences of model outputs, which emphasizes minimizing the larger deviations from smoothness, absolute differences could be penalized.
|
2302.08678 | Multi-Behavior Graph Neural Networks for Recommender System | Recommender systems have been demonstrated to be effective to meet users' personalized interests for many online services (e.g., E-commerce and online advertising platforms). Recent years have witnessed the emerging success of many deep learning-based recommendation models for augmenting collaborative filtering architectures with various neural network architectures, such as multi-layer perceptron and autoencoder. However, the majority of them model the user-item relationship with a single type of interaction, while overlooking the diversity of user behaviors on interacting with items, which can be click, add-to-cart, tag-as-favorite and purchase. Such various types of interaction behaviors have great potential in providing rich information for understanding the user preferences. In this paper, we pay special attention to user-item relationships with the exploration of multi-typed user behaviors. Technically, we contribute a new multi-behavior graph neural network (MBRec), which specially accounts for diverse interaction patterns as well as the underlying cross-type behavior inter-dependencies. In the MBRec framework, we develop a graph-structured learning framework to perform expressive modeling of high-order connectivity in the behavior-aware user-item interaction graph. After that, a mutual relation encoder is proposed to adaptively uncover complex relational structures and make aggregations across layer-specific behavior representations. Through comprehensive evaluation on real-world datasets, the advantages of our MBRec method have been validated under different experimental settings. Further analysis verifies the positive effects of incorporating the multi-behavioral context into the recommendation paradigm. Additionally, the conducted case studies offer insights into the interpretability of user multi-behavior representations. | Lianghao Xia, Chao Huang, Yong Xu, Peng Dai, Liefeng Bo | 2023-02-17T03:59:07Z | http://arxiv.org/abs/2302.08678v1 | # Multi-Behavior Graph Neural Networks for Recommender System
###### Abstract
Recommender systems have been demonstrated to be effective to meet users' personalized interests for many online services (_e.g._, E-commerce and online advertising platforms). Recent years have witnessed the emerging success of many deep learning-based recommendation models for augmenting collaborative filtering architectures with various neural network architectures, such as multi-layer perceptron and autoencoder. However, the majority of them model the user-item relationship with a single type of interaction, while overlooking the diversity of user behaviors on interacting with items, which can be click, add-to-cart, tag-as-favorite and purchase. Such various types of interaction behaviors have great potential in providing rich information for understanding the user preferences. In this paper, we pay special attention to user-item relationships with the exploration of multi-typed user behaviors. Technically, we contribute a new multi-behavior graph neural network (MBRec), which specially accounts for diverse interaction patterns as well as the underlying cross-type behavior inter-dependencies. In the MBRec framework, we develop a graph-structured learning framework to perform expressive modeling of high-order connectivity in the behavior-aware user-item interaction graph. After that, a mutual relation encoder is proposed to adaptively uncover complex relational structures and make aggregations across layer-specific behavior representations. Through comprehensive evaluation on real-world datasets, the advantages of our MBRec method have been validated under different experimental settings. Further analysis verifies the positive effects of incorporating the multi-behavioral context into the recommendation paradigm. Additionally, the conducted case studies offer insights into the interpretability of user multi-behavior representations. We release our model implementation at [https://github.com/akakih/MBRec](https://github.com/akakih/MBRec).
Graph Neural Network, Recommender System, Collaborative Filtering, Multi-Behavior Recommendation
## I Introduction
With the growth of Internet services and mobile applications, recommender systems have played an increasingly critical role in addressing the information overload for many online platforms [56, 10, 49]. For example, the benefits of recommendation systems could lie in providing personalized recommendations in e-commerce sites (_e.g._, Amazon and Taobao), or satisfying users' interest in online music streaming services (_e.g._, Pandora and Spotify). Currently, collaborative filtering techniques serve as one of the most important paradigms to accurately understand the preferences of users, based on their interaction behaviors [57, 28].
With the remarkable success of deep learning, there is renewed interest in modeling user-item interactions with various neural network architectures, such as multi-layer perceptrons [11, 26], autoencoder networks [25] and neural autoregressive models [58]. Built on the recent strength of graph neural networks, several studies seek to aggregate feature information from the graph-structured relational data generated by the observed user behaviors [34, 53]. These neural network models generally focus on a single type of user interaction behavior over items during the vectorized representation procedure of users and items. However, in real-life applications, items are often interacted with by users in diverse ways [7, 54, 51]. For example, users can view, tag-as-favourite and purchase different products in E-commerce platforms. In such real-life scenarios, effective modeling of multi-typed user-item interactions can provide auxiliary knowledge to characterize the diverse user behavior semantics for interest representation in recommender systems [9, 36]. To simplify the model design, the embedding functions in most existing recommendation models nearly ignore the explicit encoding of multi-behavior collaborative signals, which is insufficient to yield satisfactory latent representations for both users and items. In this paper, we tackle the multi-behavior recommendation by enhancing user preference learning with the exploration of multi-typed user behavior data.
Although it is desirable to consider the behavior diversity in user interest representation learning for accurate recommendations, it is not a trivial task to capture the complex multi-behavioral collaborative relations. In particular, each type of user behavior has its own interaction context, and there exist complex dependencies across various types of interactions. Different behavior views usually provide complementary information for encoding users' interests. Therefore, in order to learn meaningful behavior representations from multi-typed user-item interactions, effective cross-type behavior dependency modeling is a necessity for solving the multi-behavior recommendation problem. In addition, in the view of the user-item interaction graph, the effectiveness of exploring subgraph structures has been shown in recently emerged graph-based methods (_e.g._, PinSage [50] and NGCF [34]), with the consideration of high-hop neighbors. However, to design an effective embedding function in the recommendation architecture for representation aggregation, it is crucial to expressively model the high-order multi-behavior patterns of users over the interaction graph structure from different propagation layers. We show the multi-behavior recommendation scenario with the illustrated examples in Figure 1. We can observe that a user can interact with items with different behavior types (differentiated by line weights), _e.g._, click, add-to-cart and purchase. In such cases, we generate a multi-behavior interaction graph to represent the diverse collaborative relations among users and items with the high-order multiplex connectivity information.
**Present Work.** Motivated by the aforementioned challenges, this work proposes a new recommendation framework: Multi-Behavior Graph Neural Network (MBRec), which explicitly incorporates the multi-typed behavior context into the encoding of diverse user preferences. In the proposed MBRec framework, we can capture the heterogeneous relationships across different types of user-item interactions. Specifically, to cope with the cross-type behavior dependencies, we propose an attention-enhanced graph neural network to preserve the high-order interaction patterns over the multiplex graph structures. In this way, MBRec is able to preserve the fine-grained semantics of user-item relations and facilitate the modeling of diverse user interests. To differentiate the influences of different types of behaviors, a gated aggregation mechanism is developed to help fuse the contextual signals from different types of behaviors for better embedding learning. In addition, we endow our MBRec with the capability of aggregating high-order behavior representations in an adaptive way. To achieve this goal, a mutual relation encoder is proposed to learn summarized representations across different graph layers. This component allows our model to better capture and interpret the global property of the user-item interaction graph.
Lastly, it is worth mentioning that although the multi-behavior information has been considered in recent studies [7, 8], these works only consider multi-behavior dependencies with predefined correlations, which can hardly capture the complex cross-type behavior dependencies in real-world recommendation scenarios comprehensively. In addition, another multi-behavior recommendation study proposes to model the dependencies between different interactions based on a multi-channel graph convolutional network [16]. Different from these approaches, we contribute a new recommendation framework to explicitly exploit the high-order collaborative signals in the form of multi-behavior patterns. To enhance the global multi-behavior relation learning, we design a new graph aggregation scheme with order-wise mutual dependency modeling, which automatically differentiates the importance of behavior-aware representations from different graph hops during the message passing process. Failing to consider the complex global multi-behavior patterns could easily lead to suboptimal representations for user preference modeling.
In summary, we highlight our contributions as follows:
* We introduce a new multi-behavior recommendation framework, which explores the high-order and cross-type behavior inter-dependencies in a hierarchical manner.
* We propose a new recommendation framework that inherits the merits of graph-enhanced collaborative filtering paradigm and designs a multi-behavior propagation strategy for heterogeneous user-item interactions.
* Then, a graph-structured mutual relation encoder is developed to explore high-order user preference and promote the collaboration of different layer-specific patterns for robust multi-behavior representations. Furthermore, a graph sampling algorithm is developed to improve the scalability of MBRec for dealing with large-scale graph data.
* Experimental results on three real-world datasets show the superiority of our MBRec model over a variety of baselines. We further perform the model ablation study to better understand the effect of our designed sub-modules.
In this paper, we propose to advance our previous work [43] in the following aspects:
* i) Different from our previous method, which overlooks the cross-layer implicit dependency between behavior representations, we propose a new recommendation framework to explicitly promote the cooperation of behavior patterns across different graph layers, for more accurate user and item representations (Section III).
* ii) We provide a comprehensive complexity analysis and computational cost evaluation of our model to show that our proposed model achieves competitive time efficiency compared to most state-of-the-art techniques (Section III-E and Section IV-J).
* iii) We further evaluate the model performance with respect to different sparsity levels of user-item interaction data, to justify the robustness of our multi-behavior recommendation method in capturing user preference under different data sparsity degrees (Section IV-H).
* iv) We add three recently developed baselines (_i.e_., MBGCN, MATN and NGCF+M) in our performance evaluation to show the superiority of our method (Section IV). Additionally, we present more hyperparameter study results with respect to the hidden state dimensionality and behavior embedding channels (Section IV-I).
* v) We perform case studies to show the interpretation capability of our approach in capturing the behavior relationships as well as the layer-wise representation dependence (Section IV-K).
* vi) We adopt two new datasets (BeiBei and IJCAI-contest) collected from real-world e-commerce platforms for performance evaluation across different experimental settings. The user-item interactions in online retailing systems are multiplex in nature and can well reflect the relation heterogeneity between users and items.
* vii) Finally, we present a detailed discussion of the related work from three research lines: neural collaborative filtering techniques, recommendation with multi-behavior modeling, and graph neural networks (Section V).
Figure 1: Illustration of the user-item multi-behavior interactions and the corresponding multi-behavior high-order connectivity. Best viewed in color.
## II Preliminaries
We begin by describing the multi-behavior recommendation problem and introducing key notations. Suppose we have a recommendation scenario with a set of users \(U\) (\(u_{i}\in U\)) and a set of items \(V\) (\(v_{j}\in V\)). Here, the index of a user and an item is denoted by \(i\) (\(i\in[1,...,I]\)) and \(j\) (\(j\in[1,...,J]\)), respectively. Different from most existing recommender systems, which associate users with their interacted items based on a single type of user-item relation, this work explores the inter-dependent relations across different types of user-item behaviors (_e.g_., click, tag-as-favorite, review, like, or purchase).
Definition 1. **Multi-Behavior Interaction Tensor X**. To represent the multi-typed interactions between users and items, we define a three-way multi-behavior interaction tensor \(\textbf{X}\in\mathbb{R}^{I\times J\times K}\), where \(K\) (indexed by \(k\)) denotes the number of types of user-item interactions. Given the \(k\)-th type of interactions, the corresponding element \(x_{i,j}^{k}\in\textbf{X}\) is set to 1 if the item \(v_{j}\) has been adopted by user \(u_{i}\). Otherwise, \(x_{i,j}^{k}=0\).
**Multi-Behavior Recommendation**. In the recommendation scenario with multi-type interaction behaviors, we first define the target behavior type (_e.g_., \(k\)-th) as our predictive objective and consider other types (\(k^{\prime}\in[1,...,K]\&k^{\prime}\neq k\)) of interaction as auxiliary behaviors. In real-world recommendation scenarios, the target behavior type can be set according to different task-specific requirements. For example, some e-commerce systems may be more interested in ultimate purchase transactions [37], while forecasting click-interactive behavior is also very important for online advertising platforms [24]. We formally present the multi-behavior recommendation as:
**Input**: multi-behavior interaction tensor \(\textbf{X}\in\mathbb{R}^{I\times J\times K}\) which jointly includes source and target behavior of users in \(U\).
**Output**: A predictive framework which effectively forecasts the unknown target type of user-item interactions.
## III Methodology
In this section, we first elaborate on the technical details of our proposed MBRec framework. Its key idea is to explore the complex inter-dependencies across different types of users' interactive behavior, and to parameterize weight matrices for the relation heterogeneity aggregation, high-order message passing and propagation modules. This process can be decomposed into two key components: (i) Multi-Behavior Graph-Structured Dependency Modeling: it jointly preserves the type-specific behavior semantics and the type-wise behavior inter-dependencies within a graph-structured learning architecture. (ii) Cross-Layer Mutual Relation Learning: it captures the mutual relationships between the aggregated multi-hop feature representations of neighbors at different hops.
### _Multi-Behavior Graph Dependency Modeling_
Figure 2 presents the model flow of our multi-behavior graph dependency modeling. With the consideration of different types of user-item behavioral relations, we first construct the multi-behavior user-item interaction graph.
Definition 2. **Multi-Behavior Graph**\(G\). Based on the input multi-behavior interaction tensor **X**, we generate a graph \(G=\{U,V,\mathcal{E}\}\) where user node \(u_{i}\in U\) and item node \(v_{j}\in V\) is connected with the edge \(e_{i,j,k}\in\mathcal{E}\), if \(u_{i}\) interacts with \(v_{j}\) under the \(k\)-th behavior type (_i.e_., \(x_{i,j}^{k}=1\)). Each edge \(e_{i,j,k}\in\mathcal{E}\) is associated with a specific behavior type of \(k\). Due to the interaction heterogeneity property, there exist multiplex edges between the same user-item pair given \(u_{i}\) interacts with \(v_{j}\) under multiple behavior types.
#### Iii-A1 **Behavior-aware Message Construction**
Based on the multi-behavior graph \(G\), we first generate the propagated information for the user(\(u_{i}\))-item(\(v_{j}\)) pair with the following behavior-aware message passing paradigm:
\[\textbf{H}_{i\leftarrow}^{k,(l+1)}=\lambda(\{\textbf{E}_{j}^{(l)}: x_{i,j}^{k}=1\})\] \[\textbf{H}_{j\leftarrow}^{k,(l+1)}=\lambda(\{\textbf{E}_{i}^{(l)}: x_{i,j}^{k}=1\}) \tag{1}\]
where \(\lambda(\cdot)\) denotes the behavior-aware encoding function for preserving the semantics of an individual type of interactive behavior for \(u_{i}\) and \(v_{j}\). Here, \(\textbf{H}_{i\leftarrow}^{k,(l+1)}\in\mathbb{R}^{d}\) and \(\textbf{H}_{j\leftarrow}^{k,(l+1)}\in\mathbb{R}^{d}\) with dimensionality of \(d\) (output from the \(\lambda(\cdot)\) function) are the \((l+1)\)-th layer representations which preserve the characteristics of the behavior type \(k\). In the first-layer propagation, we generate the input feature vectors \(\textbf{E}_{i}^{0}\) and \(\textbf{E}_{j}^{0}\) of user \(u_{i}\) and item \(v_{j}\) with Autoencoder-based pre-training [25] over the multi-behavior interaction tensor **X**, to project different types of high-dimensional behavior embeddings into a low-dimensional latent space. The reason for using the Autoencoder is to generate more informative initial embeddings for users and items, through the auto-encoding training paradigm.

\begin{table}
\begin{tabular}{c|c} \hline Notations & Description \\ \hline \(U\) (\(u_{i}\in U\)) & Set of users \\ \(V\) (\(v_{j}\in V\)) & Set of items \\ \(K\) (indexed by \(k\)) & Number of behavior types \\ \(\textbf{X}\in\mathbb{R}^{I\times J\times K}\) & Multi-behavior interaction tensor \\ \(\textbf{H}_{i\leftarrow}^{k,(l)}\), \(\textbf{H}_{j\leftarrow}^{k,(l)}\) & Behavior-aware representations of users/items \\ \(\lambda(\cdot)\) & Behavior-aware encoding function \\ \(\hat{\textbf{H}}_{i\leftarrow}^{k,(l)}\) & Recalibrated type-specific behavior embedding \\ \(\hat{\beta}_{k}\) & Attention value for type-specific behavior \\ \(\psi^{(l)}\) & Cross-behavior message aggregation function \\ \(G=\{U,V,\mathcal{E}\}\) & Multi-behavior high-order graph \\ \(\hat{\textbf{E}}_{i}^{(l)}\), \(\hat{\textbf{E}}_{j}^{(l)}\) & Cross-layer fused embedding \\ \(\Gamma_{i,j}\) & Fused representation for prediction \\ \hline \end{tabular}
\end{table} TABLE I: Summary of Key Notations

Fig. 2: Multi-behavior graph dependency modeling.
In the multi-behavior recommendation setting, various types of user behaviors reflect preferences with behavior-specific characteristics. For example, view-interactive behavior happens more frequently than add-to-cart and add-to-favorite activities [9]. Additionally, add-to-favorite interactive behavior could provide rich information to characterize users' implicit interests over items, even though the purchase behavior may be postponed and not happen right away [2]. Hence, we design our semantic encoding function \(\lambda(\cdot)\) to capture the type-specific behavior contextual information in the message construction process.
**Multi-Channel Behavior Embedding Layer**. Inspired by recent advancement of memory-augmented neural network models in multi-dimensional context learning [33], we build our behavior semantic encoder upon the multi-channel neural framework to learn customized representations for individual type of user-item interactions. Our multi-channel embedding layer utilizes the external representation units, in which each unit corresponds to a certain dimensional of behavior semantics (_e.g_., behavior data distributions with respect to different categories of items). Specifically, we integrate the channel-based projection layer with the behavior-aware attention mechanism to fuse the learned semantic information across different channels. For each behavior type \(k\), this module is firstly equipped with a contextual transformation layer to update the input user/item embeddings \(\mathbf{E}_{j}^{(l)}\) and \(\mathbf{E}_{j}^{(l)}\) in the \(l\)-th graph layer. Without loss of generality, we formally present the behavior semantic encoder with \(M\) (indexed by \(m\)) channels for the message of user \(u_{i}\) from his/her connected item nodes \(\{v_{j}|x_{i,j}^{k}=1\}\) under the behavior type of \(k\) as below:
\[\mathbf{H}_{i\leftarrow}^{k,(l+1)} =\sum_{m=1}^{M}\omega_{m}^{k}\mathbf{U}_{m}\sum_{x_{i,j}^{k}=1} \mathbf{E}_{j}^{(l)}\] \[\omega_{m}^{k} =\delta(\mathbf{K}\cdot\sum_{x_{i,j}^{k}=1}\mathbf{E}_{j}^{(l)} +\mathbf{b})(m) \tag{2}\]
where \(\mathbf{U}_{m}\in\mathbb{R}^{d\times d}\) is the \(m\)-th channel transformation, and \(\omega_{m}^{k}\) is the \(m\)-th weight calculated from the neighboring nodes \(\{v_{j}|x_{i,j}^{k}=1\}\) of \(u_{i}\) under behavior type \(k\). \(\sum_{m=1}^{M}\omega_{m}^{k}\mathbf{U}_{m}\) represents the learned behavior-type-specific contextual transformation. Additionally, \(\mathbf{K}\in\mathbb{R}^{M\times d}\) and \(\mathbf{b}\in\mathbb{R}^{M}\) are transformation and bias parameters to calculate the weights, and \(\delta(\cdot)\) denotes the ReLU activation. A similar semantic encoder can be applied to learn the embedding \(\mathbf{H}_{j\leftarrow}^{k,(l+1)}\) between item \(v_{j}\) and its connected users \(\{u_{i}|x_{i,j}^{k}=1\}\) with the \(k\)-th behavior type. By performing the dimension-wise relation learning for each type of behavior pattern, the underlying semantic information can be smoothly captured with the multi-channel behavior embedding in our framework.
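A minimal PyTorch-style sketch of the multi-channel embedding layer in (2) is given below; it operates on the summed neighbor embeddings of one behavior type, and the weight initialization is an assumption made purely for illustration (the released implementation may differ).

```python
import torch
import torch.nn.functional as F

class MultiChannelBehaviorEncoder(torch.nn.Module):
    # Channel transformations U_m and behavior-specific channel weights
    # omega_m^k computed from the summed neighbor embeddings, cf. (2).
    def __init__(self, d, num_channels):
        super().__init__()
        self.U = torch.nn.Parameter(torch.randn(num_channels, d, d) * 0.01)
        self.K = torch.nn.Parameter(torch.randn(num_channels, d) * 0.01)
        self.b = torch.nn.Parameter(torch.zeros(num_channels))

    def forward(self, neighbor_sum):                  # (batch, d): sum of E_j^(l)
        # omega_m^k = delta(K . sum_j E_j^(l) + b)(m), one weight per channel.
        omega = F.relu(neighbor_sum @ self.K.t() + self.b)      # (batch, M)
        # U_m applied to the summed neighbor embeddings, per channel.
        transformed = torch.einsum("mde,be->bmd", self.U, neighbor_sum)
        # Weighted sum over channels gives the type-specific message.
        return torch.einsum("bm,bmd->bd", omega, transformed)
```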
#### Iii-A2 **Behavior Inter-dependency Modeling**
In addition to encoding the type-specific behavior semantics, another key aspect of multi-behavior relation learning lies in exploring the inter-dependencies across different types of user-item interactive behavior. For instance, add-to-cart and tag-as-favorite activities are good indicators for purchase of users. Hence, after the message propagating process with the learned representations \(\mathbf{H}_{i\leftarrow}^{k,(l+1)}\) and \(\mathbf{H}_{j\leftarrow}^{k,(l+1)}\), it is crucial to consider the inter-dependencies among different types of behaviors (_i.e_., \(k\in[1,...,K]\)), and further refine the type-specific behavior embeddings propagated from other behavior types.
Motivated by the recent success of transformer networks in distilling inter-correlations between entities [52], we develop a self-attentive network for multi-behavior inter-dependency modeling. Based on the paradigm of the self-attention layer with scaled dot-product attention, we create three transformation matrices to project the input representation into three latent spaces, _i.e_., \(\mathbf{Q}^{c}\in\mathbb{R}^{\frac{d}{C}\times d}\), \(\mathbf{K}^{c}\in\mathbb{R}^{\frac{d}{C}\times d}\) and \(\mathbf{V}^{c}\in\mathbb{R}^{\frac{d}{C}\times d}\) as the query, key and value transformations for each of the \(C\) (indexed by \(c\)) attention heads. Then, the dependency score between the \(k\)-th and the \(k^{\prime}\)-th behavior message is calculated in the dot-product manner as follows:

\[\alpha_{k,k^{\prime}}^{c}=\frac{(\mathbf{Q}^{c}\mathbf{H}_{i\leftarrow}^{k,(l+1)})^{\top}\cdot(\mathbf{K}^{c}\mathbf{H}_{i\leftarrow}^{k^{\prime},(l+1)})}{\sqrt{\frac{d}{C}}};\ \ \ \hat{\alpha}_{k,k^{\prime}}^{c}=\frac{\exp\alpha_{k,k^{\prime}}^{c}}{\sum_{k^{\prime\prime}=1}^{K}\exp\alpha_{k,k^{\prime\prime}}^{c}} \tag{3}\]
where \(\hat{\alpha}_{k,k^{\prime}}^{c}\) is the learned quantitative correlation weight of behavior type \(k\) on \(k^{\prime}\), which is calculated from the intermediate score \(\alpha_{k,k^{\prime}}^{c}\) by the softmax function. To expand the model's ability in capturing cross-behavior dependency from different hidden dimensions (_e.g_., co-occurrence frequencies and regularities), we augment the self-attention layer with the multi-head mechanism to endow the attentive relation learning under multiple representation subspaces. In such cases, we associate each head-specific attention component with a set of Query (\(\mathbf{Q}^{c}\)), Key (\(\mathbf{K}^{c}\)) and Value (\(\mathbf{V}^{c}\)) weight matrices. Based on the head-specific correlation scores, we then recalibrate the type-specific behavior message by concatenating the multiple attention heads:
\[\tilde{\mathbf{H}}_{i\leftarrow}^{k,(l+1)}=\text{MH-Att}(\mathbf{H}_{i\leftarrow}^{k,(l+1)})=\Big{|}\Big{|}_{c=1}^{C}\sum_{k^{\prime}=1}^{K}\hat{\alpha}_{k,k^{\prime}}^{c}\mathbf{V}^{c}\cdot\mathbf{H}_{i\leftarrow}^{k^{\prime},(l+1)} \tag{4}\]
where \(||\) denotes the vector concatenation, and \(\tilde{\mathbf{H}}_{i\leftarrow}^{k,(l+1)}\) is the recalibrated message propagated to the target node \(u_{i}\) under behavior type \(k\). To preserve the original type-specific behavior message and prevent the gradient vanishing issue, we perform the element-wise addition between the original and recalibrated type-specific messages with a residual connection scheme. That is, \(\hat{\mathbf{H}}_{i\leftarrow}^{k,(l+1)}=\tilde{\mathbf{H}}_{i\leftarrow}^{k,(l+1)}+\mathbf{H}_{i\leftarrow}^{k,(l+1)}\) is the refined type-specific behavior representation which preserves the cross-type behavior inter-dependent information.
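The inter-dependency modeling of (3)-(4), together with the residual connection, can be sketched as follows; this is a minimal PyTorch illustration in which \(d\) is assumed divisible by \(C\), and separate parameters would be maintained per propagation layer \(l\).

```python
import torch

def behavior_self_attention(H, Q, K, V):
    # H: (K_types, d) type-specific messages for one node;
    # Q, K, V: (C, d/C, d) per-head query/key/value transformations.
    Kt, d = H.shape
    C, e, _ = Q.shape
    q = torch.einsum("ced,kd->cke", Q, H)
    k = torch.einsum("ced,kd->cke", K, H)
    v = torch.einsum("ced,kd->cke", V, H)
    scores = torch.einsum("cke,cje->ckj", q, k) / (d / C) ** 0.5
    alpha = torch.softmax(scores, dim=-1)             # alpha-hat of (3)
    heads = torch.einsum("ckj,cje->cke", alpha, v)    # per-head recalibration
    tilde_H = heads.permute(1, 0, 2).reshape(Kt, C * e)  # concat heads, cf. (4)
    return tilde_H + H                                # residual connection
```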
#### Iii-A3 **Personalized Multi-Behavior Aggregation**
With the incorporation of both the type-specific behavior semantics and type-wise inter-correlations into our multi-behavior dependency modeling module, we introduce an aggregation layer to fuse interactive patterns across different types of behaviors. Formally, we define the message aggregation function as:
\[\mathbf{E}_{i}^{(l+1)}=\psi(\{\hat{\mathbf{H}}_{i\leftarrow}^{k,(l+1)}:k=[1,2,...,K]\}) \tag{5}\]
In real-life online platforms (_e.g._, online retailing or review sites), item interaction patterns may vary across users due to their different behavior preferences. For example, some people prefer to add their interested items into the favorite list but sporadically make purchases, while for others the add-to-favorite action is more likely to be followed by the purchase behavior. Therefore, to aggregate messages from different types of behavior embeddings and obtain expressive representations on the local user-item multi-behavior interaction graph, it is essential to identify the contribution of different types of behaviors in assisting the final prediction on the target type of user behavior in a customized manner.
Towards this end, with the recalibrated type-specific behavior message \(\hat{\mathbf{H}}_{i\leftarrow}^{k,(l+1)}\), the multi-behavior graph encoder is further equipped with an attention network to differentiate the influences of different behavior types for user \(u_{i}\). Specifically, the message aggregation module adopts a two-layer feed-forward network for weight estimation:
\[\beta_{k} =\mathbf{w}_{2}^{\top}\cdot\delta(\mathbf{W}_{1}\hat{\mathbf{H}} _{i\leftarrow}^{k,(l+1)}+\mathbf{b}_{1})+b_{2}\] \[\hat{\beta}_{k} =\frac{\exp\beta_{k}}{\sum_{k^{\prime}=1}^{K}\exp\beta_{k^{\prime}}} \tag{6}\]
where \(\mathbf{W}_{1}\in\mathbb{R}^{d^{\prime}\times d}\), \(\mathbf{b}_{1}\in\mathbb{R}^{d^{\prime}}\), \(\mathbf{w}_{2}\in\mathbb{R}^{d^{\prime}}\), and \(b_{2}\in\mathbb{R}\) are transformations and bias vectors for weight calculation, and \(d^{\prime}\) is the dimensionality of the intermediate hidden layer. \(\delta\) is the ReLU activation function. \(\beta_{k}\) is the intermediate attention value and \(\hat{\beta}_{k}\) is the final influence weight of interaction type \(k\), calculated using the softmax function. With the calculated weights, the message aggregation layer then performs the weighted summation over the refined type-specific behavior embeddings \(\hat{\mathbf{H}}_{i\leftarrow}^{k,(l+1)}\), so as to acquire the encoded node embedding \(\mathbf{E}_{i}^{(l+1)}=\sum_{k=1}^{K}\hat{\beta}_{k}\hat{\mathbf{H}}_{i\leftarrow}^{k,(l+1)}\) corresponding to the \(l\)-th layer graph encoder. With the developed multi-behavior modeling component, we endow MBRec with the capability of modeling type-specific behavioral semantics and cross-type dependencies.
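The aggregation in (5)-(6) reduces to a small attention network over the \(K\) refined behavior messages; a minimal PyTorch sketch follows.

```python
import torch

class BehaviorAggregator(torch.nn.Module):
    # Two-layer feed-forward attention over the K refined behavior
    # messages, followed by a weighted sum, cf. (5)-(6).
    def __init__(self, d, d_hidden):
        super().__init__()
        self.W1 = torch.nn.Linear(d, d_hidden)
        self.w2 = torch.nn.Linear(d_hidden, 1)

    def forward(self, H_hat):                         # (K_types, d)
        beta = self.w2(torch.relu(self.W1(H_hat)))    # intermediate scores
        beta = torch.softmax(beta, dim=0)             # beta-hat of (6)
        return (beta * H_hat).sum(dim=0)              # E_i^(l+1), shape (d,)
```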
### _High-order Mutual Relation Learning_
#### Iii-B1 **High-order Multi-Behavior Pattern Propagation**
With the multi-behavior pattern representation obtained from first-order dependency learning, we can exploit the high-order multi-behavior relational structures on the global graph \(G=\{U,V,\mathcal{E}\}\) by stacking more embedding propagation layers (as shown in Figure 3). Based on our developed multi-behavior graph encoder, in the high-order information propagation process, MBRec is capable of capturing the high-order collaborative signals across different types of user-item interactive relations. After performing the multi-behavior graph encoder \(l\) times, each node (_i.e_., \(u_{i}\) or \(v_{j}\)) receives the messages propagated from its neighbors within \(l\)-hop distance, which is presented as follows:
\[\mathbf{E}_{u}^{(l+1)} =\psi(\text{MH-Att}(\lambda(\{\mathbf{E}_{j}^{(l)}:x_{i,j}^{k}=1 \})))\] \[=\sum_{k=1}^{K}\hat{\beta}_{k}\cdot\text{MH-Att}(\mathbf{X}^{k} \mathbf{E}_{v}^{(l)}\sum_{m=1}^{M}\omega_{m}^{k}\mathbf{U}_{m}^{\top}) \tag{7}\]
where \(\mathbf{X}^{k}\in\mathbb{R}^{I\times J}\) is the adjacent matrix under the \(k\)-th behavior type, \(\mathbf{E}_{u}^{(l+1)}\in\mathbb{R}^{I\times d}\) and \(\mathbf{E}_{v}^{(l)}\in\mathbb{R}^{J\times d}\) refer to the embedding matrix of users and items, respectively.
While the high-order connectivity is exploited in the multi-behavior graph \(G\), the message propagation process with the rigid order hinders the representation power in learning the global graph-structured behavior dependencies. In specific, the current graph-based message passing architecture maps the input multi-behavior user-item interactions into a fixed length embedding with the highest \(L\)-th layer, which results in the information loss of previous layer-specific aggregated patterns. For example, a user can interact with different items with direct (one-hop) or indirect (multi-hop) connected relationships. In such case, it is crucial to identify the most relevant signals across both low- and high-order connections, for characterizing the preference of this target user. To overcome this limitation and free the message passing graph neural architecture from fixed-length internal representation, we introduce a cross-layer mutual relation learning module, to endow our graph neural architecture MBRec with the capability to focus on certain parts of graph-structured behavioral patterns across different-order connectivity.
#### Iii-B2 **Cross-Layer Mutual Relation Modeling**
To remedy the existing shortcomings, we propose to model the importance between users' and items' multi-order embeddings with a multi-head self-attention network. For user \(u_{i}\)'s multi-order embeddings \(\mathbf{E}_{i}^{(0)},...,\mathbf{E}_{i}^{(l)},...,\mathbf{E}_{i}^{(L)}\) (\(\mathbf{E}_{i}^{(l)}\in\mathbb{R}^{d}\)) and item \(v_{j}\)'s embeddings \(\mathbf{E}_{j}^{(0)},...,\mathbf{E}_{j}^{(l)},...,\mathbf{E}_{j}^{(L)}\) (\(\mathbf{E}_{j}^{(l)}\in\mathbb{R}^{d}\)) learned by the stacked multi-behavioral graph encoders, we first normalize the multi-order embeddings for cross-layer embedding fusion with the following operation:
\[\hat{\mathbf{E}}_{i}^{(l)}=\frac{\mathbf{E}_{i}^{(l)}}{\sqrt{\|\mathbf{E}_{i}^ {(l)}\|_{2}^{2}}};\ \ \ \hat{\mathbf{E}}_{j}^{(l)}=\frac{\mathbf{E}_{j}^{(l)}}{\sqrt{\|\mathbf{E}_{j}^{(l )}\|_{2}^{2}}} \tag{8}\]
Based on the multi-head self-attention mechanism, we calculate \(C\) (corresponding to the \(C\) heads) importance matrices \(\phi^{c}\in\mathbb{R}^{(L+1)\times(L+1)}\) for adaptive multi-order combination.
Fig. 3: High-order propagation with layer-wise mutual relation learning for multi-behavior representations.
With the design of 2-D matrix \(\phi^{c}\), we can capture the pairwise mutual relation across different layer-specific multi-behavior patterns. We present this process as follows:
\[\phi^{c}_{l,l^{\prime}}=\delta((\mathbf{P}^{c}\hat{\mathbf{E}}^{(l)}_{i})^{\top}\cdot(\mathbf{P}^{c}\hat{\mathbf{E}}^{(l^{\prime})}_{j})) \tag{9}\]
where \(\phi^{c}_{l,l^{\prime}}\) is the \(c\)-th importance score for the combination between the \(l\)-th layer user embedding and the \(l^{\prime}\)-th layer item embedding. \(\mathbf{P}^{c}\in\mathbb{R}^{\frac{d}{C}\times d}\) is the transformation to acquire key vectors, and \(\delta(\cdot)\) is the ReLU activation function to learn non-linearities of feature interactions. Based on the learned importance scores, we then generate the fused representation for prediction as follows:
\[\boldsymbol{\Gamma}_{i,j}=\Big{\Vert}_{c=1}^{C}\sum_{l=0}^{L}\sum_{l^{\prime}=0}^{L}\phi^{c}_{l,l^{\prime}}(\mathbf{T}^{c}\hat{\mathbf{E}}^{(l)}_{i})\circ(\mathbf{T}^{c}\hat{\mathbf{E}}^{(l^{\prime})}_{j}) \tag{10}\]
where \(\circ\) denotes the element-wise product, \(\Vert\) denotes concatenation, and \(\mathbf{T}^{c}\in\mathbb{R}^{\frac{d}{C}\times d}\) is the value transformation. The \(C\) head-specific representations are concatenated to generate the fused representation \(\boldsymbol{\Gamma}_{i,j}\in\mathbb{R}^{d}\), which is fed into a feed-forward network to predict the unknown user-item interaction with the target behavior type \(k\):
\[\textbf{Pr}_{i,j}=\textbf{w}_{4}^{\top}(\delta(\textbf{W}_{3}\boldsymbol{ \Gamma}_{i,j}+\textbf{b}_{3})+\boldsymbol{\Gamma}_{i,j}) \tag{11}\]
where \(\mathbf{W}_{3}\in\mathbb{R}^{d\times d}\), \(\mathbf{w}_{4}\in\mathbb{R}^{d}\) and \(\mathbf{b}_{3}\in\mathbb{R}^{d}\) are network parameters, and \(\delta(\cdot)\) is the ReLU activation. Note that a residual connection is employed for better gradient propagation.
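To make the cross-layer fusion concrete, below is a single-head NumPy sketch of Eqs. (8)-(11). It assumes one attention head whose head dimension equals \(d\) (so the concatenation over heads in Eq. (10) is dropped); the function and variable names are illustrative.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def cross_layer_fuse(E_i, E_j, P, T, W3, w4, b3):
    """Single-head sketch of Eqs. (8)-(11) for one user-item pair."""
    # Eq. (8): L2-normalize each layer-specific embedding
    Ei = E_i / np.linalg.norm(E_i, axis=1, keepdims=True)
    Ej = E_j / np.linalg.norm(E_j, axis=1, keepdims=True)
    # Eq. (9): pairwise layer-to-layer importance scores phi[l, l']
    phi = relu((P @ Ei.T).T @ (P @ Ej.T))           # (L+1, L+1)
    # Eq. (10): importance-weighted fusion of value-transformed pairs
    Vi, Vj = Ei @ T.T, Ej @ T.T                     # (L+1, d)
    Gamma = np.einsum('ab,ad,bd->d', phi, Vi, Vj)
    # Eq. (11): residual feed-forward prediction
    return w4 @ (relu(W3 @ Gamma + b3) + Gamma)

L, d = 2, 16
rng = np.random.default_rng(0)
score = cross_layer_fuse(rng.normal(size=(L + 1, d)), rng.normal(size=(L + 1, d)),
                         rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                         rng.normal(size=(d, d)), rng.normal(size=d), rng.normal(size=d))
```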
```
Input: seed users \(\mathbb{U}\), seed items \(\mathbb{V}\), adjacency tensor \(\mathbf{X}\in\mathbb{R}^{I\times J\times K}\), sampling depth \(D\), sampling number per step \(N\)
Output: sampled users \(\hat{\mathbb{U}}\), sampled items \(\hat{\mathbb{V}}\), adjacency matrix of the sampled sub-graph \(\hat{\mathbf{X}}\)
1 Initialize the normalized adjacency matrix \(\bar{\mathbf{X}}\in\mathbb{R}^{I\times J}\) with \(\bar{\mathbf{X}}_{i,j}=\frac{\|\mathbf{X}_{i,j}\|_{1}}{\sqrt{\|\mathbf{X}_{i,:}\|_{1}\,\|\mathbf{X}_{:,j}\|_{1}}}\)
```
**Algorithm 1** Sub-Graph Sampling of MBRec
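Since only the header and the first step of Algorithm 1 are preserved above, the following is a hypothetical NumPy sketch of an importance-based seed-expansion sampler that is consistent with the preserved inputs (\(D\) steps, \(N\) nodes per step, normalized adjacency \(\bar{\mathbf{X}}\)); the scoring rule and the function signature are assumptions rather than the exact procedure.

```python
import numpy as np

def sample_subgraph(X_bar, seed_users, seed_items, D, N, rng=np.random):
    """Hypothetical sketch of Algorithm 1: expand seeds for D steps,
    drawing N users and N items per step with probability proportional
    to their connectivity to the current sample (duplicates are absorbed
    by the set union; assumes enough connected candidates exist)."""
    users, items = set(seed_users), set(seed_items)
    for _ in range(D):
        u_idx = np.fromiter(users, dtype=int)
        v_idx = np.fromiter(items, dtype=int)
        P_v = X_bar[u_idx].sum(axis=0)          # item inclusion scores, shape (J,)
        P_u = X_bar[:, v_idx].sum(axis=1)       # user inclusion scores, shape (I,)
        items |= set(rng.choice(len(P_v), size=N, replace=False, p=P_v / P_v.sum()))
        users |= set(rng.choice(len(P_u), size=N, replace=False, p=P_u / P_u.sum()))
    u_idx, v_idx = sorted(users), sorted(items)
    return u_idx, v_idx, X_bar[np.ix_(u_idx, v_idx)]
```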
**Time Complexity.** The overall time complexity of MBRec consists of the cost for sub-graph sampling and the cost for model training and inference. As described in Algorithm 1, the major cost of the former process is \(O(D\times N\times(I+J))\) for updating the sampling probabilities \(P_{u}\) and \(P_{v}\), where \(D\) is the number of sampling steps and \(N\) is the number of sampled nodes per step. In the model running phase, MBRec takes \(O(L\times|\mathbf{X}|\times d)\) (\(|\mathbf{X}|\) denotes the number of non-zero elements in \(\mathbf{X}\)) to encode the type-specific messages, in which \(O(L\times(I+J)\times K\times d^{2})\) is needed by the attention module. The complexity of the type-wise inter-dependency modeling and the aggregation layer is analogously \(O(L\times(I+J)\times K\times d^{2})\), in which the primary contributor is the matrix multiplications. The complexity of the cross-order mutual relation learning is \(O(|\mathbf{X}|\times L^{2}\times d^{2})\), which comes from the order-wise representation fusion, and this term dominates the complexity of the model running process. Empirically, by sharing the sampled sub-graph among training/testing steps, the sub-graph sampling costs much less time compared to the entire computational cost.
```
Input: multi-behavior interaction tensor \(\mathbf{X}\in\mathbb{R}^{I\times J\times K}\), initial node embeddings \(\mathbf{E}^{(0)}\), the number of graph layers \(L\), the number of samples for training \(S\), regularization weight \(\lambda\), the number of epochs \(E\)
Output: trained model parameters \(\mathbf{\Theta}\)
1  Initialize model parameters \(\mathbf{\Theta}\)
2  for \(e=1\) to \(E\) do
3    Sample seed nodes \(\mathbb{U}\), \(\mathbb{V}\) and generate the sub-graph \((\hat{\mathbb{U}},\hat{\mathbb{V}},\hat{\mathbf{X}})\) from the seeds according to Algorithm 1
4    Gather \(\hat{\mathbf{E}}^{(0)}\) from \(\mathbf{E}^{(0)}\) for \(u_{i}\) in \(\hat{\mathbb{U}}\) and \(v_{j}\) in \(\hat{\mathbb{V}}\)
5    for \(l=1\) to \(L\) do
6      foreach \(u_{i}\) in \(\hat{\mathbb{U}}\), \(v_{j}\) in \(\hat{\mathbb{V}}\) and \(k=1\) to \(K\) do
7        Generate the type-specific behavior messages \(\mathbf{H}^{k}\)
8        Refine the messages into \(\hat{\mathbf{H}}^{k}\)
9        Aggregate the embeddings into \(\mathbf{E}^{(l)}\)
10     end foreach
11   end for
12   \(\mathcal{L}=\lambda\|\mathbf{\Theta}\|_{2}^{2}\)
13   foreach \(u_{i}\) in \(\hat{\mathbb{U}}\) do
14     Sample positive items \(v_{p_{s}}\) and negative items \(v_{n_{s}}\) from \(\hat{\mathbb{V}}\)
15     foreach pair \((v_{p_{s}},v_{n_{s}})\) do
16       Infer the interaction probabilities \(\text{Pr}_{i,p_{s}}\) and \(\text{Pr}_{i,n_{s}}\)
17       \(\mathcal{L}\mathrel{+}=\max(0,1-\text{Pr}_{i,p_{s}}+\text{Pr}_{i,n_{s}})\)
18     end foreach
19   end foreach
20   Update the model by optimizing the objective \(\mathcal{L}\)
21 end for
22 return \(\mathbf{\Theta}\)
```
**Algorithm 2** Model Optimization of MBRec
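For clarity, a minimal Python sketch of the pairwise hinge objective accumulated in the inner loop of Algorithm 2; the `model(u, v)` callable standing in for the interaction probability \(\text{Pr}_{i,j}\) is a placeholder assumption.

```python
def hinge_loss(pr_pos, pr_neg, margin=1.0):
    """Pairwise hinge term added to the objective L in Algorithm 2."""
    return max(0.0, margin - pr_pos + pr_neg)

def user_loss(model, u, pos_items, neg_items):
    """Sum of hinge terms over sampled positive/negative pairs of user u."""
    return sum(hinge_loss(model(u, vp), model(u, vn))
               for vp, vn in zip(pos_items, neg_items))
```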
**Space Complexity.** Since MBRec samples larger sub-graphs for computational efficiency and data integrity, it takes more memory than some GNN models during sub-graph sampling, but the memory cost is fully acceptable for common devices. For the model memory cost, the space complexity of MBRec is \(O(L\times(I+J)\times K\times d)\), which is mainly for the intermediate hidden states, the same as common graph neural networks (_e.g._, GCN and GraphSAGE) for modeling multi-behavior data.
## IV Evaluation
We evaluate MBRec to answer the following research questions:
* **RQ1**: How does our _MBRec_ perform compared with various recommendation baselines on different datasets?
* **RQ2**: How does each model design (_e.g_., multi-channel behavior embedding layer and cross-layer mutual relation learning module) affect the model performance?
* **RQ3**: What is the impact of incorporating different types of behaviour context in our graph neural multi-behavior recommender system?
* **RQ4**: How do different interaction sparsity degrees influence the recommendation performance?
* **RQ5**: How do the key hyperparameters impact the performance of _MBRec_ neural architecture?
* **RQ6**: How is the model efficiency of _MBRec_ when competing with various types of recommendation techniques?
* **RQ7**: How does the user multi-behavior dependency study benefit the interpretation ability for recommendation?
* **RQ8**: What is the effect of the graph sampling algorithm on the model performance of _MBRec_?
### _Data Description_
Our evaluations are performed on three real-world datasets: Tmall, BeiBei and IJCAI-Competition. We summarize the detailed statistical information of those datasets in Table II.
* **Tmall**. This is a public recommendation dataset from the Tmall e-commerce platform, including four types of user behaviors: click, add-to-cart, tag-as-favorite and purchase. This data contains 147,894 users and 99,037 items.
* **BeiBei**. This is another e-commerce dataset for item recommendation from one of the largest infant product retail sites in China. There are 21,716 users and 7,977 items in this dataset with three types of user-item interactions, namely, click, add-to-cart and purchase.
* **IJCAI-Competition**. This data comes from the released repository of the IJCAI competition to support research on user online behavior modeling. It involves four types of interaction behaviors between users and items, _i.e._, click, add-to-cart, tag-as-favorite and purchase. 423,423 users and 874,328 items are included in this data source.
To be consistent with the settings in [44, 16], the target predicted behaviors in our recommendation scenario are user purchases, while other types of behaviors (_e.g._, click, add-to-cart) are regarded as the auxiliary behaviors.
### _Evaluation Metrics_
In our experiments, the models are evaluated on the top-\(N\) item recommendation task with the metrics of Hit Ratio (HR)@\(N\) and NDCG@\(N\). In our evaluation protocol, we use the leave-one-item-out strategy [55] to consider the last
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Dataset & User \# & Item \# & Interaction \# & Interactive Behavior Types \\ \hline Tmall & 147894 & 99037 & 7658926 & \{Page View, Favorite, Cart, Purchase\} \\ BeiBei & 21716 & 7977 & 3333086 & \{Page View, Cart, Purchase\} \\ IJCAI & 423423 & 874328 & 36203512 & \{Page View, Favorite, Cart, Purchase\} \\ \hline \hline \end{tabular}
\end{table}
Table II: Statistics of our evaluation datasets.
interaction with the target behavior of each user as the testing set. In particular, following similar settings in [17, 29], for each user, we sample 99 items as negative instances from the set of all non-interacted items. Items in the test set are regarded as the positive instances.
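Under this protocol, HR@\(N\) and NDCG@\(N\) for a single user reduce to a rank test of the positive item against the 99 sampled negatives; a minimal NumPy sketch (function name illustrative):

```python
import numpy as np

def hr_ndcg_at_n(scores, pos_index, N=10):
    """Leave-one-out HR@N / NDCG@N for one user.

    scores    : (100,) predicted scores for 1 positive + 99 negatives
    pos_index : position of the positive item inside `scores`
    """
    rank = int((scores > scores[pos_index]).sum())   # 0-based rank of the positive
    hr = 1.0 if rank < N else 0.0
    ndcg = 1.0 / np.log2(rank + 2) if rank < N else 0.0
    return hr, ndcg
```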
### _Baseline Models_
To demonstrate the effectiveness of our _MBRec_ framework, we compare our method with the following state-of-the-art methods, which span different categories:
**Conventional Matrix Factorization Method**:
* **BiasMF**[19]: this model attempts to incorporate user and item bias information into the matrix factorization, so as to learn latent embeddings of users/items.
**Neural Collaborative Filtering**:
* **NCF**[11]: it augments the embedding paradigm in collaborative filtering with the multilayer perceptron to enable the non-linear feature interactions.
* **DMF**[47]: this is another neural collaborative filtering technique, to learn a common low dimensional space for users and items with non-linear transformations.
**Autoencoder-based Recommendation Models**:
* **AutoRec**[25]: this recommendation model stacks multiple autoencoder layers to project user-item interaction inputs into the latent representations for data reconstruction.
* **CDAE**[41]: It is a model-based CF recommender with the denoising auto-encoder technique to learn user correlations.
**Neural Auto-regressive Recommender Systems**:
* **NADE**[58]: it designs a neural autoregressive architecture for recommendation task with the parameter sharing between different ratings.
* **CF-UIcA**[6]: this is a user-item co-autoregressive framework with a new stochastic learning strategy to encode correlations between users and items.
**Graph Neural Network-based Recommendation Methods**:
* **ST-GCN**[53]: this graph-based method is built over an encoder-decoder framework to perform the convolution-based embedding propagation between user and item nodes.
* **NGCF**[34]: it is a state-of-the-art GNN-based collaborative filtering model which exploits the high-order user-item interaction structures.
**Multi-Behavior Recommender Systems**:
* **NMTR**[7]: This method relies on the defined cascaded behavior relationships for encoding the multi-behavior semantics with a multi-task learning scheme.
* **DIPN**[9]: this deep intent prediction network aims to integrate the browsing and buying preferences of users with a new type of touch-interactive behavior pattern.
* **NGCF+M**[34]: we generate a new multi-behavior recommendation variant of NGCF by injecting the multi-behavior context into the message passing scheme.
* **MATN**[44]: this recommendation model considers the influences among different types of interactions with attentive weights for pattern aggregation.
* **GNMR**[43]: this is the previous version of our _MBRec_ which captures the pairwise dependencies between different types of behaviors with the integration of the multi-channel behavior representation layer and self-attention network for relation aggregation. However, it ignores the layer-wise embedding dependency during the representation integration.
* **MBGCN**[16]: this multi-behavior recommender system leverages the graph convolutional network to capture the multi-behaviour patterns over the interaction graph.
### _Parameter Settings_
Our _MBRec_ model is implemented with TensorFlow. The parameter inference is conducted with the Adam optimizer, and the training phase is performed with a learning rate of \(1e^{-3}\) and a batch size of 32. For the model hyperparameters, the dimensionality of the hidden state \(d\) is set as 16 in our representation space. The number of channels for the behavior embedding layer is set as 8. We use 2 attention-based representation heads in our behavior inter-dependency modeling component. To alleviate the overfitting issue, we apply a weight-decay regularization strategy with the decay parameter sampled from {0.05, 0.01, 0.005, 0.001}.
### _Performance Comparison (RQ1)_
The evaluation results (measured by HR@10 and NDCG@10) of all compared methods on three datasets are shown in Table III. In all cases, we could observe that _MBRec_
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{@5} & \multicolumn{2}{c|}{@10} & \multicolumn{2}{c|}{@20} & \multicolumn{2}{c|}{@50} \\ \cline{2-9} & HR & NDCG & HR & NDCG & HR & NDCG & HR & NDCG \\ \hline \hline BiasMF & 0.453 & 0.287 & 0.588 & 0.333 & 0.678 & 0.357 & 0.807 & 0.379 \\ \hline NCF & 0.447 & 0.283 & 0.601 & 0.336 & 0.698 & 0.359 & 0.819 & 0.383 \\ \hline NCF & 0.471 & 0.496 & 0.337 & 0.634 & 0.372 & 0.743 & 0.381 & 0.872 & 0.407 \\ \hline NGCF & 0.498 & 0.337 & 0.642 & 0.376 & 0.740 & 0.398 & 0.902 & 0.429 \\ \hline AutoRec & 0.456 & 0.291 & 0.607 & 0.341 & 0.707 & 0.366 & 0.826 & 0.391 \\ \hline MATN & 0.467 & 0.330 & 0.625 & 0.385 & 0.667 & 0.342 & 0.833 & 0.396 \\ \hline _MBRec_ & **0.527** & **0.389** & **0.670** & **0.402** & **0.788** & **0.433** & **0.927** & **0.461** \\ \hline \end{tabular}
\end{table} TABLE IV: Recommendation accuracy with different Top-\(N\) values in terms of _HR@N_ and _NDCG@N_ on BeiBei dataset.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Data} & \multirow{2}{*}{Metric} & BiasMF & DMF & NCF & AutoRec & CDAE & NADE & CF-UIcA & ST-GCN & NGCF & NMTR & DIPN & NGCF+M & MBGCN & MATN & GNMR & _MBRec_ \\ \hline \hline \multirow{4}{*}{BeiBei} & HR & 0.588 & 0.597 & 0.595 & 0.607 & 0.608 & 0.608 & 0.610 & 0.609 & 0.611 & 0.613 & 0.631 & 0.634 & 0.642 & 0.626 & 0.631 & **0.670** \\ \cline{2-18} & Imprv & 13.957 & 12.529 & 12.614 & 10.386 & 10.209 & 0.1026 & 0.296 & 0.606 & 3.906 & 0.184 & 5.686 & 4.366 & 7.039 & 6.188 & - \\ \cline{2-18} & NDCG & 0.333 & 0.336 & 0.332 & 0.341 & 0.341 & 0.343 & 0.346 & 0.343 & 0.375 & 0.349 & 0.384 & 0.372 & 0.376 & 0.385 & 0.380 & **0.402** \\ \cline{2-18} & Imprv & 20.725 & 19.642 & 12.087 & 17.897 & 17.896 & 17.209 & 16.188 & 17.709 & 7.200 & 15.199 & 4.699 & 8.066 & 6.915 & 4.425 & 5.799 & - \\ \hline \multirow{4}{*}{Tmall} & HR & 0.262 & 0.305 & 0.319 & 0.313 & 0.329 & 0.317 & 0.332 & 0.347 & 0.302 & 0.332 & 0.317 & 0.354 & 0.369 & 0.354 & 0.424 & **0.444** \\ \cline{2-18} & Imprv & 69.474 & 35.746 & 39.184 & 41.85\% & 34.95\% & 40.06\% & 33.73\% & 27.95\% & 47.02\% & 33.73\% & 40.06\% & 18.72\% & 20.33\% & 25.425 & 4.726 & - \\ \cline{2-18} & NDCG & 0.153 & 0.1189 & 0.191 & 0.190 & 0.191 & 0.198 & 0.206 & 0.188 & 0.179 & 0.178 & 0.221 & 0.222 & 0.229 & 0.399 & **0.292** \\ \cline{2-18} & Imprv & 17.24\% & 38.62\% & 37.17\% & 3.89\% & 33.67\% & 37.17\% & 32.32\% & 27.18\% & 41.62\% & 46.37\% & 47.19\% & 18.55\% & 18.00\% & 25.36\% & 25.26 & - \\ \hline \multirow{3}{*}{IJCAI} & HR & 0.285 & 0.392 & 0.449 & 0.448 & 0.455 & 0.469 & 0.429 & 0.452 & 0.461 & 0.481 & 0.475 & 0.481 & 0.463 & 0.489 & 0.519 & **0.554** \\ \cline{2-18} & Imprv & 94.99\% & 41.39\% & 23.98\% & 23.66\% & 21.70\% & 18.12\% & 29.14\% & 22.57\% & 20.17\% & 18.18\% & 18.168\% & 19.65\% & 13.58\% & 13.58\% & 9.643 & 13.29\% & 6.74\% & - \\ \cline{2-18} & NDCG & 0.183 & 0.284 & 0.287 & 0.288 & 0.304 & 0.260 & 0.285 & 0.292 & 0.04 & 0. \\ \hline \end{tabular}
\end{table} TABLE III: Overall performance comparison of all methods on BeiBei, Tmall, and IJCAI datasets in terms of _HR@10_ and _NDCG@10_ (Imprv denotes the relative improvement achieved by _MBRec_).
consistently outperforms baseline methods from various research lines by a significant margin. We attribute such performance improvement to the joint learning of multi-behavior inter-dependencies as well as the cross-layer collaborative signals under the graph neural network. For example, _MBRec_ achieves over 34% and 33% relative improvement in HR@10 and NDCG@10, respectively, compared to autoencoder-based recommendation models (_i.e._, AutoRec & CDAE) on the Tmall data. Additionally, for the results in terms of HR@10 on the IJCAI-Competition data, the consistent gain achieved by the developed _MBRec_ is around 20-22% over graph neural network-based CF models (ST-GCN and NGCF), and 18-29% over neural auto-regressive recommendation methods (NADE and CF-UIcA).
The proposed _MBRec_ also outperforms all other baseline methods that model multi-behavior data with respect to all metrics. The results show that our _MBRec_ allows the graph neural architecture to capture the multi-behavior interaction patterns, and successfully distinguishes the layer-wise representations. While MBGCN and NGCF+M are built over the graph neural network to model behavior correlations, they fall short in encoding the latent type-specific characteristics and cross-type behavior inter-dependencies simultaneously. The performance of NMTR and MATN is limited by their failure to consider the high-order collaborative effects over the multi-behavior interaction graph. Furthermore, our new version model _MBRec_ always achieves better recommendation accuracy than the simplified version GNMR, which also confirms the effectiveness of our designed component for cross-layer mutual relation modeling. We further evaluate the performance of our _MBRec_ and several representative baselines with different top-\(N\) positions. The results are reported in Table IV. The best performance is achieved by our framework under different settings. To further evaluate the performance of our MBRec framework, we make the performance comparison by varying the number of sampled negative instances. The evaluation results are shown in Table V. We can observe that our MBRec method consistently outperforms other alternative methods under different settings of negative samples in the range of {400, 800, 1600, 3200, 6400}. This observation validates the superiority of our MBRec in advancing the recommendation performance with the effective modeling of high-order heterogeneous collaborative relationships.
### _Ablation Study (RQ2)_
In this section, we examine whether each designed component helps improve the recommendation accuracy. Specifically, we generate four types of model variants of our _MBRec_ corresponding to different aspects of our model design:
\(\bullet\)**Impact of Multi-Channel Behavior Embedding**. To evaluate the effect of our multi-channel behavior embedding layer, we compare the proposed method with the variant (w/o-MCE). This variant discards the behavior semantic modeling with multi-channel representation spaces. As shown by the evaluation on three datasets in Table VI, we can observe that the results of _MBRec_ are better than those of the variant (w/o-MCE). It demonstrates that the encoding of type-specific behavior characteristics could facilitate the multi-behavior dependency modeling.
\(\bullet\)**Impact of Behavior Inter-dependency Modeling**. To investigate the rationality of our behavior inter-dependency modeling, our _MBRec_ is compared with another model implementation (w/o-BIM) by removing the multi-behavior attention network. From the results in Table VI, _MBRec_ outperforms w/o-BIM in all cases, which benefits from the user/item representation enhanced by the exploration of pairwise behavior relational structures.
\(\bullet\)**Impact of Behavior Pattern Fusion**. We generate another simplified implementation of our recommendation architecture: (w/o-BFu) that does not consider the aggregation layer for pattern aggregation across various types of behavior representations. Instead, the type-aware behavior representations are directly combined through the element-wise mean pooling. As expected, _MBRec_ achieves better recommendation accuracy as compared to the variant (w/o-BFu). It verifies the necessity of our embedding fusion scheme during our multi-behavior dependency modeling.
\(\bullet\)**Impact of High-order Mutual Relation Learning**. To evaluate the effect of augmenting the graph neural model by capturing the cross-layer collaborative relations, we generate another variant (w/o-HMR) by only generating the output from the highest graph order after the information propagation process. From the evaluation results, we can observe the efficacy of the designed mutual relation encoder in learning the contributions of order-specific embeddings for the final prediction result.
### _Analysis on Individual Behavior Context (RQ3)_
This section conducts ablation studies on the influence of type-specific behavior context on the recommendation performance. The compared model variants are generated with the following rubric: First, "+" behavior type means that we merely consider the target behaviors in the system to make
\begin{table}
\begin{tabular}{|l|c c|c c|c c|c c|c c|} \hline \# samples & \multicolumn{2}{c|}{400} & \multicolumn{2}{c|}{800} & \multicolumn{2}{c|}{1600} & \multicolumn{2}{c|}{3200} & \multicolumn{2}{c|}{6400} \\ \hline Model & HR & NDCG & HR & NDCG & HR & NDCG & HR & NDCG & HR & NDCG \\ \hline BiasMF & 0.285 & 0.140 & 0.149 & 0.078 & 0.091 & 0.049 & 0.055 & 0.032 & 0.036 & 0.022 \\ \hline NCF & 0.295 & 0.144 & 0.153 & 0.080 & 0.090 & 0.047 & 0.053 & 0.029 & 0.037 & 0.019 \\ \hline AutoRec & 0.245 & 0.122 & 0.134 & 0.074 & 0.076 & 0.047 & 0.050 & 0.033 & 0.036 & 0.025 \\ \hline ST-GCN & 0.311 & 0.183 & 0.184 & 0.093 & 0.104 & 0.053 & 0.058 & 0.031 & 0.036 & 0.019 \\ \hline MBGCN & 0.353 & 0.181 & 0.213 & 0.099 & 0.122 & 0.063 & 0.073 & 0.035 & 0.039 & 0.018 \\ \hline MATN & 0.339 & 0.165 & 0.192 & 0.093 & 0.113 & 0.056 & 0.058 & 0.031 & 0.037 & 0.020 \\ \hline GNMR & 0.345 & 0.176 & 0.218 & 0.100 & 0.103 & 0.055 & 0.063 & 0.040 & 0.024 \\ \hline _MBRec_ & **0.361** & **0.184** & **0.219** & **0.109** & **0.124** & **0.064** & **0.075** & **0.040** & **0.044** & **0.025** \\ \hline \end{tabular}
\end{table}
Table V: Performance comparison with different numbers of negative samples in terms of _HR@10_ and _NDCG@10_.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline Data & Beibei Data & Tmall Data & \multicolumn{2}{c|}{IJCAI Data} \\ \hline Metrics & HR & NDCG & HR & NDCG & HR & NDCG \\ \hline \hline w/o-MCE & 0.6549 & 0.3876 & 0.4399 & 0.2580 & 0.5420 & 0.3289 \\ w/o-BIM & 0.6696 & 0.4000 & 0.4391 & 0.2554 & 0.5358 & 0.3228 \\ w/o-BFu & 0.6572 & 0.3907 & 0.4238 & 0.2487 & 0.5494 & 0.3321 \\ w/o-HMR & 0.6169 & 0.3470 & 0.3856 & 0.2240 & 0.3445 & 0.1760 \\ \hline _MBRec_ & **0.6701** & **0.4021** & **0.4435** & **0.2624** & **0.5535** & **0.3376** \\ \hline \end{tabular}
\end{table}
Table VI: Ablation study on key components of MBRec.
predictions (_i.e._, +buy). Second, "-" behavior type indicates the removal of this certain type of user behaviors (_e.g._, -pv, -cart) from the recommendation architecture. For instance, -pv indicates that we do not include the page view behaviors in the interaction inter-dependency modeling. We present the evaluation results in terms of NDCG@N and HR@N when \(N=10\) on three real-world datasets in Figure 4. As shown in Table II, the number of behavior types is 3 (_i.e._, page view, add-to-cart, buy) on the BeiBei data and 4 (_i.e._, page view, add-to-cart, tag-as-favorite, buy) on the Tmall and IJCAI-Competition data. From the results, we can observe that each type of interaction behavior individually contributes to user preference learning, and integrating multiple behavior patterns brings further performance improvement.
### _Performance Under Different Sparsity (RQ4)_
In our experiments, we also evaluate the recommendation performance of different models under different interaction sparsity levels. Following similar settings in [34, 38], we first partition users into five groups based on the number of interactions. For example, "\(<\)36" and "\(<\)52" indicate that users belonging to these groups have numbers of interactions ranging from 1 to 35, and 36 to 51, respectively. We keep the same number of users in each group and select the corresponding ranges as shown on the x-axis of Figure 5. The total number of users contained in each group and the recommendation accuracy with respect to HR (Figure 5 (a)) and NDCG (Figure 5 (b)) are shown on the left side and right side of the y-axis in Figure 5. From the evaluation results, we can observe the superiority of our _MBRec_ across different sparsity levels. It suggests that incorporating multi-typed behavior patterns into user preference learning yields performance improvements compared with other baselines. In addition, we can observe that the overall performance of all compared methods shares a similar increasing trend as users have more interactions. This may indicate that more user behavior data helps characterize user preference with more accurate latent representations.
### _Analysis on Hyperparameters (RQ5)_
We study the impact of different hyperparameter settings on the model performance in our joint learning framework.
* **Comparison with Different Hidden Dimensionality**. Our model results with different dimension sizes of hidden states are shown in Figure 6. We observe that a larger embedding size does not always bring a positive effect on model performance, especially for sparse experimented datasets, since a larger hidden state dimensionality may lead to overfitting. We set the hidden dimensionality \(d=16\) as the default value in our _MBRec_.
* **Comparison with Different Graph Model Depth**. To investigate the performance of our MBRec method when stacking multiple graph neural layers, we conduct experiments by varying the number of graph-based embedding propagation layers. As shown in Table VII, we can observe that _MBRec_-2 and _MBRec_-3 obtain consistent improvements over _MBRec_-1, which merely considers the first-order neighbors for message passing. We attribute the performance improvement to the encoding of collaborative relations based on the second- and third-order neighboring node dependencies. With the further increase of model depth from two to three graph layers, the performance slightly degrades under the configuration of a deep graph neural architecture. The reason may lie in that a deep graph neural framework tends to overfit and suffer from over-smoothing in the generated user/item representations. According to the statistical information from our experimented Tmall data, with the consideration of three-hop connections, a large percentage of user-item pairs may be connected, which unavoidably leads to the over-smoothing issue of making user embeddings indistinguishable.
* **Comparison with Different Number of Channels**. We vary the number of embedding channels in our multi-channel behavior embedding layer. The results in terms of HR@10 and NDCG@10 are presented in Table VIII, from which we notice that the performance of _MBRec_ is
Figure 4: Impact study of diverse behavior types. There are three types of behaviors for BeiBei data, and four types of behaviors for Tmall, IJCAI data.
Figure 5: Performance comparison of MBRec and baseline methods _w.r.t_ different data sparsity levels on Tmall data.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline Data & \multicolumn{2}{c|}{BeiBei Data} & \multicolumn{2}{c|}{Tmall Data} & \multicolumn{2}{c}{IJCAI Data} \\ \hline Metrics & HR & NDCG & HR & NDCG & HR & NDCG \\ \hline \hline _MBRec_-1 & 0.662 & 0.394 & 0.383 & 0.226 & 0.533 & 0.313 \\ _MBRec_-2 & **0.670** & **0.402** & **0.444** & **0.262** & **0.554** & **0.338** \\ _MBRec_-3 & 0.664 & 0.398 & 0.408 & 0.237 & 0.543 & 0.332 \\ \hline \end{tabular}
\end{table}
Table VII: Effect of embedding propagation layers.
improved at first, with the increase of behavior representation channels, but the recommendation performance degrades with a further increase of channel numbers, due to overfitting. Hence, a moderate number of behavior embedding channels (8 in our setting) is enough for encoding interaction semantics.
### _Computational Cost Analysis (RQ6)_
Our evaluation also includes a computational cost investigation of our _MBRec_ model and several representative methods in terms of their running time on different datasets. We evaluate the computational cost of all compared methods on a machine with an NVIDIA TITAN RTX GPU, an Intel Xeon W-2133 CPU (3.6 GHz) and 64GB RAM. For a fair comparison, we apply the same setting of hidden state dimensionality for all methods. For graph-based methods, the number of embedding propagation layers is set as 2. From the reported evaluation results in Table IX, we can observe that our _MBRec_ model achieves comparable efficiency to other baselines in terms of running time. In particular, when competing with multi-behavior recommendation baselines (NGCF+M, MBGCN), our _MBRec_ requires less running time, which indicates the efficiency of our multi-behavior graph neural framework. Additionally, compared with the autoregressive collaborative filtering model CF-UIcA, our _MBRec_ still achieves competitive efficiency despite incorporating multi-typed behavior context. In summary, the above observations justify the scalability of our proposed _MBRec_ in dealing with large-scale user behavior data for recommendation.
### _Model Interpretation with User Study (RQ7)_
To analyze the multi-behavior dependency interpretation of our proposed MBRec framework, we conduct user studies with identified real user examples. We show the study results in Figure 7. In this figure, the cross-type behavior dependencies between users (\(u_{116}\), \(u_{1621}\)) and items (\(v_{13844}\), \(v_{64224}\)) are shown with the learned quantitative dependency weights. Specifically, \(E_{i}^{(1)}\) and \(E_{i}^{(2)}\) denote our produced user representations encoded from the \(1^{st}\) and \(2^{nd}\) graph-based embedding propagation layers, respectively. Similarly, \(E_{j}^{(1)}\) and \(E_{j}^{(2)}\) represent the encoded item embeddings from the \(1^{st}\) and \(2^{nd}\) message passing layers, respectively. From the study results, we summarize the key observations as follows:
* **Encoded Behavior Inter-Correlations**. In this user study, we present the learned behavior inter-correlation matrix with the dimension of \(\mathbb{R}^{4\times 4}\) to reflect the pairwise correlations between different types of user behaviors, _i.e._, page view, add-to-cart, tag-as-favorite and purchase.
* **Type-specific Behavior Pattern Fusion**. In our MBRec recommendation framework, we design the multi-behavior pattern aggregation module with the aim of integrating type-specific behaviour patterns for making final recommendation. In particular, each user is associated with a learned attention-based behavior importance vector with the dimension of \(\mathbb{R}^{1\times 4}\) (as shown in Figure 7). For example, we can observe that users who view item \(v_{13844}\) are more likely to purchase it compared with item \(v_{64224}\).
* **Cross-layer Mutual Relation Encoding**. In our multi-layer graph neural framework, we introduce a mutual relation encoding component to explicitly aggregate representations from different hops in the multi-behavior interaction graph. \(E_{i}^{(1)}\) and \(E_{i}^{(2)}\) represent the encoded embeddings of user \(u_{i}\) from his/her first- and second-order neighboring nodes. In Figure 7, the correlations among hop-aware user/item representations are shown with different connection lines. From the visualization results, we can observe that cross-layer user/item embeddings (_e.g._, \(E_{i}^{(0)}\) and \(E_{j}^{(1)}\)) are often highly correlated with each other compared with the embeddings of the same layer (_e.g._, \(E_{i}^{(2)}\) and \(E_{j}^{(2)}\)).
### _Effect of Graph Sampling (RQ8)_
In this section, we investigate the effect of our graph sampling algorithm on the model performance by testing the prediction accuracy of MBRec with different training and testing sub-graph sizes. Specifically, MBRec is trained with sub-graphs containing 5000, 10000, 20000, 40000 nodes, and is tested using input sub-graphs containing 5000, 10000,
\begin{table}
\begin{tabular}{l c c c} \hline \hline Models & BeiBei & Tmall & IJCAI \\ \hline NADE & 4.1s & 26.9s & 60.4s \\ CF-UIcA & 11.5s & 61.7s & 139.1s \\ ST-GCN & 12.6s & 58.5s & 94.8s \\ NGCF+M & 15.8s & 74.6s & 152.3s \\ NMTR & 14.0s & 37.3s & 118.0s \\ MBGCN & 17.4s & 85.3s & 186.5s \\ MATN & 11.5s & 74.7s & 196.5s \\ DIPN & 53.2s & 172.6s & 284.6s \\ \hline _MBRec_ & 14.3s & 58.6s & 101.1s \\ \hline \hline \end{tabular}
\end{table} TABLE IX: Computational cost (seconds) study.
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline Data & \multicolumn{2}{c|}{BeiBei Data} & \multicolumn{2}{c|}{Tmall Data} & \multicolumn{2}{c}{IJCAI Data} \\ \hline Metrics & HR & NDCG & HR & NDCG & HR & NDCG \\ \hline \hline _MBRec_-2 & 0.655 & 0.390 & 0.438 & 0.260 & 0.518 & 0.308 \\ _MBRec_-4 & 0.656 & 0.401 & 0.443 & 0.260 & 0.553 & 0.328 \\ _MBRec_-8 & **0.670** & **0.402** & **0.444** & **0.262** & **0.554** & **0.338** \\ _MBRec_-16 & 0.646 & 0.387 & 0.419 & 0.241 & 0.558 & 0.336 \\ \hline \end{tabular}
\end{table} TABLE VIII: Effect of behavior embedding channels.
Fig. 6: Impact of hidden state dimensionality of the MBRec framework on BeiBei, Tmall, IJCAI datasets.
20000, 40000, 60000 nodes. The results are shown in Table X, from which we can conclude that testing on larger sub-graphs always yields better performance, while training on larger sub-graphs does not always result in better performance. This is because training with smaller sub-graphs may serve as a form of regularization for the predictions.
## V Related Work
### _Neural Network Collaborative Filtering Models_
Collaborative Filtering (CF) has become one of the most important paradigms for personalized recommender systems in real-life platforms [32, 15]. The general idea of CF models is that users may share similar preferences if they interact with similar items [12]. In recent years, many efforts have been made to augment CF techniques with deep neural network models [27]. These methods apply different neural mechanisms (_e.g._, autoencoder, attention mechanism, graph neural network) in the matching function to parameterize users and items into a latent representation space. The learned representations of users and items can be used to estimate the likelihood of unobserved interactions.
Some studies follow this research line to enable non-linear feature interactions with multi-layer feed-forward networks, such as NCF [11] and DMF [47]. To incorporate item relational data into the CF model, the relational collaborative filtering (RCF [46]) framework designs a neural two-stage attention mechanism to enhance the item embedding process. Another recent research line of recommendation models is to explore the user-item interaction graph to capture the collaborative filtering signals. For example, NGCF [34] is developed based on a high-hop information propagation framework to guide the user/item representation procedure. ST-GCN [53] is another graph learning model to encode user-item interaction patterns with an encoder-decoder framework. In addition, to bridge logical reasoning and representation learning in recommender systems, a neural collaborative reasoning approach (NLR) [4] is proposed to incorporate logic priors into the neural architecture.
### _Recommendation with Multi-Behavior Modeling_
There exist some research works aiming at enhancing recommendation models by considering the multi-typed behaviors of users [31]. In those methods, the implicit user-item feedback from auxiliary behaviors (_e.g._, click, add-to-cart) is considered as behavior contextual signals to predict target user behaviors (_e.g._, purchase) [16, 45]. For example, multi-task learning frameworks are developed to perform joint training among the prediction tasks of different behavior types [7]. However, those methods rely on predefined dependent relationships between different types of user behaviors, and can hardly reflect the complex multi-behavior context in practical scenarios.
To capture the correlations between different types of behaviors, MATN [44] utilizes the attention network for multi-behavior information aggregation. Both browsing and buying behaviors of users are considered in DIPN [9] with an attention-based RNN model. However, the high-order behavior-dependent structures have been overlooked in these methods. In this work, the proposed MBRec framework aims to encode the high-order collaborative signals in the embedding function. Additionally, graph-based methods have been designed to tackle the multi-behavior recommendation problem. Specifically, Zhang _et al._ [54] employ the multiplex network embedding technique to generate behavior-aware embeddings. MBGCN [16] is built on the graph convolutional network to propagate the behavior embeddings over the interaction graph. Our new MBRec differs from those graph-based models in two perspectives: i) we discriminate the influence between various behavior patterns through a dual-stage relation learning scheme; the designed new message passing paradigm endows the multi-behavior graph neural network with the capability of encoding behavior-aware characteristics and dependencies simultaneously. ii) The high-order collaborative signals are aggregated across different graph layers explicitly, under the cross-layer message passing architecture.
### _Graph Neural Networks for Recommendation_
In view of the effectiveness of Graph Neural Networks (GNNs), GNNs have been widely used to perform the representation learning over the graph-structured data [35, 42, 22, 5, 21, 30, 18]. Recent research works apply the graph neural network to model user-item interactions in recommender systems: PinSage [50] is a graph convolutional network to propagate embeddings over the pin-board bipartite
\begin{table}
\begin{tabular}{|c|c c|c c|c c|c c|c c|} \hline \multirow{2}{*}{Train \(N\) \(\backslash\) Test \(N\)} & \multicolumn{2}{c|}{5,000} & \multicolumn{2}{c|}{10,000} & \multicolumn{2}{c|}{20,000} & \multicolumn{2}{c|}{40,000} & \multicolumn{2}{c|}{60,000} \\ \cline{2-11} & HR & NDCG & HR & NDCG & HR & NDCG & HR & NDCG & HR & NDCG \\ \hline \hline 5,000 & 0.365 & 0.201 & 0.409 & 0.230 & 0.463 & 0.266 & 0.527 & 0.310 & 0.552 & 0.338 \\ \hline 10,000 & 0.359 & 0.198 & 0.407 & 0.229 & 0.464 & 0.266 & 0.529 & 0.307 & 0.552 & 0.336 \\ \hline 20,000 & 0.357 & 0.196 & 0.407 & 0.228 & 0.466 & 0.270 & 0.537 & 0.326 & 0.554 & 0.338 \\ \hline 40,000 & 0.322 & 0.168 & 0.367 & 0.197 & 0.424 & 0.236 & 0.500 & 0.292 & 0.548 & 0.330 \\ \hline \end{tabular}
\end{table}
Table X: Influence of the sub-graph sampling scale.
Figure 7: Interpretation study of multi-behavior inter-dependency in our _MBRec_ _w.r.t._ behavior inter-dependency modeling, behavior pattern aggregation and cross-layer mutual relation learning. Dark color indicates higher relevance. Best viewed in color.
graph structure. Additionally, modeling the dynamic user-item interactions has attracted much attention for recommender systems [3, 48]. To encode the sequential patterns, graph neural networks have been utilized to consider the transitions between items of session sequences in SRGNN [40] and MTD [14], or user interaction sequence in H2SeqRec [20]. In addition, the graph diffusion network [38] and graph attention mechanism [39] have been utilized to capture the influence among users, so as to incorporate the social relations into the recommendation and alleviate the data sparsity issue.
## VI Conclusion
In this work, we contribute a new end-to-end framework (MBRec) for multi-behavior recommendation via the modeling of cross-behavior inter-dependencies under a high-order graph learning architecture. In our MBRec model, we first learn the dependent relationships among various types of user interactions with a behavior-aware message passing mechanism. Additionally, a high-order mutual relation learning scheme is integrated with the graph neural architecture, so as to encode the implicit dependencies between layer-specific behavior representations. When evaluated on three real-world datasets, our framework achieves significantly better recommendation performance as compared to various baselines. Further model ablation studies show the rationality of the designed key components in our proposed recommendation framework. In the future, we would like to integrate causal effect analysis [1] with our designed multi-behavior graph neural paradigm, in order to infer causal relations from observed user behaviors and identify the implicit factors that influence user preference.
## Acknowledgments
We thank the reviewers for their valuable feedback and comments. This research work is supported by the research grants from the Department of Computer Science & Musketeers Foundation Institute of Data Science at the University of Hong Kong (HKU). The research is also partially supported by the National Natural Science Foundation of China (62072188), the Major Project of the National Social Science Foundation of China (18ZDA062), and the Science and Technology Program of Guangdong Province (2019A050510010).
|
2310.04521 | Lie Neurons: Adjoint-Equivariant Neural Networks for Semisimple Lie
Algebras | This paper proposes an equivariant neural network that takes data in any
semi-simple Lie algebra as input. The corresponding group acts on the Lie
algebra as adjoint operations, making our proposed network adjoint-equivariant.
Our framework generalizes the Vector Neurons, a simple
$\mathrm{SO}(3)$-equivariant network, from 3-D Euclidean space to Lie algebra
spaces, building upon the invariance property of the Killing form. Furthermore,
we propose novel Lie bracket layers and geometric channel mixing layers that
extend the modeling capacity. Experiments are conducted for the
$\mathfrak{so}(3)$, $\mathfrak{sl}(3)$, and $\mathfrak{sp}(4)$ Lie algebras on
various tasks, including fitting equivariant and invariant functions, learning
system dynamics, point cloud registration, and homography-based shape
classification. Our proposed equivariant network shows wide applicability and
competitive performance in various domains. | Tzu-Yuan Lin, Minghan Zhu, Maani Ghaffari | 2023-10-06T18:34:27Z | http://arxiv.org/abs/2310.04521v3 | # Lie Neurons: Adjoint-Equivariant Neural Networks for Semisimple Lie Algebras
###### Abstract
This paper proposes an adjoint-equivariant neural network that takes Lie algebra data as input. Various types of equivariant neural networks have been proposed in the literature, which treat the input data as elements in a vector space carrying certain types of transformations. In comparison, we aim to process inputs that are transformations between vector spaces. The change of basis on transformation is described by conjugations, inducing the adjoint-equivariance relationship that our model is designed to capture. Leveraging the invariance property of the Killing form, the proposed network is a general framework that works for arbitrary semisimple Lie algebras. Our network possesses a simple structure that can be viewed as a Lie algebraic generalization of a multi-layer perceptron (MLP). This work extends the application of equivariant feature learning. As an example, we showcase its value in homography modeling using \(\mathfrak{sl}(3)\) Lie algebra.
## 1 Introduction
Respecting the symmetry in data is essential for deep learning models to understand the underlying objects. The fundamental rules of a process and the definition of a concept are typically embedded in certain types of symmetries. For example, the conservation of mechanical energy is independent of the choice of inertial reference frame. The internal angles of an equilateral triangle are 60 degrees regardless of its orientation and size. Equivariant networks refer to deep learning models that decouple certain types of transformations from the variance in the data so that the essential characteristics can be better learned from the preserved symmetry. The most well-known example of equivariance in deep learning is the convolutional neural networks (CNNs), which are translation-equivariant, enabling stable image features regardless of the pixel positions in the image plane. CNNs are prevalent, and the term "equivariant networks" is usually reserved for models that extend equivariance beyond translation. Typical extensions include rotation (Cohen et al., 2017) and scale (Worrall and Welling, 2019) equivariance, while more general extensions are also explored (MacDonald et al., 2022).
Compared with non-equivariant models, which impose no structure on the input data, equivariant models regard the input data as elements in a vector space (typically represented as vectors) that carries certain types of transformation (typically represented as matrices). The set of transformations often belongs to a group. A group action on the data is realized through (left) matrix multiplication on the vectors, which can be viewed as a change of basis for the vectors. The equivariance property preserves such structures in both the input and output spaces. This is a general formulation that is true for most existing equivariant models and has gained success in various domains, including but not limited to the modeling of molecules (Thomas et al., 2018), physical systems (Finzi et al., 2020), social networks (Maron et al., 2018), images (Worrall et al., 2017), and point clouds (Zhu et al., 2023).
In this paper, we propose a new type of equivariant model that captures the symmetry in conjugacy classes. _We view the input data of our model not as elements in a vector space but as transformations (maps) between vector spaces, typically represented as matrices for linear groups_. Accordingly, group action on the input data stands for a change of basis on the transformations, which is also known as conjugation. The input data as transformations, when viewed as a continuous symmetry group, form a Lie group. To facilitate the numerical computation, the proposed network takes the
transformations to the Lie algebra. The Lie algebraic approach is particularly attractive as it enables working in vector spaces and exploiting their isomorphism with \(\mathbb{R}^{m}\) to handle data in typical vector form. However, we reiterate that our network models the conjugation equivariance on transformation inputs or, equivalently, the adjoint equivariance on the corresponding Lie algebra. One way to intuitively understand the relation between existing equivariant networks and our proposed network is illustrated in Figure 1.
Overall, the contributions of our paper are summarized as follows:
* We propose a new adjoint-equivariant network architecture, enabling the processing of input data that represent transformations.
* We develop new network designs using the Killing form and the Lie bracket structure for equivariant activation and invariant layers for Lie algebraic representation learning.
* The proposed network models the equivariance of any semisimple Lie algebras, which relaxes the requirement in previous work Deng et al. (2021) for the Lie group to be compact.
* We open-source our code at: [https://github.com/UMich-CURLY/LieNeurons](https://github.com/UMich-CURLY/LieNeurons).
## 2 Related Work
Equivariant networks enable the model output to change in a predicted way as the input goes through certain transformations. This means that the model, by construction, generalizes over the variations caused by those transformations. Therefore, it reduces the sampling complexity in learning and improves the robustness and transparency facing input variations. Cohen and Welling (2016) initiated the idea to generalize equivariance in deep learning models, realizing equivariance to 90-degree rotations in a 2D image plane using group convolution. This method works with discrete transformations by augmenting the input domain with a dimension for the set of transformations. The approach is generalized to other discretized groups in \(SE(2)\), \(SE(3)\), and \(E(3)\)(Hoogeboom et al., 2018; Winkels and Cohen, 2018; Worrall and Brostow, 2018; Chen et al., 2021). Steerable convolution is proposed in Cohen and Welling (2016), leveraging the irreducible representations to remove the need for discretization and facilitate equivariant convolution on continuous groups in the frequency domain (Worrall et al., 2017; Cohen et al., 2017; Weiler et al., 2018). Beyond convolutions, more general equivariant network architectures are proposed, for example, Fuchs et al. (2020); Hutchinson et al. (2021); Chatzipantzis et al. (2022) for transformers and Batzner et al. (2022); Brandstetter et al. (2021) for message passing networks. Vector Neurons (Deng et al., 2021) designs a multi-layer perceptron (MLP) and graph network that generalize the scalar features to 3D features to realize
Figure 1: Comparison between existing equivariant networks and our work. Red represents the underlying objects to be studied by the models. For example, a commonly studies type of object in existing equivariant networks is shapes. For our work, the studied object is transformations. Given a reference frame, we obtain an observation of the studied object illustrated in yellow. For existing equivariant networks, the inputs are represented as vectors (including tensors). For our work, the inputs are represented as matrices. Change of basis, illustrated in blue, acts on vectors by left multiplication while acting on transformations by conjugation.
\(SO(3)\)-equivariance on spatial data. The above works mainly focus on compact groups, on which more general recipes for building equivariant layers that are not limited to a specific group are also proposed (Kondor and Trivedi, 2018; Cohen et al., 2019; Weiler and Cesa, 2019; Xu et al., 2022; Lang and Weiler, 2020). The extension of equivariance beyond compact groups is also explored. Finzi et al. (2021) constructs MLPs equivariant to arbitrary matrix groups using their finite-dimensional representations. With the Monte Carlo estimator, equivariant convolutions are generalized to matrix groups with surjective exponential maps (Finzi et al., 2020) and all finite-dimensional Lie groups (MacDonald et al., 2022), where Lie algebra is used to parameterize elements in the continuous Lie groups as a lifted domain from the input space.
Our model structure resembles the MLP style of Vector Neurons (Deng et al., 2021), but our work models the equivariance of arbitrary semisimple groups under adjoint actions. Lie algebra is the input space of our network, representing transformation data.
## 3 Preliminaries
In this section, we provide some preliminaries for Lie groups. Specifically, we focus on matrix Lie groups. For more detailed explanations, we refer the readers to Hall (2013); Rossmann (2006); Kirillov (2008).
### Lie Group and Lie Algebra
A Lie group \(\mathcal{G}\) is a smooth manifold whose elements satisfy the group axioms. Because of this, a special vector space naturally arises at the identity of every Lie group named the Lie algebra, denoted \(\mathfrak{g}\). A Lie algebra locally captures the structure of the Lie group. It is possible to move from a Lie group to a Lie algebra around the identity and vice versa. This is achieved by employing the exponential and log maps:
\[\text{Exp}:\quad\mathfrak{g}\to\mathcal{G},\quad X\mapsto\text{ Exp}(X) \tag{1}\] \[\text{Log}:\quad\mathcal{G}\to\mathfrak{g},\quad a\mapsto\text{ Log}(a). \tag{2}\]
For a matrix Lie group, the exponential map is the matrix exponential, and the log map is the matrix logarithm. In addition to the exponential map, every Lie algebra is equipped with an asymmetric binary operator called the Lie bracket:
\[[\cdot,\cdot]:\quad\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}. \tag{3}\]
The elements of the Lie algebra have non-trivial structures. However, since the Lie algebra is a vector space, for a Lie algebra of dimension \(m\), we can always represent it using \(\mathbb{R}^{m}\) given appropriate bases. As a result, we introduce two useful functions:
\[\text{Vee}:\quad\mathfrak{g}\to\mathbb{R}^{m},\quad x^{\wedge} \mapsto(x^{\wedge})^{\vee}=\sum_{i=1}^{m}x_{i}e_{i} \tag{4}\] \[\text{Hat}:\quad\mathbb{R}^{m}\to\mathfrak{g},\quad x\mapsto x^{ \wedge}=\sum_{i=1}^{m}x_{i}E_{i}, \tag{5}\]
where \(e_{i}\) are the canonical basis of \(\mathbb{R}^{m}\) and \(E_{i}=(e_{i})^{\wedge}\in\mathfrak{g}\). Using the Hat and Vee maps, we can represent an element of the Lie algebra in a neural network using \(\mathbb{R}^{m}\), while performing structure-preserving operations on \(\mathfrak{g}\).
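As a concrete instance, below is a minimal NumPy sketch of the Hat and Vee maps in Eqs. (4)-(5) for \(\mathfrak{so}(3)\) (\(m=3\)) with the standard skew-symmetric basis:

```python
import numpy as np

# Basis of so(3): E_i = hat(e_i), the skew-symmetric generators
E = np.array([[[0, 0, 0], [0, 0, -1], [0, 1, 0]],
              [[0, 0, 1], [0, 0, 0], [-1, 0, 0]],
              [[0, -1, 0], [1, 0, 0], [0, 0, 0]]], dtype=float)

def hat(x):
    """R^3 -> so(3):  x  |->  sum_i x_i E_i  (Eq. (5))."""
    return np.einsum('i,ijk->jk', x, E)

def vee(X):
    """so(3) -> R^3, the inverse of hat (Eq. (4))."""
    return np.array([X[2, 1], X[0, 2], X[1, 0]])

x = np.array([0.1, -0.2, 0.3])
assert np.allclose(vee(hat(x)), x)
```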
### Adjoint Representation
Given an element of the Lie algebra \(X\in\mathfrak{g}\) and its corresponding Lie group \(\mathcal{G}\), every \(a\in\mathcal{G}\) defines an automorphism of the Lie algebra \(Ad_{a}:\mathfrak{g}\to\mathfrak{g}\) by
\[Ad_{a}(X)=aXa^{-1}. \tag{6}\]
This is called the adjoint action of the group \(\mathcal{G}\) on the Lie algebra \(\mathfrak{g}\). It represents the change of basis operations on the algebra. Since the adjoint \(Ad_{a}\) is linear, we can find a matrix that maps the \(\mathbb{R}^{m}\) representation of the Lie algebra to another. That is, for every \(Ad_{a}\) and \(X\in\mathfrak{g}\), we have
\[Adm_{a}:\quad\mathbb{R}^{m}\to\mathbb{R}^{m},\quad x\mapsto Adm_{a}x, \tag{7}\]
with \(Adm_{a}\in\mathbb{R}^{m\times m}\), \(x^{\wedge}=X\) and \(Adm_{a}x=(ax^{\wedge}a^{-1})^{\vee}\). This is an important property as it allows us to model the group adjoint action using a matrix multiplication on \(\mathbb{R}^{m}\), which enables the adjoint equivariant layer design in Section 4.
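Continuing the \(\mathfrak{so}(3)\) sketch above (reusing `hat`, `vee`, and the basis `E`), the matrix \(Adm_{a}\) of Eq. (7) can be assembled column by column; for \(SO(3)\) it coincides with the rotation matrix \(a\) itself:

```python
import numpy as np
from scipy.linalg import expm

def Adm(a):
    """Matrix of Ad_a on R^m (Eq. (7)): column i is vee(a E_i a^{-1})."""
    a_inv = np.linalg.inv(a)
    return np.column_stack([vee(a @ E[i] @ a_inv) for i in range(len(E))])

a = expm(hat(np.array([0.3, -0.1, 0.2])))        # a rotation in SO(3)
x = np.array([0.5, 1.0, -0.7])
assert np.allclose(Adm(a) @ x, vee(a @ hat(x) @ np.linalg.inv(a)))
assert np.allclose(Adm(a), a)                    # special to SO(3)
```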
Conversely, if we view the \(Ad\) as a function of a group element \(a\in\mathcal{G}\), it maps the group element to a Lie algebra automorphism:
\[Ad:\mathcal{G}\rightarrow\mathtt{Aut}(\mathfrak{g}),\quad a\mapsto Ad_{a}. \tag{8}\]
This \(Ad\) is called the _adjoint representation_ of the group. Similarly, we can obtain the adjoint representation of the Lie algebra by differentiating the adjoint representation of the group at the identity:
\[ad:\mathfrak{g}\rightarrow\mathtt{Der}(\mathfrak{g}),\quad X\mapsto ad_{X}( \cdot)=[X,\cdot]\,, \tag{9}\]
where \(\mathtt{Der}(\mathfrak{g})\) is the Lie algebra of \(\mathtt{Aut}(\mathfrak{g})\), and \([\cdot,\cdot]\) is the Lie bracket of the Lie algebra. For a matrix group, the Lie bracket is defined by the commutator: \([X,Y]=XY-YX\). It is worth noticing that the Lie bracket is equivariant under the group adjoint action.
### Killing Form
If a Lie algebra \(\mathfrak{g}\) is of finite dimension and associated with a field \(\mathbb{R}\), a symmetric bilinear form called the _Killing form_ is defined as:
\[B(X,Y):\mathfrak{g}\times\mathfrak{g}\rightarrow\mathbb{R},\quad(X,Y) \mapsto\mathrm{tr}(ad_{X}\circ ad_{Y}) \tag{10}\]
**Definition 1**: _A bilinear form \(B(X,Y)\) is said to be non-degenerate iff \(B(X,Y)=0\) for all \(Y\in\mathfrak{g}\) implies \(X=0\)._
**Theorem 1** (Kirillov (2008)): _A Lie algebra is semisimple iff the Killing form is non-degenerate.1_
Footnote 1: This is also known as the _Cartan’s Criterion_.
**Theorem 2** (Kirillov (2008)): _The Killing form is invariant under the group adjoint action \(Ad_{a}\) for all \(a\in\mathcal{G}\), i.e.,_
\[B(Ad_{a}\circ X,Ad_{a}\circ Y)=B(X,Y).\]
If the Lie group is also compact, the Killing form is negative definite, and the inner product naturally arises from the negative of the Killing form.
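Continuing the same \(\mathfrak{so}(3)\) sketch, the Killing form of Eq. (10) can be evaluated through the \(ad\) matrices of Eq. (9), and the adjoint invariance of Theorem 2 checked numerically:

```python
import numpy as np
from scipy.linalg import expm
# reuses hat, vee, and the basis E from the sketch above

def ad_matrix(X):
    """ad_X as an m x m matrix (Eq. (9)): column i is vee([X, E_i])."""
    return np.column_stack([vee(X @ E[i] - E[i] @ X) for i in range(len(E))])

def killing(X, Y):
    """B(X, Y) = tr(ad_X ad_Y)  (Eq. (10))."""
    return np.trace(ad_matrix(X) @ ad_matrix(Y))

X, Y = hat(np.array([0.2, 0.1, -0.4])), hat(np.array([-0.3, 0.5, 0.1]))
a = expm(hat(np.array([0.1, 0.7, -0.2])))
a_inv = np.linalg.inv(a)
# Theorem 2: invariance under the group adjoint action
assert np.isclose(killing(a @ X @ a_inv, a @ Y @ a_inv), killing(X, Y))
# compact case: for so(3), B(hat(x), hat(y)) = -2 x . y, i.e., negative definite
```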
## 4 Methodology
We present Lie Neurons (LN), a general adjoint-equivariant neural network on Lie algebras. It is greatly inspired by Vector Neurons (VN) (Deng et al., 2021). The Lie Neurons generalize the 3-dimensional \(SO(3)\) equivariant VN to a \(K\)-dimensional network that is equivariant by construction for any semisimple Lie algebra. Different from the VN, the Lie Neurons take elements of a Lie algebra as inputs, and they capture the equivariance of a Lie group under the adjoint action. We will discuss how the Lie Neuron almost specializes to the VN later in Section 4.5. Here, we start by describing the network structure of Lie Neurons.
The standard multilayer perceptron (MLP) networks are constructed with scalar neurons, \(z\in\mathbb{R}\). For each layer, the neurons are concatenated in the feature dimension into a \(C^{(d)}\)-dim vector \(\mathbf{z}\in\mathbb{R}^{C^{(d)}}\), where \(d\) denotes the layer index. Vector Neurons lift the neuron representation from scalars to \(\mathbb{R}^{3}\). For an input composed of a set of \(N\) points (e.g., a point cloud), the features learned from a VN layer are of shape \(\mathbb{R}^{3\times C^{(d)}\times N}\).
The Lie Neurons generalize Vector Neurons. Each Lie Neuron, \(X\in\mathfrak{g}\), is an element of a semisimple Lie algebra. A Lie algebra is a vector space with non-trivial structures. However, using (4), we can express the neuron as \(x=X^{\vee}\in\mathbb{R}^{K}\) with appropriate bases. Similar to Vector Neurons, the features learned from a Lie Neuron layer are \(\mathcal{X}^{(d)}=\{\mathbf{x}_{i}\}_{i=1}^{N}\in\mathbb{R}^{K\times C^{(d)}\times N}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{K\times C^{(d)}}\).
By construction, the LN are equivariant to the group adjoint action (also known as similarity transform in matrix groups). In particular, for a given element \(a\) in a semisimple Lie group \(\mathcal{G}\), we have
\[f(a\mathcal{X}a^{-1};\theta)=af(\mathcal{X};\theta)a^{-1}, \tag{11}\]
where \(f\) is the function defined by an LN model with parameters \(\theta\). 2 The Lie Neurons framework consists of a linear layer, two nonlinear activation layers, a pooling layer, and an invariant layer. We start by discussing the linear layers as follows.
Footnote 2: We slightly abuse the notation here by setting \(a\mathcal{X}a^{-1}=\{\{(a(x_{ij})^{\wedge}a^{-1})^{\vee}\}_{i=1}^{C^{(d)}}\}_{j=1}^ {N}\).
### Linear Layers
Linear layers are the basic building blocks of an MLP. A linear layer has a learnable weight matrix \(\mathbf{W}\in\mathbb{R}^{C\times C^{\prime}}\), which operates on input features \(\mathbf{x}\in\mathbb{R}^{K\times C}\) by matrix multiplication on the channel dimension:
\[\mathbf{x}^{\prime}=f_{\text{LN-Linear}}(\mathbf{x};\mathbf{W})=\mathbf{x} \mathbf{W}\in\mathbb{R}^{K\times C^{\prime}}. \tag{12}\]
Recalling (7), if we use the vector representation \(x\in\mathbb{R}^{K}\) for \(X\in\mathfrak{g}\), we can always find a linear adjoint matrix \(Adm_{a}\in\mathbb{R}^{K\times K}\) such that \(Adm_{a}x=(aXa^{-1})^{\vee}\). As a result, the adjoint action on the linear layer becomes:
\[\begin{split} f_{\text{LN-Linear}}(Ad_{a}(\mathbf{x});\mathbf{W })&=f_{\text{LN-Linear}}(Adm_{a}\mathbf{x};\mathbf{W})\\ &=Adm_{a}\mathbf{x}\mathbf{W}\in\mathbb{R}^{K\times C^{\prime}} \\ &=Adm_{a}f_{\text{LN-Linear}}(\mathbf{x};\mathbf{W})\\ &=Ad_{a}(f_{\text{LN-Linear}}(\mathbf{x};\mathbf{W})),\end{split} \tag{13}\]
which proves the equivariant property of the linear layer. It is worth mentioning that we ignore the bias term to preserve the equivariance. Lastly, similar to the Vector Neurons, the weights may or may not be shared across the elements \(\mathbf{x}\) of \(\mathcal{X}\).
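The following is a minimal NumPy sketch of (12) and the equivariance argument in (13); the shapes follow the text, and a random matrix stands in for an actual linear adjoint matrix \(Adm_{a}\):

```python
import numpy as np

def ln_linear(x, W):
    """LN-Linear, eq. (12): x in R^{K x C}, W in R^{C x C'}; the K (algebra) axis
    is untouched, so any adjoint matrix commutes with the layer."""
    return x @ W

# Equivariance check mirroring (13).
K, C, Cp = 8, 4, 6
rng = np.random.default_rng(0)
x, W = rng.normal(size=(K, C)), rng.normal(size=(C, Cp))
Adm = rng.normal(size=(K, K))   # stand-in for Adm_a
assert np.allclose(Adm @ ln_linear(x, W), ln_linear(Adm @ x, W))
```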
### Nonlinear Layers
Nonlinear layers enable the neural network to approximate complicated functions. We propose two designs for the equivariant nonlinear layers, LN-ReLU and LN-Bracket.
#### 4.2.1 LN-ReLU: Nonlinearity based on the Killing form
We can use an invariant function to construct an equivariant nonlinear layer. The VN leverages the inner product in a standard vector space, which is invariant to \(SO(3)\), to design a vector ReLU nonlinear layer. We generalize this idea by replacing the inner product with the negative of the Killing form. As described in Section 3, the negative Killing form falls back to the inner product for compact semisimple Lie groups, and it is invariant to the group adjoint action.
For an input \(\mathbf{x}\in\mathbb{R}^{K\times C}\), a Killing form \(B(\cdot,\cdot)\), and a learnable weight \(U\in\mathbb{R}^{C\times C}\), the nonlinear layer \(f_{\text{LN-ReLU}}\) is defined as:
\[f_{\text{LN-ReLU}}(\mathbf{x})=\begin{cases}\mathbf{x},&\text{if }B(\mathbf{x},d)\leq 0\\ \mathbf{x}+B(\mathbf{x},d)d,&\text{otherwise},\end{cases} \tag{14}\]
where \(d=\mathbf{x}U\in\mathbb{R}^{K\times C}\) is the learnable direction and \(B(\cdot,\cdot)\) is evaluated channel-wise. Optionally, we can share the learned direction across channels by setting \(U\in\mathbb{R}^{C\times 1}\).
From Theorem 2, we know the Killing form is invariant under the group adjoint action, and the equivariance of the learned direction is proven in (13). Therefore, the second branch of (14) is a linear combination of two equivariant quantities. As a result, the nonlinear layer is equivariant to the adjoint action.
We can also construct variants of ReLU, such as the leaky ReLU in the following form:
\[f_{\text{LN-LeakyReLU}}(\mathbf{x})=\alpha\mathbf{x}+(1-\alpha)f_{\text{LN-ReLU}}(\mathbf{x}). \tag{15}\]
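A hedged NumPy sketch of (14)-(15); here the Killing form is represented by a coordinate matrix `Q` with \(B(u,v)=u^{T}Qv\) (an assumption of this sketch; for \(\mathfrak{so}(3)\) in the basis used earlier, \(Q=-2I\)):

```python
import numpy as np

def ln_relu(x, U, Q, alpha=0.0):
    """LN-ReLU, eqs. (14)-(15): x in R^{K x C}; B(u, v) = u^T Q v in coordinates;
    alpha > 0 gives the leaky variant."""
    d = x @ U                                # learnable directions, R^{K x C}
    b = np.einsum('kc,kl,lc->c', x, Q, d)    # B(x[:, c], d[:, c]) per channel
    out = np.where(b <= 0, x, x + b * d)     # broadcasts over the K axis
    return alpha * x + (1 - alpha) * out

# Sanity check on so(3): Q = -2 I, and Adm_a is the rotation itself.
rng = np.random.default_rng(1)
x, U, Q = rng.normal(size=(3, 5)), rng.normal(size=(5, 5)), -2 * np.eye(3)
c, s = np.cos(0.4), np.sin(0.4)
R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
assert np.allclose(R @ ln_relu(x, U, Q), ln_relu(R @ x, U, Q))
```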
#### 4.2.2 LN-Bracket: Nonlinearity based on the Lie bracket
As introduced in Section 3, Lie algebra is a vector space with an extra binary operator called Lie bracket, which is equivariant under group adjoint actions. For a matrix Lie group, the Lie bracket
of its Lie algebra is defined using the commutator: \([X,Y]=XY-YX\). We use this operation to build a novel nonlinear layer.
We use two learnable weight matrices \(U,V\in\mathbb{R}^{C\times C}\) to map the input to different Lie algebra vectors, \(\mathbf{u}=\mathbf{x}U,\mathbf{v}=\mathbf{x}V\). The Lie bracket of \(\mathbf{u}\) and \(\mathbf{v}\) becomes a nonlinear function on the input: \(\mathbf{x}\mapsto[(\mathbf{x}U)^{\wedge},(\mathbf{x}V)^{\wedge}]\). Theoretically, we could directly use it as our nonlinear layer. However, we note that the Lie bracket essentially captures _the failure of matrices to commute_ (Guggenheimer, 2012), and that \([X,X]=0,\forall X\). We find that when using two learnable Lie algebra elements from the same input, the Lie bracket cancels out most of the information and only passes through the residual. As a result, we add a residual path to enhance the information flow, inspired by ResNet (He et al., 2016). The final design of the LN-Bracket layer becomes:
\[f_{\text{LN-Bracket}}(\mathbf{x})=\mathbf{x}+[(\mathbf{x}U)^{\wedge},(\mathbf{ x}V)^{\wedge}]^{\vee}. \tag{16}\]
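A minimal sketch of (16) on \(\mathfrak{so}(3)\), where the bracket reduces to the cross product; the `hat`/`vee` helpers are the illustrative ones from before:

```python
import numpy as np

hat = lambda x: np.array([[0., -x[2], x[1]], [x[2], 0., -x[0]], [-x[1], x[0], 0.]])
vee = lambda X: np.array([X[2, 1], X[0, 2], X[1, 0]])

def ln_bracket(x, U, V):
    """LN-Bracket, eq. (16): residual plus channel-wise commutator
    [u^, v^] = u^ v^ - v^ u^ of two learned maps of the input."""
    u, v = x @ U, x @ V                      # R^{K x C} each
    brk = np.stack([vee(hat(uc) @ hat(vc) - hat(vc) @ hat(uc))
                    for uc, vc in zip(u.T, v.T)], axis=1)
    return x + brk                           # residual path of eq. (16)

rng = np.random.default_rng(4)
x, U, V = rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
out = ln_bracket(x, U, V)                    # same shape as the input, R^{3 x 4}
```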
The nonlinear layer is often combined with a linear layer to form a module. In the rest of the paper, we will use LN-LR to denote an LN-Linear followed by an LN-ReLU, and LN-LB to denote an LN-Linear with an LN-Bracket layer.
### Pooling Layers
Pooling layers provide a means to aggregate global information across the \(N\) observation points within one measurement. This can be done by mean pooling, which is adjoint equivariant. In addition, we also introduce a max pooling layer. For each input \(\mathcal{X}=\{\mathbf{x}_{n}\}_{n=1}^{N}\in\mathbb{R}^{K\times C\times N}\), and a weight matrix \(\mathbf{W}\in\mathbb{R}^{C\times C}\), we learn a set of directions as: \(\mathcal{D}=\{\mathbf{x}_{n}\mathbf{W}\}_{n=1}^{N}\).
We again employ the Killing form, \(B(\cdot,\cdot)\), as the invariant function. For each channel \(c\in[C]\), we have the max pooling function as \(f_{\text{LN-Max}}(\mathcal{X})\left[c\right]=\mathbf{x}_{n^{*}}\left[c\right]\), where
\[n^{*}(c)=\operatorname*{arg\,max}_{n}\;B((\mathbf{x}_{n}\mathbf{W})\left[c\right],\mathbf{x}_{n}\left[c\right]). \tag{17}\]
\(\mathbf{x}_{n}\left[c\right]\) denotes the input at channel \(c\). Max pooling reduces the feature set from \(\mathcal{X}\in\mathbb{R}^{K\times C\times N}\) to \(\mathcal{X}\in\mathbb{R}^{K\times C\times 1}\). The layer is equivariant to the adjoint action due to the invariance of \(B(\cdot,\cdot)\).
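A hedged sketch of the max pooling layer, again representing \(B\) by a coordinate matrix `Q` (our assumption):

```python
import numpy as np

def ln_max(X, W, Q):
    """LN max pooling, eq. (17): X in R^{K x C x N}, W in R^{C x C},
    B(u, v) = u^T Q v; per channel, keep the point maximizing B(d_n[c], x_n[c])."""
    D = np.einsum('kcn,cd->kdn', X, W)             # learned directions
    scores = np.einsum('kcn,kl,lcn->cn', D, Q, X)  # B per channel and point
    n_star = scores.argmax(axis=1)                 # winning point per channel
    return X[:, np.arange(X.shape[1]), n_star][..., None]  # R^{K x C x 1}
```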
### Invariant Layers
Equivariant layers allow steerable feature learning. However, some applications demand invariant features (Lin et al., 2023; Zheng et al., 2022; Li et al., 2021). We introduce an invariant layer that can be attached to the network when necessary. Given an input \(\mathbf{x}\in\mathbb{R}^{K\times C}\), we have:
\[f_{\text{LN-Inv}}(\mathbf{x})=B(\mathbf{x},\mathbf{x}), \tag{18}\]
where \(B(\cdot,\cdot)\) is the adjoint-invariant Killing form, evaluated channel-wise, and \(f_{\text{LN-Inv}}(\mathbf{x})\in\mathbb{R}^{1\times C}\).
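A one-line sketch of (18) in the same coordinate convention, with an invariance check on \(\mathfrak{so}(3)\):

```python
import numpy as np

def ln_inv(x, Q):
    """LN-Inv, eq. (18): channel-wise B(x, x); maps R^{K x C} to R^{1 x C}."""
    return np.einsum('kc,kl,lc->c', x, Q, x)[None, :]

# Invariance check on so(3): Q = -2 I, Adm_a = R.
rng = np.random.default_rng(3)
x, Q = rng.normal(size=(3, 4)), -2 * np.eye(3)
c, s = np.cos(0.6), np.sin(0.6)
R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
assert np.allclose(ln_inv(R @ x, Q), ln_inv(x, Q))
```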
### Relationship to Vector Neurons
Our method can almost specialize to the Vector Neurons when working with \(\mathfrak{so}(3)\). This is because the linear adjoint matrix \(Adm_{a}\) is exactly the rotation matrix for \(\mathfrak{so}(3)\). Therefore, the group adjoint action becomes a left multiplication on the \(\mathbb{R}^{3}\) representation of the Lie algebra. Moreover, \(SO(3)\) is a compact group. Thus, the negative Killing form of \(\mathfrak{so}(3)\) defines an inner product. We would like to note that we omit the normalization in the ReLU layer because the norm is not well defined when the Killing form is not negative definite. We also do not implement a batch normalization layer. Therefore, the Lie Neurons do not reduce to VN completely when working with \(\mathfrak{so}(3)\). Despite the similarity in appearance, the \(\mathbb{R}^{3}\) vectors are viewed as vectors in Euclidean space subject to rotation matrix multiplication in Vector Neurons, while they are treated as \(\mathfrak{so}(3)\) Lie algebras in our framework, where the rotation matrices stand for conjugation. In addition, we propose a novel Lie bracket nonlinear layer.
## 5 Experiments
The Lie Neurons can be applied to any semisimple Lie algebra with a real matrix representation. This allows us to extend the network to operate on noncompact Lie algebras, such as the special linear Lie algebra \(\mathfrak{sl}(3)\). In this section, we instantiate the LN on \(\mathfrak{sl}(3)\), which can be represented using traceless matrices. The corresponding group, the special linear group \(SL(3)\), can be represented
using matrices with unit determinants. The special linear group has 8 degrees of freedom and can be used to model the homography transformation between images (Hua et al., 2020; Zhan et al., 2022).
We perform three different experiments to verify the LN. We first solve a regression problem on an invariant function, where the function maps two \(\mathfrak{sl}(3)\) elements to a real number. In the second experiment, we fit an equivariant function that maps from \(\mathfrak{sl}(3)\) to \(\mathfrak{sl}(3)\). Lastly, we formulate a classification problem, where we classify among three Platonic solids. Across all three experiments, we compare our method with a standard 3-layer MLP that takes the input \(\mathcal{X}\in\mathbb{R}^{K\times C\times N}\) flattened into a vector. In addition, we set the feature dimension to 256 for all models.
### Invariant Function Regression
We begin our evaluation with an invariant function fitting experiment. Given \(X,Y\in\mathfrak{sl}(3)\), we ask the network to regress the following function:
\[g(X,Y)=\sin(\operatorname{tr}(XY))+\cos(\operatorname{tr}(YY))-\frac{ \operatorname{tr}(YY)^{3}}{2}+\det(XY)+\exp(\operatorname{tr}(XX)). \tag{19}\]
We randomly generate \(10,000\) training samples and \(10,000\) testing samples. In addition, in order to evaluate the invariance of the learned network, we randomly apply \(500\) group adjoint actions to each test sample to generate augmented testing data.
In this task, we experiment with three different modules, LN-LR, LN-LB, and LN-LR + LN-LB, each followed by an LN-Inv and a final linear mapping from the feature dimension to a scalar. For each input, we concatenate \(X\) and \(Y\) in the feature dimension and have \(\mathcal{X}\in\mathbb{R}^{K\times C\times N}=\mathbb{R}^{8\times 2\times 1}\).
We report the Mean Squared Error (MSE) and the invariance error in Table 1. The invariance error \(E_{\text{inv}}\) is defined as:
\[E_{\text{inv}}:=\frac{\sum_{i=1}^{N_{x}}\sum_{j=1}^{N_{a}}\left|f(\mathcal{X}_{i})-f(a_{j}\mathcal{X}_{i}a_{j}^{-1})\right|}{N_{x}N_{a}}, \tag{20}\]
where \(a\in SL(3)\) are the randomly generated adjoint actions, \(N_{x}\) is the number of testing points, and \(N_{a}\) is the number of conjugations. The invariance error measures the adjoint invariance property of the networks.
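A minimal sketch of how (20) could be estimated; `f`, `Xs`, and `actions` are placeholders for a trained model, the test samples, and the conjugation maps:

```python
import numpy as np

def invariance_error(f, Xs, actions):
    """Eq. (20): mean |f(X) - f(a X a^{-1})| over test points and conjugations.
    `f` maps an input set to a scalar; each `act` applies X -> a X a^{-1}."""
    return np.mean([abs(f(X) - f(act(X))) for X in Xs for act in actions])
```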
From the table, we see that the LN variants outperform the MLP, except for LN-LB. When tested on the \(SL(3)\)-augmented test set, the performance of the LN remains consistent, while the error of the MLP increases significantly. The results of the invariance error demonstrate that the proposed method is invariant to the adjoint action while the MLP is not. In this experiment, we observe that LN-LR performs well on the invariant task, but the LN-LB alone does not. Nevertheless, if we combine both nonlinearities, the performance remains competitive.
### Equivariant Function Regression
In the second experiment, we ask the network to fit an equivariant function that maps two elements of \(\mathfrak{sl}(3)\) back to the algebra:
\[h(X,Y)=\left[\left[X,Y\right],Y\right]+\left[Y,X\right]. \tag{21}\]
\begin{table}
\begin{tabular}{l|l l l l} \hline \hline & MLP & LN-LR & LN-LB & LN-LR + LN-LB \\ \hline MSE \(Id/Id\)\(\downarrow\) & 0.143 & **0.103** & 0.558 & 0.115 \\ MSE \(Id/SL(3)\)\(\downarrow\) & 5.566 & **0.103** & 0.558 & 0.115 \\ Invariance Error \(\downarrow\) & 1.359 & 0.002 & \(\mathbf{4.9\times 10^{-5}}\) & \(9.0\times 10^{-4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The mean squared errors and the invariance errors on the invariant function regression task. \(Id/Id\) denotes models trained and evaluated both on non-augmented data. \(Id/SL(3)\) denotes testing with data augmented by the \(SL(3)\) adjoint action. \(\downarrow\) means the lower the better.

Similar to the first experiment, we generate \(10,000\) training and test samples, as well as the additional \(500\) adjoint actions on the test set. We again report the MSE on the regular test set. For the adjoint-augmented test set, we map the output back with the inverse adjoint action and compute the MSE with the ground truth value. To evaluate the equivariance of the network, we compute the equivariance error \(E_{\text{equiv}}\) as:
\[E_{\text{equiv}}:=\frac{\sum_{i=1}^{N_{x}}\sum_{j=1}^{N_{a}}\left\|a_{j}f(\mathcal{X}_{i})a_{j}^{-1}-f(a_{j}\mathcal{X}_{i}a_{j}^{-1})\right\|}{N_{x}N_{a}}. \tag{22}\]
In this experiment, we evaluate LN using 3 different architectures. They are 2 LN-LR, 2 LN-LB, and 2 LN-LR + 2 LN-LB, respectively. Each of them is followed by a regular linear layer to map the feature dimension back to \(1\).
Table 2 lists the results of the equivariant experiment. We see that the MLP performs well on the regular test set but fails to generalize to the augmented data. Moreover, it has a high equivariance error. Our methods, on the other hand, generalize well on the adjoint-augmented data and achieve the lowest errors. The 2 LN-LB model performs the best.
From both the invariant and equivariant experiments, we observe that the LN-LR module works better on invariant tasks, while the LN-LB module performs better on the equivariant ones. We speculate this is because the LN-LR relies on the Killing form, which is an adjoint-invariant function, while the LN-LB leverages the Lie bracket, which is adjoint-equivariant. Nevertheless, if we combine both modules, the network performs favorably on both invariant and equivariant tasks.
### Platonic Solid Classification
Other than the numerical experiments above, we further design an experiment with practical relevance to hint at the real-world implications of the proposed network. The task is to classify polyhedrons from their projection on an image plane. While rotation equivariance naturally emerges for the 3D shape, the rotation equivariance relation is lost in the 2D projection of the 3D polyhedrons. Instead, the projection yields homography relations, which can be modeled using the \(SL(3)\) group (Hua et al., 2020; Zhan et al., 2022). When projected onto an image plane, the two neighboring faces of a polyhedron can be described using homography transformations, which are different for each polyhedron type. Therefore, we use the homography transforms among the projected neighboring faces as the input for polyhedron classification.
Without loss of generality, we assume the camera intrinsic matrix \(K\) to be identity. In this case, given a homography matrix \(H\in SL(3)\) that maps one face to another in the image plane, the homography between these two faces becomes \(RHR^{-1}\) when we rotate the camera by \(R\in SO(3)\subset SL(3)\).
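A small numerical check of this conjugation relation (the numbers are hypothetical; `np.cbrt` normalizes a random matrix into \(SL(3)\)):

```python
import numpy as np

# With identity intrinsics, rotating the camera by R maps a face-to-face
# homography H in SL(3) to its conjugate R H R^{-1}.
rng = np.random.default_rng(2)
H = rng.normal(size=(3, 3))
H = H / np.cbrt(np.linalg.det(H))        # rescale so det(H) = 1, i.e. H in SL(3)
c, s = np.cos(0.25), np.sin(0.25)
R = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
H_rot = R @ H @ R.T                      # R^{-1} = R^T for rotations
assert np.isclose(np.linalg.det(H_rot), 1.0)  # conjugation stays in SL(3)
```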
In this experiment, we use three types of Platonic solids: a tetrahedron, an octahedron, and an icosahedron. An input data point refers to the homography transforms between the projection of a pair of neighboring faces within one image. Figure 2 visualizes an example of the neighboring face pair for the three Platonic solids. The homographies of all neighboring face pairs form a complete set of data describing a Platonic solid. We use these data to learn a classification model of the three Platonic solids. During training, we fix the camera and object pose. Then, we test with the original pose and with rotated camera poses to verify the equivariance property of our models.
We once again test with LN-LR, LN-LB, and LN-LR+LN-LB. Each of them is followed by an LN-Max layer, an LN-Inv layer, and a final linear mapping.
Table 3 shows the classification accuracy. The LN achieves higher accuracy than the MLP. Since the MLP is not invariant to the adjoint action, its accuracy drops drastically when the camera is rotated. We also notice that the LN-LB performs slightly worse than the other two formulations. This agrees with our previous observations, as the classification tasks rely mostly on invariant features.
\begin{table}
\begin{tabular}{l|c c c c} \hline \hline & MLP & 2 LN-LR & 2 LN-LB & 2 LN-LR + 2 LN-LB \\ \hline MSE \(Id/Id\downarrow\) & 0.009 & 0.213 & \(\mathbf{9.6\times 10^{-10}}\) & \(2.2\times 10^{-6}\) \\ MSE \(Id/SL(3)\downarrow\) & 2.025 & 0.213 & \(\mathbf{4.5\times 10^{-8}}\) & \(2.2\times 10^{-6}\) \\ Equivariance Error \(\downarrow\) & 0.445 & \(1.0\times 10^{-4}\) & \(\mathbf{6.5\times 10^{-8}}\) & \(7.9\times 10^{-5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The mean squared errors and the equivariant errors in equivariant function regression.
### Ablation Study
We introduce the LN-Bracket layer in Section 4.2.2 and discuss how the residual connection improves the performance. In this subsection, we perform ablation studies on an alternative Lie bracket nonlinear layer design without the residual connection. That is, \(f_{\text{LN-Bracket-N}}(\mathbf{x})=[(\mathbf{x}U)^{\wedge},(\mathbf{x}V)^{\wedge}]^{\vee}\). We denote this nonlinear layer combined with an LN-Linear as LN-LBN and show the results of this method in Table 4. From the table, we can clearly see the benefits of having the residual connection in the Lie bracket layer.
## 6 Discussion and Limitations
Lie Neurons is a group adjoint equivariant network by construction. It does not require the Lie group to be compact. However, the LN-ReLU layer relies on a non-degenerate Killing form. As a result, the current formulation can only operate on semisimple Lie algebras. Secondly, the Lie Neurons take elements of the Lie algebra as inputs, but most modern sensors return measurements in standard vector spaces. More practical applications of the proposed work are yet to be explored. Lastly, although the Lie Neurons contain only simple layer structures, their scalability is yet to be verified.
## 7 Conclusion
In this paper, we propose an adjoint-equivariant network, Lie Neurons. Compared with existing equivariant models, our proposed framework extends the scope of equivariance by modeling the symmetry of change-of-basis on _transformations_, rather than vectors. Our model is generally applicable to any semisimple Lie groups, compact or non-compact. Our network builds upon simple MLP-style layers. To facilitate the learning of expressive Lie algebraic features, we propose equivariant nonlinear activation functions based on the Killing form and the Lie bracket. We also design an equivariant pooling layer and an invariant layer to extract global equivariant features and invariant features. In the experiments, we verify the equivariance property and the ability to fit equivariant and invariant targets of our model in regression and classification tasks, with \(\mathfrak{sl}(3)\) as an example. We believe the new paradigm of transformation feature learning could open new possibilities in both equivariant modeling and more general deep learning.
\begin{table}
\begin{tabular}{l|l l l l} \hline \hline & MLP & LN-LR & LN-LB & LN-LR + LN-LB \\ \hline Accuracy \(\uparrow\) & 0.967 & 0.994 & 0.986 & **0.998** \\ Accuracy (Rotated) \(\uparrow\) & 0.385 & 0.994 & 0.979 & **0.997** \\ \hline \hline \end{tabular}
\end{table}
Table 3: The accuracy of the Platonic solid classification task using the inter-face homography transforms in the image plane as inputs. \(\uparrow\) means the higher, the better.
Figure 2: A visualization of the three Platonic solids in our classification task. The yellow and blue colors highlight a neighboring pair of faces, between which the homography transforms in the image plane are taken as input to our models. |
2301.07769 | Reconstructing Rayleigh-Benard flows out of temperature-only measurements using Physics-Informed Neural Networks | We investigate the capabilities of Physics-Informed Neural Networks (PINNs) to reconstruct turbulent Rayleigh-Benard flows using only temperature information. We perform a quantitative analysis of the quality of the reconstructions at various amounts of low-passed-filtered information and turbulent intensities. We compare our results with those obtained via nudging, a classical equation-informed data assimilation technique. At low Rayleigh numbers, PINNs are able to reconstruct with high precision, comparable to the one achieved with nudging. At high Rayleigh numbers, PINNs outperform nudging and are able to achieve satisfactory reconstruction of the velocity fields only when data for temperature is provided with high spatial and temporal density. When data becomes sparse, the PINNs performance worsens, not only in a point-to-point error sense but also, and contrary to nudging, in a statistical sense, as can be seen in the probability density functions and energy spectra. | Patricio Clark Di Leoni, Lokahith Agasthya, Michele Buzzicotti, Luca Biferale | 2023-01-18T20:24:15Z | http://arxiv.org/abs/2301.07769v1 |

Reconstructing Rayleigh-Benard flows out of temperature-only measurements using Physics-Informed Neural Networks
###### Abstract
We investigate the capabilities of Physics-Informed Neural Networks (PINNs) to reconstruct turbulent Rayleigh-Benard flows using only temperature information. We perform a quantitative analysis of the quality of the reconstructions at various amounts of low-passed-filtered information and turbulent intensities. We compare our results with those obtained via nudging, a classical equation-informed data assimilation technique. At low Rayleigh numbers, PINNs are able to reconstruct with high precision, comparable to the one achieved with nudging. At high Rayleigh numbers, PINNs outperform nudging and are able to achieve satisfactory reconstruction of the velocity fields only when data for temperature is provided with high spatial and temporal density. When data becomes sparse, the PINNs performance worsens, not only in a point-to-point error sense but also, and contrary to nudging, in a statistical sense, as can be seen in the probability density functions and energy spectra.
## 1 Introduction
Understanding the type and quantity of information needed to reconstruct the state of a physical system carries important implications for both its fundamental study and its real-world applications. In this work we analyze this question for the case of thermally driven flows. Thermally driven flows are at the core of several geophysical and industrial systems such as atmospheric convection [1; 2], oceanic convection [3], mantle convection [4] and pure-metal melting [5]. These flows can exhibit a wide variety of behaviors and structures, ranging from plume formation to fully developed turbulence [6]. It was first conjectured by Charney [7] that temperature measurements alone are enough to reconstruct the whole state of the atmosphere. This conjecture has been studied both theoretically and numerically in simple convective systems [8; 9], 3D Planetary Geostrophic models [10], and Rayleigh-Benard flows in non-turbulent regimes [11; 12], in the infinite Prandtl number limit [13], and at moderate and high Rayleigh numbers in turbulent flows [14].
These studies showed the importance of setting a correct velocity prior to get a good reconstruction [11], and the fragility of achieving a time-independent full synchronization at high Rayleigh numbers [12; 14]. A deeper understanding of the Charney conjecture is then important not only to elucidate the interplay between velocity and temperature but also to improve current forecast and Data Assimilation (DA) [15; 16] schemes. The aim of this paper is to continue the work presented in [14] on 2D turbulent Rayleigh-Benard flows. Whilst the original work used Nudging [17; 18; 19], a synchronization-based, equation-informed DA tool, to reconstruct the flows, we now use Physics-Informed Neural Networks (PINNs). PINNs are neural networks designed to approximate the solution of systems of partial differential equations [20]. They have been used in inverse problems with partial information [21; 22], to reconstruct turbulent flows out of measurements [23; 24; 25; 26], and to assimilate statistical data into synthetically generated fields [27]. For detailed reviews on PINNs, see [28; 29]. Using a flow coming from a Direct Numerical Simulation of the 2D Rayleigh-Benard system at two different Rayleigh numbers, one moderate and one high, we use PINNs to perform reconstructions using varying amounts of data, characterized by the separation distance between measuring probes. We show that PINNs are successful at the task, even at high Rayleigh numbers, where the correlations between the temperature field (for which we provide information) and the velocity fields (for which we do not) diminish.
The paper is organized as follows: in Sec. 2.1 we introduce the Rayleigh-Benard equations and describe the data generation procedures, while in Sec. 2.3 we describe the PINN technique and how it is applied in this context. Results are presented in Sec. 3 and conclusions in Sec. 4.
## 2 Methods
### Rayleigh-Benard flows and data generation
Rayleigh-Benard convection consists of a planar horizontal layer of fluid that is heated from below with respect to gravity. If density fluctuations are small, the Boussinesq approximation may be employed and the fluid can be described in terms of an incompressible velocity field plus a temperature field. Taking \(\mathbf{u}=(u,v)\) to be the horizontal and vertical components of the velocity field, respectively, \(T\) the temperature and \(p\) the pressure, in a 2D geometry and with the average temperature set to zero and the density to unity the equations take the form
\[\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\nabla p+\nu\nabla^ {2}\mathbf{u}-\beta Tg\hat{z}, \tag{1}\] \[\frac{\partial T}{\partial t}+\mathbf{u}\cdot\nabla T=\kappa\nabla^ {2}T, \tag{2}\]
where \(\beta\) is the thermal expansion coefficient of the fluid, \(\nu\) is its kinematic viscosity, \(\kappa\) its thermal conductivity and \(g\hat{z}\) the acceleration due to gravity; the velocity field additionally satisfies the incompressibility constraint \(\nabla\cdot\mathbf{u}=0\). The domain is \(L_{x}\) wide and \(L_{z}\) tall and has periodic boundary conditions in the horizontal direction \(\hat{x}\). At the top and bottom boundaries, the boundary conditions are
\[\begin{split} T(z=0)=T_{d},\qquad T(z=L_{z})=-T_{d},\\ \mathbf{u}(z=0)=\mathbf{u}(z=L_{z})=0.\end{split} \tag{3}\]
where \(T_{d}>0\). The characteristic velocity scale is given by \(u_{0}=\sqrt{gL_{z}\beta 2T_{d}}\) and the turnover time by \(\tau_{0}=2L_{z}/u_{0}\). Several dimensionless numbers can be used to describe Rayleigh-Benard flows. The Rayleigh number gives a measure of the ratio between buoyant and viscous forces and is given by
\[\mathrm{Ra}=\frac{2g\beta T_{d}L_{z}^{3}}{\nu\kappa}. \tag{4}\]
The Prandtl number is the ratio between momentum diffusivity and thermal diffusivity, namely
\[\text{Pr}=\frac{\nu}{\kappa}. \tag{5}\]
Figure 1: Diagram of the PINN used.

The Nusselt number measures the ratio of heat transfer due to convection versus that due to conduction and is given by
\[\mathrm{Nu}=\frac{\langle vT-\kappa\partial_{z}T\rangle}{2\kappa T_{d}/L_{z}}, \tag{6}\]
where \(\langle\cdot\rangle\) indicates the ensemble average over the whole domain. Finally, as an estimate of the size of the smallest scales of the system we define the Kolmogorov length scale \(\eta_{\kappa}=(\nu^{3}/\epsilon)^{1/4}\), where \(\epsilon=(\nu\kappa^{2}/L_{z}^{4})(\mathrm{Nu}-1)\mathrm{Ra}\) is the average rate of energy dissipation [30].
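For reference, these dimensionless quantities translate directly into code; a minimal sketch (function names are ours):

```python
import numpy as np

def rayleigh(g, beta, T_d, L_z, nu, kappa):
    """Eq. (4): Ra = 2 g beta T_d L_z^3 / (nu kappa)."""
    return 2 * g * beta * T_d * L_z**3 / (nu * kappa)

def kolmogorov_scale(nu, kappa, L_z, Nu, Ra):
    """eta_k = (nu^3 / eps)^{1/4}, with eps = (nu kappa^2 / L_z^4)(Nu - 1) Ra."""
    eps = (nu * kappa**2 / L_z**4) * (Nu - 1) * Ra
    return (nu**3 / eps) ** 0.25
```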
The reference, or ground-truth, flow of our numerical experiments was produced by evolving Eqs. (1)-(2) using a Lattice-Boltzmann method at two different Rayleigh numbers, \(\mathrm{Ra}_{1}=7.2\times 10^{7}\) and \(\mathrm{Ra}_{2}=36.3\times 10^{7}\). The reference flows are denoted by the variables \(u^{r}\), \(v^{r}\), \(p^{r}\), and \(T^{r}\). Details on the numerical method can be found in [14]. In the first case \(L_{x}=864\delta\), \(L_{z}=432\delta\), \(T_{d}=2.5T_{0}\), and \(u_{0}=1.31U_{0}\), while in the second case \(L_{x}=1200\delta\), \(L_{z}=600\delta\), \(T_{d}=1.5T_{0}\), and \(u_{0}=2.12U_{0}\). In both cases \(\nu=6.67\times 10^{-4}\), \(\mathrm{Pr}=1\), \(\delta=1\), \(T_{0}=1/100\), \(U_{0}=1/100\), and \(\eta_{\kappa}\approx 2\delta\). In the first case \(\mathrm{Nu}\approx 25\), while in the second case \(\mathrm{Nu}\approx 39\). The flows were first allowed to reach a statistically stationary state and then data was extracted on an equally spaced rectangular grid of separation distance \(\ell\in[1:31]\delta\) at specific time intervals. In the first case, data was extracted over 8 separate time windows, each 11 snapshots long, at a sampling rate of \(164/\tau_{0}\). In the second case, data was extracted over 10 separate time windows, also 11 snapshots long, but at a sampling rate of \(1130/\tau_{0}\), as this flow contains faster scales than the other one. Data was extracted over the whole spatial domain, except for the case with \(\ell=1\) at \(\mathrm{Ra}_{2}\), where the domain was split into four quadrants, thus necessitating four PINNs for the reconstruction of the whole domain. Each dataset \(\Omega_{d}\) is then identified by the Rayleigh number of the flow, the grid spacing \(\ell\), and the temporal window. The sets of collocation points \(\Omega_{p}\) where the physics part of the loss function is evaluated (see below) consist of all points where the data part is evaluated, i.e. \(\Omega_{d}\), plus randomly selected points not necessarily lying on the \(\ell\)-spaced sampling grid. Note that as no data are used when evaluating the physics part of the loss function, we are not increasing the density of data used. The number of extra collocation points was 0, 0, 1, 2, 3, and 4 for every spatial position in \(\Omega_{d}\) for \(\ell/\delta=1,7,10,14,22\), and 31, respectively. To evaluate the PINNs' performance, testing datasets \(\Omega_{t}\) consisting of the full fields, i.e., not subsampled in space, were used.
### Flow reconstruction using PINNs
Figure 1 shows a diagram of the PINN used in our experiments. The PINN takes coordinates \((x,z,t)\) as input and outputs fields \((u,v,p,T)\) at the specified coordinate. All PINNs presented here are five layers deep, have 100 hidden units in each layer and use ELU as activation functions. The loss function used to train them has the form
\[L=L_{d}+\lambda L_{p},\]
where the contribution of the measured data is given by
\[L_{d}=\frac{1}{N(\Omega_{d})T_{0}^{2}}\sum_{j\in\Omega_{d}}|T(x_{j},z_{j},t_{ j})-T_{j}^{r}|^{2},\]
and the contribution imposing the correct equations of motion by:
\[L_{p}=\frac{\delta^{2}}{N(\Omega_{p})}\sum_{j\in\Omega_{p}}\left(\lambda_{u}U_{0}^{-4}f_{u}+\lambda_{T}T_{0}^{-2}U_{0}^{-2}f_{T}+\lambda_{i}U_{0}^{-2}f_{i}\right),\]
with
\[f_{u}=|\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}+\nabla p-\nu\nabla^{2}\mathbf{u}+\beta Tg\hat{z}|^{2},\]
\[f_{T}=|\partial_{t}T+\mathbf{u}\cdot\nabla T-\kappa\nabla^{2}T|^{2},\]
and
\[f_{i}=|\nabla\cdot\mathbf{u}|^{2},\]
and where \(\lambda\), \(\lambda_{u}\), \(\lambda_{T}\) and \(\lambda_{i}\) are extra hyperparameters, fixed throughout to \(10^{5}\), \(1\), \(10^{-1}\) and \(10^{3}\), respectively. The subsets of points \(\Omega_{d}\) and \(\Omega_{p}\) where the summations are evaluated are explained in the section above; \(N(\Omega_{d})\) and \(N(\Omega_{p})\) denote the number of points in the respective datasets. The PINNs were trained using the Adam algorithm with a learning rate of \(10^{-4}\) for \(60000\) epochs.
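A hedged PyTorch sketch of how the data and physics terms could be assembled; the model interface `(x, z, t) -> (u, v, p, T)` is an assumption, and the \(\lambda\) weightings and \(U_{0}/T_{0}/\delta\) normalizations are omitted for brevity, so this is not the exact implementation used here:

```python
import torch

def pinn_losses(model, coords, T_ref, nu, kappa, beta, g):
    """Data and physics residuals for eqs. (1)-(2); coords has shape (N, 3)
    ordered as (x, z, t), model output has shape (N, 4) as (u, v, p, T)."""
    coords = coords.clone().requires_grad_(True)
    u, v, p, T = model(coords).unbind(dim=-1)

    def grad(q):  # d q / d (x, z, t), shape (N, 3)
        return torch.autograd.grad(q.sum(), coords, create_graph=True)[0]

    u_x, u_z, u_t = grad(u).unbind(-1)
    v_x, v_z, v_t = grad(v).unbind(-1)
    p_x, p_z, _ = grad(p).unbind(-1)
    T_x, T_z, T_t = grad(T).unbind(-1)
    lap = lambda q_x, q_z: grad(q_x)[:, 0] + grad(q_z)[:, 1]

    f_u = ((u_t + u * u_x + v * u_z + p_x - nu * lap(u_x, u_z)) ** 2
           + (v_t + u * v_x + v * v_z + p_z - nu * lap(v_x, v_z) + beta * T * g) ** 2)
    f_T = (T_t + u * T_x + v * T_z - kappa * lap(T_x, T_z)) ** 2
    f_i = (u_x + v_z) ** 2                 # incompressibility residual
    L_d = ((T - T_ref) ** 2).mean()        # temperature data term
    return L_d, (f_u + f_T + f_i).mean()
```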
### Flow reconstruction using Nudging
Nudging is an equation-informed data assimilation tool, based on the evolution of the Navier-Stokes equations (1) supplemented by a Newton relaxation term proportional to \(\alpha(T^{r}-T)\) in the temperature evolution. The resulting equations take the form
\[\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}=-\nabla p+\nu\nabla^{2}\mathbf{u}-\beta Tg\hat{z}, \tag{7}\] \[\frac{\partial T}{\partial t}+\mathbf{u}\cdot\nabla T=\kappa\nabla^{2}T+\alpha\mathcal{I}(T^{r}-T),\]
where \(\alpha\) is the amplitude of the nudging term and has units of frequency, and \(\mathcal{I}\) is a filtering operator equal to 1 where the data is available, i.e. at the measuring probes, and zero otherwise. The idea behind nudging is to force the solution to "follow" the available information where it is present and let the equations of motion fill in the gaps over the whole space-time domain. For further details about the optimal selection of the nudging parameter \(\alpha\) and how to interpolate the supplied data in time and space, see [14]. In contrast with PINNs, Nudging naturally enforces all physical constraints everywhere, at the cost of evolving the partial differential equations on the entire domain.
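A minimal sketch of one nudged time step; the flow-solver right-hand side `rhs_T` is assumed given, and the explicit-Euler update is our simplification of the Lattice-Boltzmann evolution actually used:

```python
import numpy as np

def nudging_step(T, rhs_T, T_ref, probe_mask, alpha, dt):
    """One explicit-Euler sketch of the nudged temperature update in eq. (7).
    rhs_T holds -u.grad(T) + kappa*lap(T) from the flow solver (assumed given);
    probe_mask is the indicator I: 1 at the measuring probes, 0 elsewhere."""
    return T + dt * (rhs_T + alpha * probe_mask * (T_ref - T))
```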
### Error assessment
In order to assess the reconstructions obtained, we define the point-to-point error field for temperature and velocity as:
\[T_{\Delta}(\mathbf{r},t)=T^{r}(\mathbf{r},t)-T(\mathbf{r},t);\] \[v_{\Delta}(\mathbf{r},t)=v^{r}(\mathbf{r},t)-v(\mathbf{r},t). \tag{8}\]
Global normalized errors are then given by
\[\Delta_{T}=\frac{\langle T_{\Delta}^{2}(\mathbf{r},t)\rangle}{\langle T^{2}(\mathbf{r },t)\rangle};\hskip 28.452756pt\Delta_{v}=\frac{\langle v_{\Delta}^{2}(\mathbf{r},t )\rangle}{\langle v^{2}(\mathbf{r},t)\rangle}, \tag{9}\]
where \(\langle\cdot\rangle\) indicates the average over the entire domain. It is important to stress that no information on the temperature at the boundary was provided (similarly to what is implemented for nudging).
The scale-by-scale analysis is performed by analyzing the energy spectra:
\[E_{f}(k)=\langle|\hat{f}(k,z_{0},t)|^{2}\rangle_{t}, \tag{10}\]
where \(f\) is the field studied (either \(v\), \(v_{\Delta}\), \(T\), or \(T_{\Delta}\)), \(\hat{f}(k,z_{0},t)\) are the Fourier coefficients of \(f\) calculated along the horizontal direction at position \(z_{0}=L_{z}/2\) and time \(t\), and \(\langle\cdot\rangle_{t}\) denotes the time average.
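A hedged sketch of (10), including the Hanning window used later in Sec. 3 for the reconstructed fields:

```python
import numpy as np

def energy_spectrum(f_slices):
    """Eq. (10): time-averaged |FFT|^2 along the horizontal cut at z_0.
    f_slices has shape (n_times, n_x); a Hanning window tempers non-periodicity."""
    window = np.hanning(f_slices.shape[1])
    f_hat = np.fft.rfft(f_slices * window, axis=1)
    return (np.abs(f_hat) ** 2).mean(axis=0)
```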
Figure 2: Visualizations of temperature (top) and vertical velocity (bottom) for the flow with \(\text{Ra}_{2}\). The left column shows the reference data, the other three columns show the reconstructions obtained with \(\ell/\delta=1\), 14 and 31. The locations of the measuring probes (corresponding to the case with \(\ell=14\delta\)) are marked with white dots on top of \(T^{r}\). All visualizations share the same colorbar.
## 3 Results
Figure 2 shows visualizations of the reference temperature \(T^{r}\) and vertical velocity \(v^{r}\) of the flow with \(\mathrm{Ra}_{2}\) at a given time and their PINN-reconstructed counterparts, \(T\) and \(v\), for three reconstructions, one performed with \(\ell/\delta=1\), one with \(\ell/\delta=14\) and one with \(\ell/\delta=31\). The locations of the temperature measuring probes (corresponding to the \(\ell/\delta=14\) case) are marked with white dots. These visualizations not only show the characteristics of the flow, but also give a qualitative glimpse into the overall results: PINNs are able to reconstruct/infer the velocity field using only temperature information, even at high Rayleigh number, but with clear difficulty in capturing small-scale fluctuations, as shown by the blurred velocity configurations (middle and right bottom panels). To further stress this point, we show the horizontal profiles at \(z_{0}=300\delta\) in Fig. 3 for three different spacings, \(\ell/\delta=1,14,31\), for temperature (top) and velocity (bottom). As seen, the temperature reconstruction almost overlaps with the reference data for all cases, except the one with \(\ell/\delta=31\). The reconstructed velocity field, on the other hand, is able to correctly match the large scale structure of the reference flow, but is missing smaller scale features. Furthermore, the effects of not enforcing periodic boundary conditions along the horizontal direction can be clearly seen in the \(\ell/\delta=31\) case.
To put matters into quantitative terms, in Fig. 4 we show \(\Delta_{T}\) and \(\Delta_{v}\) obtained for the different \(\ell/\delta\) and the two Rayleigh numbers. The figures also show the results obtained via Nudging presented in [14]. As shown on the top panel of Fig. 4 for the case of the temperature field, PINNs and Nudging perform similarly. On the other hand, concerning the most interesting -and difficult- question of reconstructing the velocity field, in the bottom panel we show that PINNs outperform Nudging when the supplied temperature is dense enough in space \(\delta/\ell\sim 1\), while it is comparable with Nudging for sparse data, \(\delta/\ell<8\times 10^{-2}\).
In Fig. 5 (top) we show the temperature spectra at \(z_{0}=300\delta\) of the reference flow and of the reconstructions obtained with \(\ell=1\delta\), \(14\delta\), and \(31\delta\) for the high Rayleigh number case, while in the bottom panel we show the spectra of the reference field superimposed with the spectra of the error \(T_{\Delta}\). A Hanning window was applied to the reconstructed fields to cure any effects of non-periodicity. The vertical dash-dotted red line marks where \(k\delta=14\) and the vertical dotted green line where \(k\delta=31\). As expected from the previous analysis, the overall errors are small and mostly concentrated in the small scales, although in the \(\ell=14\delta\) and \(31\delta\) cases, these are still bigger than the scale set by \(\ell\). Figure 6 shows the corresponding spectra of the vertical velocity, with the addition of the result obtained via nudging [14] for \(\ell/\delta=14\). Here, only in the case with \(\ell/\delta=1\) does the PINN produce a physically meaningful energy spectrum, while the spectrum obtained via nudging is very similar to the reference one. In accordance with Figs. 2 and 3, the cases with higher \(\ell/\delta\) can only properly reconstruct the position and shape of the largest structures of the flow, but fail to produce the small-scale structures observed at these Rayleigh numbers. The energy spectra of the reconstruction thus decay very rapidly.
Finally, in Fig. 7(a) and (b) we show the probability density functions (excluding the regions close to the walls) of temperature and vertical velocity, respectively, for the case with \(\mathrm{Ra}_{2}\). As expected, temperature statistics are well reproduced for all \(\ell\) presented, while velocity statistics are only close to accurate when \(\ell=1\); otherwise the probability density functions start resembling a Gaussian.
## 4 Conclusions
Machine and deep learning techniques are becoming well-established tools in the Data Assimilation world. We show that it is possible to reconstruct Rayleigh-Benard velocity fields having access only to time-resolved point-wise temperature measurements using PINNs. We investigate two different Rayleigh numbers, \(7.2\times 10^{7}\) and \(3.6\times 10^{8}\). In order to assess the accuracy of PINNs we compare the results against a baseline given by Nudging [14]. PINNs achieve an accuracy comparable to Nudging for the smallest Rayleigh number and a better accuracy for the highest Rayleigh case when the temperature data are supplied at high spatial frequencies. On the other hand, when temperature is too sparse, PINNs fail to produce meaningful results, while Nudging still produces physically valid solutions. We interpret this as a failure to enforce the correct physical constraints when data are not dense enough. As seen in other works [26; 29], PINNs can struggle with high-frequency components, a major problem in multi-scale turbulent data where information is key also at small scales and high frequencies. It is important to remark that while the results presented can be considered promising, we do not claim they are optimal under any criteria; better results may be obtained with a different choice of hyperparameters, slight modifications to the architecture [25], by enforcing periodicity in the horizontal direction via Fourier features [31], or by splitting the domain into smaller subsections [32]. It is out of the scope of this work to perform an exhaustive and meaningful hyperparameter scan. This work can be considered another exploratory attempt to systematically assess ML tools for reconstructing multi-scale turbulent fields while imposing physics constraints. Our intention here is to stress the importance of comparing with other baselines (here Nudging is used), the need, in future works, to explore a wider range of Rayleigh numbers (here limited to \([10^{7}:10^{8}]\)) and to move to full \(3d+1\) space-time domains, and the need to distinguish point-based reconstructions (here assessed in terms of \(L_{2}\) norm and spectral errors) from statistical reconstruction (here assessed with spectra and probability density functions). As far as this study shows, when data are sparse, PINNs performance worsens. Contrary to Nudging, the obtained reconstructed fields not only have a high point-to-point error but also have incorrect energy spectra and statistics.
## Declarations
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Figure 4: Global temperature errors \(\Delta_{T}\) (top) and global velocity errors \(\Delta_{v}\) (bottom) as a function of \(\delta/\ell\). Full markers denote the results for the case with \(\text{Ra}_{1}\), while empty markers denote the case with \(\text{Ra}_{2}\). The orange circular markers are the results obtained with PINNs, while the blue square markers are for the results obtained via nudging (extracted from [14]). Both figures share the same legend.
Figure 3: Temperature (top) and velocity (bottom) profiles for the case with \(\text{Ra}_{2}\). The reconstructions shown were obtained using \(\ell/\delta=1\), \(14\), and \(31\). Both figures share the same legend.
L.A. and P.C. conceived and carried out the numerical experiments, and analyzed the results. All authors worked on developing the main idea, discussed the results and contributed to the final manuscript.
This project has received partial funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No. 882340)).
|
2305.08813 | ReLU soothes the NTK condition number and accelerates optimization for wide neural networks | Rectified linear unit (ReLU), as a non-linear activation function, is well known to improve the expressivity of neural networks such that any continuous function can be approximated to arbitrary precision by a sufficiently wide neural network. In this work, we present another interesting and important feature of ReLU activation function. We show that ReLU leads to: {\it better separation} for similar data, and {\it better conditioning} of neural tangent kernel (NTK), which are closely related. Comparing with linear neural networks, we show that a ReLU activated wide neural network at random initialization has a larger angle separation for similar data in the feature space of model gradient, and has a smaller condition number for NTK. Note that, for a linear neural network, the data separation and NTK condition number always remain the same as in the case of a linear model. Furthermore, we show that a deeper ReLU network (i.e., with more ReLU activation operations), has a smaller NTK condition number than a shallower one. Our results imply that ReLU activation, as well as the depth of ReLU network, helps improve the gradient descent convergence rate, which is closely related to the NTK condition number. | Chaoyue Liu, Like Hui | 2023-05-15T17:22:26Z | http://arxiv.org/abs/2305.08813v1 |

# ReLU soothes the NTK condition number and accelerates optimization for wide neural networks
###### Abstract
Rectified linear unit (ReLU), as a non-linear activation function, is well known to improve the expressivity of neural networks such that any continuous function can be approximated to arbitrary precision by a sufficiently wide neural network. In this work, we present another interesting and important feature of ReLU activation function. We show that ReLU leads to: _better separation_ for similar data, and _better conditioning_ of neural tangent kernel (NTK), which are closely related. Comparing with linear neural networks, we show that a ReLU activated wide neural network at random initialization has a larger angle separation for similar data in the feature space of model gradient, and has a smaller condition number for NTK. Note that, for a linear neural network, the data separation and NTK condition number always remain the same as in the case of a linear model. Furthermore, we show that a deeper ReLU network (i.e., with more ReLU activation operations), has a smaller NTK condition number than a shallower one. Our results imply that ReLU activation, as well as the depth of ReLU network, helps improve the gradient descent convergence rate, which is closely related to the NTK condition number.
## 1 Introduction
Non-linear activation functions, such as rectified linear unit (ReLU), are well known for their ability to increase the expressivity of neural networks. A non-linearly activated neural network can approximate any continuous function to arbitrary precision, as long as there are enough neurons in the hidden layers [12; 5; 11], while its linear counterpart - linear neural network, which has no non-linear activation functions applied, can only represent linear functions of the input. In addition, deeper neural networks, which have more non-linearly activated layers, have exponentially greater expressivity than shallower ones [32; 27; 29; 22; 33], indicating that the network depth promotes the power of non-linear activation functions.
In this paper, we present another interesting feature of ReLU activation function: ReLU improves data separation in the feature space of model gradient, and helps to decrease the condition number of neural tangent kernel (NTK) [14]. We also show that the depth of ReLU network further promotes this feature, namely, a deeper ReLU activated neural network has a better data separation and a smaller NTK condition number, than a shallower one.
Specifically, we first show the _better separation phenomenon_, i.e., the improved data separation for similar data in the model gradient feature space. We prove that, for an infinitely wide ReLU network \(f\) at its random initialization, a pair of data input vectors \(\mathbf{x}\) and \(\mathbf{z}\) that have similar directions (i.e., small angle \(\theta_{in}\) between \(\mathbf{x}\) and \(\mathbf{z}\)) become more separated in terms of their model gradient directions (i.e., angle \(\phi\) between \(\nabla f(\mathbf{x})\) and \(\nabla f(\mathbf{z})\) is larger than \(\theta_{in}\)). In addition, we show that a linear neural network \(\bar{f}\), which is the same as the ReLU network but without activation functions, always keeps the model gradient angle \(\bar{\phi}\) the same as \(\theta_{in}\). With this comparison, we see that the ReLU activation
function results in a better data separation in the model gradient space. Furthermore, we show that deeper ReLU networks result in even better data separation.
We further show the _better conditioning_ property of ReLU, i.e., smaller NTK condition number, which is closely related to the better separation phenomenon. Specifically, we experimentally show that the NTK condition number for ReLU network of any depth is smaller than that of the data Gram matrix, which is equal to the NTK condition number for linear models and linear neural networks. Moreover, the NTK condition number monotonically decreases as the depth of ReLU network increases. We theoretically verify these findings for the cases of infinitely wide shallow ReLU network with arbitrary datasets and infinitely wide multi-layer ReLU network with dataset of size 2. The intuition is that, if there exists a pair of similar inputs \(\mathbf{x}\) and \(\mathbf{z}\) in the training set (i.e., the angle between \(\mathbf{x}\) and \(\mathbf{z}\) is small), which is usually the case for large datasets, then the Gram matrix and NTK of linear neural networks must have close-to-zero smallest eigenvalues, resulting in extremely large NTK condition numbers. The ReLU activation function make these similar data more separated (enlarges the small angles between data), hence it helps to increase the smallest eigenvalues of NTK, which in turn leads to a smaller NTK condition number.
**Condition number and optimization theory.** Recent optimization theories showed that the NTK condition number, or the smallest eigenvalue of NTK, controls the theoretical convergence rate of gradient descent algorithms on wide neural networks [8; 6; 20]. Combined with these theories, our findings imply that: (a), wide ReLU networks have a faster convergence rate than linear models and linear neural networks, and (b), deeper wide ReLU networks have a faster convergence rate than shallower ones. Experimentally, we indeed find that deeper ReLU networks converge faster than shallower ones.
Contributions.We summarize our contributions below. We find that:
* the ReLU activation function induces better separation between similar data in the feature space of model gradient. A larger depth of the ReLU network enhances this better separation phenomenon.
* ReLU activated neural networks have better NTK conditioning (i.e., smaller NTK condition number), than linear neural networks and linear models. A larger depth of the ReLU network further enhances this better NTK conditioning property.
* This better NTK conditioning property leads to faster convergence rate of gradient descent. We empirically verify this on various real world datasets.
The paper is organized as follow: in Section 2, we describe the setting and define the key quantities and concepts; in Section 3, we analyze linear neural networks as the baseline for comparison; in Section 4 and 5, we show our main results on the better separation and the better conditioning of ReLU neural networks, respectively; in Section 6, we discuss the connection between NTK condition number and convergence rates of gradient descent; in Section 7, we conclude the paper. The proofs of theorems and main corollaries can be found in the appendix.
### Related work
**NTK and its spectrum** have been extensively studied [18; 21; 9; 10; 4], since the discovery of constant NTK for infinitely wide neural networks [14]. For example, the NTK spectrum of an infinitely wide neural network is shown to be similar to that of Laplace kernel [10; 4], and can be computed [9]. Recent optimization theories on wide neural networks find connections between the theoretical convergence rate of gradient descent and the NTK condition number [8; 20].
**ReLU** has emerged as the dominant choice of activation functions in neural networks used in practice, since [23; 16]. Since then, ReLU activated neural networks have received wide research attention, ranging from optimization [19; 8; 37] and expressivity [11; 35; 33] to generalization [36; 15; 3]. However, to the best of our knowledge, our work is the first to disclose the effect of ReLU on the NTK and its condition number, as well as on the convergence rate of gradient descent. In addition, we show the influence of the network depth \(L\) on these features of ReLU.
We are aware of a prior work [2] with results of a similar flavor. It shows that the depth of a linear neural network may help to accelerate optimization via an implicit pre-conditioning of gradient descent. We note that this prior work is in an orthogonal direction, as its analysis is based on the linear neural network, which is activation-free, while our work focuses on the better-conditioning effect of the ReLU activation function.
## 2 Setup and Preliminaries
Notations for general purpose.We denote the set \(\{1,2,\cdots,n\}\) by \([n]\). We use bold lowercase letters, e.g., \(\mathbf{v}\), to denote vectors, and capital letters, e.g., \(A\), to denote matrices. Given a vector, \(\|\cdot\|\) denotes its Euclidean norm. Inner product between two vectors is denoted by \(\langle\cdot,\cdot\rangle\). Given a matrix \(A\), we denote its \(i\)-th row by \(A_{i:}\), its \(j\)-th column by \(A_{:j}\), and its entry at \(i\)-th row and \(j\)-th column by \(A_{ij}\). We also denote the expectation (over a distribution) of a variable by \(\mathbb{E}[\cdot]\), and the probability of an event by \(\mathbb{P}[\cdot]\). For a model \(f(\mathbf{w};\mathbf{x})\) which has parameters \(\mathbf{w}\) and takes \(\mathbf{x}\) as input, we use \(\nabla f\) to denote its first derivative w.r.t. the parameters \(\mathbf{w}\), i.e., \(\nabla f:=\partial f/\partial\mathbf{w}\).
(Fully-connected) ReLU neural network.Let \(\mathbf{x}\in\mathbb{R}^{d}\) be the input, \(m_{l}\) be the width (i.e., number of neurons) of the \(l\)-th layer, \(W^{(l)}\in\mathbb{R}^{m_{l}\times m_{l-1}}\), \(l\in[L+1]\), be the matrix of the parameters at layer \(l\), and \(\sigma(z)=\max\{0,z\}\) be the ReLU activation function. A (fully-connected) ReLU neural network \(f\), with \(L\) hidden layers, is defined as:
\[\alpha^{(0)}(\mathbf{x})=\mathbf{x} \tag{1}\] \[\alpha^{(l)}(\mathbf{x})=\frac{\sqrt{2}}{\sqrt{m_{l}}}\sigma \left(W^{(l)}\alpha^{(l-1)}(\mathbf{x})\right),\ \ \forall l\in\{1,2,\cdots,L\},\] \[f(\mathbf{x})=W^{(L+1)}\alpha^{(L)}(\mathbf{x}).\]
We also denote \(\tilde{\alpha}^{(l)}(\mathbf{x})\triangleq\frac{\sqrt{2}}{\sqrt{m_{l}}}W^{(l )}\alpha^{(l-1)}(\mathbf{x})\). Following the NTK initialization scheme [14], these parameters are randomly initialized i.i.d. according to the normal distribution \(\mathcal{N}(0,1)\). The scaling factor \(\sqrt{2}/\sqrt{m_{l}}\) is introduced to normalize the hidden neurons [7]. We denote the collection of all the parameters by \(\mathbf{w}\).
In this paper, we typically set the layer widths as
\[m_{0}=d,\ \ m_{L+1}=1,\ \ and\ m_{l}=m,\ \ for\ l\in[L]. \tag{2}\]
and call \(m\) as the network width. We focus on the infinite network width limit, \(m\rightarrow\infty\). We also define the network depth \(L\) as the number of hidden layers.
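A minimal NumPy sketch of the NTK initialization and the forward pass in Eq. (1) under this width setting (helper names are ours):

```python
import numpy as np

def init_relu_net(d, m, L, rng):
    """NTK initialization: every W^(l)_{ij} ~ N(0, 1), widths as in eq. (2)."""
    widths = [d] + [m] * L + [1]
    return [rng.normal(size=(widths[l + 1], widths[l])) for l in range(L + 1)]

def forward(Ws, x):
    """Eq. (1): alpha^(l) = sqrt(2/m_l) * relu(W^(l) alpha^(l-1)); linear output."""
    a = x
    for W in Ws[:-1]:
        a = np.sqrt(2.0 / W.shape[0]) * np.maximum(W @ a, 0.0)
    return Ws[-1] @ a
```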
Linear neural network.For a comparison purpose, we also consider a linear neural network \(\bar{f}\), which is the same as the ReLU neural network \(f\) (defined above), except that the activation function is the identity function \(\sigma(z)=z\) and that the scaling factor is \(1/\sqrt{m}\) (we adopt the network width setting in Eq.(2)):
\[\bar{\alpha}^{(0)}(\mathbf{x})=\mathbf{x},\ \ \bar{\alpha}^{(l)}(\mathbf{x})=\frac{1}{ \sqrt{m}}W^{(l)}\bar{\alpha}^{(l-1)}(\mathbf{x}),\ \ \forall l\in\{1,2,\cdots,L\},\ \bar{f}(\mathbf{x})=W^{(L+1)}\bar{\alpha}^{(L)}( \mathbf{x}). \tag{3}\]
Input feature and Gram matrix.Given a dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\), we denote its (input) feature matrix by \(X\), where each row \(X_{i:}=\mathbf{x}_{i}^{T}\). The Gram matrix is defined as \(G=XX^{T}\in\mathbb{R}^{n\times n}\), with each \(G_{ij}=\mathbf{x}_{i}^{T}\mathbf{x}_{j}\).
Gradient feature and neural tangent kernel (NTK).Given a model \(f\) (e.g., a neural network) with parameters \(\mathbf{w}\), we consider the vector \(\nabla f(\mathbf{w};\mathbf{x})\) is the gradient feature for the input \(\mathbf{x}\). The NTK \(\mathcal{K}\) is defined as
\[\mathcal{K}(\mathbf{w};\mathbf{x}_{1},\mathbf{x}_{2})=\langle\nabla f(\mathbf{ w};\mathbf{x}_{1}),\nabla f(\mathbf{w};\mathbf{x}_{2})\rangle, \tag{4}\]
where \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) are two arbitrary network inputs. For a given dataset \(\mathcal{D}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\), there is a gradient feature matrix \(F\) such that each row \(F_{i:}(\mathbf{w})=\nabla f(\mathbf{w};\mathbf{x}_{i})\) for all \(i\in[n]\). The \(n\times n\) NTK matrix \(K(\mathbf{w})\) is defined such that its entry \(K_{ij}(\mathbf{w})\), \(i,j\in[n]\), is \(\mathcal{K}(\mathbf{w};\mathbf{x}_{i},\mathbf{x}_{j})\). It is easy to see that the NTK matrix
\[K(\mathbf{w})=F(\mathbf{w})F(\mathbf{w})^{T}. \tag{5}\]
Note that the NTK for a linear model reduces to the Gram matrix \(G\).
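To make these definitions concrete, the following illustrative sketch computes the gradient features of the network of Eq.(1) by manual backpropagation and forms the empirical NTK matrix of Eq.(5); the width, depth, and sample count are arbitrary choices of ours, and at finite width the infinite-width statements hold only approximately.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, L, n = 5, 512, 2, 20
widths = [d] + [m] * L + [1]
W = [rng.standard_normal((widths[i + 1], widths[i])) for i in range(L + 1)]

def gradient_feature(x):
    # forward pass, caching pre-activations z and post-activations a
    acts, zs, a = [x], [], x
    for Wl in W[:-1]:
        z = Wl @ a
        a = np.sqrt(2.0 / Wl.shape[0]) * np.maximum(0.0, z)
        zs.append(z)
        acts.append(a)
    # backward pass: df/dW^(l) for every layer, flattened into one long vector
    grads = [None] * (L + 1)
    grads[L] = acts[-1][None, :]   # df/dW^(L+1) = alpha^(L)
    delta = W[-1].ravel()          # df/d alpha^(L)
    for l in range(L - 1, -1, -1):
        s = np.sqrt(2.0 / W[l].shape[0]) * delta * (zs[l] > 0)
        grads[l] = np.outer(s, acts[l])
        delta = W[l].T @ s
    return np.concatenate([g.ravel() for g in grads])

X = rng.standard_normal((n, d))
F = np.stack([gradient_feature(x) for x in X])  # gradient feature matrix
K = F @ F.T                                     # empirical NTK matrix, Eq.(5)
eig = np.linalg.eigvalsh(K)
print("NTK condition number:", eig[-1] / eig[0])
```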
A recent discovery is that, when \(m\) is sufficiently large or infinite, the NTK and the gradient features become almost constant during training by gradient descent [14, 21]. Hence, it suffices to analyze these quantities at network initialization; the conclusions extend to the entire optimization procedure.
Condition number.The _condition number_\(\kappa\) of a positive definite matrix \(A\) is defined as the ratio between its maximum eigenvalue and minimum eigenvalue:
\[\kappa=\lambda_{max}(A)/\lambda_{min}(A). \tag{6}\]
Embedding angle and model gradient angle.For a specific input \(\mathbf{x}\), we call the vector \(\alpha^{(l)}(\mathbf{x})\) the \(l\)-embedding of \(\mathbf{x}\). We also call \(\nabla f\), i.e., the derivative of model \(f\) with respect to all its parameters, the model gradient. In the following analysis, we frequently use the following concepts:
**Definition 2.1** (embedding angle and model gradient angle).: _Given two arbitrary inputs \(\mathbf{x},\mathbf{z}\in\mathbb{R}^{d}\), define the \(l\)-embedding angle, \(\theta^{(l)}(\mathbf{x},\mathbf{z})\triangleq\arccos\left(\frac{\langle\alpha^{(l)}(\mathbf{x}),\alpha^{(l)}(\mathbf{z})\rangle}{\|\alpha^{(l)}(\mathbf{x})\|\|\alpha^{(l)}(\mathbf{z})\|}\right)\), as the angle between the \(l\)-embedding vectors \(\alpha^{(l)}(\mathbf{x})\) and \(\alpha^{(l)}(\mathbf{z})\), and the model gradient angle, \(\phi(\mathbf{x},\mathbf{z})\triangleq\arccos\left(\frac{\langle\nabla f(\mathbf{x}),\nabla f(\mathbf{z})\rangle}{\|\nabla f(\mathbf{x})\|\|\nabla f(\mathbf{z})\|}\right)\), as the angle between the model gradient vectors \(\nabla f(\mathbf{x})\) and \(\nabla f(\mathbf{z})\)._
We also denote \(\theta^{(0)}\) by \(\theta_{in}\), as \(\theta^{(0)}\) is just the angle between the original inputs.
Connection between condition number and model gradient angle.The smallest eigenvalue and condition number of the NTK are closely related to the smallest model gradient angle \(\min_{i,j\in[n]}\phi(\mathbf{x}_{i},\mathbf{x}_{j})\), through the gradient feature matrix \(F\). Consider the case where \(\phi(\mathbf{x}_{i},\mathbf{x}_{j})=0\) (i.e., \(\nabla f(\mathbf{x}_{i})\) is parallel to \(\nabla f(\mathbf{x}_{j})\)) for some \(i,j\in[n]\): then \(F\), and hence the NTK \(K\), is not of full rank, the smallest eigenvalue \(\lambda_{min}(K)\) is zero, and the condition number \(\kappa\) is infinite. Similarly, if \(\min_{i,j\in[n]}\phi(\mathbf{x}_{i},\mathbf{x}_{j})\) is small, the smallest eigenvalue \(\lambda_{min}(K)\) is also small, and the condition number \(\kappa\) is large, as stated in the following proposition (see proof in Appendix B).
**Proposition 2.2**.: _Consider an \(n\times n\) positive definite matrix \(A=BB^{T}\), where the matrix \(B\in\mathbb{R}^{n\times d}\), with \(d>n\), is of full row rank. Suppose that there exist \(i,j\in[n]\) such that the angle \(\phi\) between the vectors \(B_{i:}\) and \(B_{j:}\) is small, i.e., \(\phi\ll 1\), and that there exist constants \(C>c>0\) such that \(c\leq\|B_{k:}\|\leq C\) for all \(k\in[n]\). Then, the smallest eigenvalue \(\lambda_{min}(A)=O(\phi^{2})\), and the condition number \(\kappa=\Omega(1/\phi^{2})\)._
Hence, a good data angle separation in the model gradient features, i.e., \(\min_{i,j\in[n]}\phi(\mathbf{x}_{i},\mathbf{x}_{j})\) not too small, is a necessary condition such that the condition number \(\kappa\) is not too large. Given this connection, in the following section we first discuss the separation of gradient features through the model gradient angles.
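As a quick numerical illustration of Proposition 2.2 (not part of its proof), one can build a matrix \(B\) with unit-norm rows, two of which are separated by a small angle \(\phi\); the construction below is an arbitrary example of ours.

```python
import numpy as np

n, d, phi = 6, 10, 1e-3
E = np.eye(d)
rows = [E[0], np.cos(phi) * E[0] + np.sin(phi) * E[1]]  # two rows at angle phi
rows += [E[k] for k in range(2, n)]                     # remaining rows orthonormal
B = np.vstack(rows)
A = B @ B.T
eig = np.linalg.eigvalsh(A)
# lambda_min ~ phi^2 / 2 and kappa ~ 4 / phi^2, matching the proposition's rates
print("lambda_min:", eig[0], " condition number:", eig[-1] / eig[0])
```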
In the rest of the paper, we specifically refer to the NTK matrix, NTK condition number, \(l\)-embedding angle and model gradient angle of the ReLU neural network as \(K\), \(\kappa\), \(\theta^{(l)}\) and \(\phi\), respectively, and refer to their linear neural network counterparts as \(\bar{K}\), \(\bar{\kappa}\), \(\bar{\theta}^{(l)}\) and \(\bar{\phi}\), respectively. We also denote the condition number of the Gram matrix \(G\) by \(\kappa_{0}\).
## 3 Linear neural network: the baseline for comparison
In this section, we analyze the linear neural network \(\bar{f}\), the linear counterpart of a ReLU network. Note that the only difference between the two architectures is the absence of the ReLU activation function in the linear neural network.
**Theorem 3.1**.: _Consider the linear neural network \(\bar{f}\) as defined in Eq.(3). In the limit of infinite network width \(m\to\infty\) and at network initialization \(\mathbf{w}_{0}\), the following relations hold:_
* _for any input_ \(\mathbf{x}\in\mathbb{R}^{d}\)_:_ \(\|\bar{\alpha}^{(l)}(\mathbf{x})\|=\|\mathbf{x}\|\)_,_ \(\forall l\in[L]\)_; and_ \(\|\nabla\bar{f}(\mathbf{w}_{0};\mathbf{x})\|=(L+1)\|\mathbf{x}\|\)_._
* _for any inputs_ \(\mathbf{x},\mathbf{z}\in\mathbb{R}^{d}\)_:_ \(\bar{\theta}^{(l)}(\mathbf{x},\mathbf{z})=\theta_{in}(\mathbf{x},\mathbf{z})\)_,_ \(\forall l\in[L]\)_; and_ \(\bar{\phi}(\mathbf{x},\mathbf{z})=\theta_{in}(\mathbf{x},\mathbf{z})\)_._
This theorem states that, without a non-linear activation function, both the feature embedding maps \(\bar{\alpha}^{(l)}:\mathbf{x}\mapsto\bar{\alpha}^{(l)}(\mathbf{x})\) and the model gradient map \(\nabla\bar{f}:\mathbf{x}\mapsto\nabla\bar{f}(\mathbf{x})\) fail to change the geometric relationship between data samples. For any input pair, the embedding angles \(\bar{\theta}^{(l)}\) and \(\bar{\phi}\) remain the same as the input angle \(\theta_{in}\). Therefore, it is not surprising that the NTK of a linear network is the same as the Gram matrix (up to a constant factor), as formally stated in the following corollary.
**Corollary 3.2** (NTK condition number of linear networks).: _Consider a linear neural network \(\bar{f}\) as defined in Eq.(3). In the limit of infinite network width \(m\to\infty\) and at network initialization, the NTK matrix \(\bar{K}=(L+1)^{2}G\). Moreover, \(\bar{\kappa}=\kappa_{0}\)._
This corollary tells us that, for a linear neural network, regardless of its depth \(L\), the NTK condition number \(\bar{\kappa}\) is always equal to the condition number \(\kappa_{0}\) of the Gram matrix \(G\).
## 4 ReLU network has better data separation in model gradient space
In this section, we show that the ReLU non-linearity helps data separation in the model gradient space. Specifically, for two arbitrary inputs \(\mathbf{x}\) and \(\mathbf{z}\) with small \(\theta_{in}(\mathbf{x},\mathbf{z})\), we show that the model gradient angle \(\phi(\mathbf{x},\mathbf{z})\) is strictly larger than \(\theta_{in}(\mathbf{x},\mathbf{z})\), implying a better angle separation of the two data points in the model gradient space. Moreover, we show that the model gradient angle \(\phi(\mathbf{x},\mathbf{z})\) monotonically increases with the number of layers \(L\), indicating that deeper networks (with more ReLU non-linearity) have better angle separation.
Embedding vectors and embedding angles.We start by investigating the relations among the \(l\)-embedding vectors \(\alpha^{(l)}\) and the embedding angles \(\theta^{(l)}\).
**Theorem 4.1**.: _Consider the ReLU network \(f\) defined in Eq.(1) at its initialization. In the infinite network width limit \(m\to\infty\), for all \(l\in[L]\), the following relations hold:_
* _for any input_ \(\mathbf{x}\in\mathbb{R}^{d}\)_,_ \(\|\alpha^{(l)}(\mathbf{x})\|=\|\alpha^{(l-1)}(\mathbf{x})\|\)_;_
* _for any two inputs_ \(\mathbf{x},\mathbf{z}\in\mathbb{R}^{d}\)_,_ \[\cos\theta^{(l)}(\mathbf{x},\mathbf{z})=\frac{\pi-\theta^{(l-1)}(\mathbf{x}, \mathbf{z})}{\pi}\cos\theta^{(l-1)}(\mathbf{x},\mathbf{z})+\frac{1}{\pi}\sin \theta^{(l-1)}(\mathbf{x},\mathbf{z}).\] (7)
The theorem states that, during forward propagation, the \(l\)-embedding vector of each input keeps its magnitude unchanged, and the embedding angles \(\theta^{(l)}\) between any two inputs obey a closed-form relation from layer to layer. Therefore, we have the following corollary.
**Corollary 4.2**.: _Consider the same ReLU neural network \(f\) as in Theorem 4.1. For all \(l\in[L]\), the \(l\)-embedding \(\alpha^{(l)}\) satisfies:_
* _For any input_ \(\mathbf{x}\in\mathbb{R}^{d}\)_,_ \(\|\alpha^{(l)}(\mathbf{x})\|=\|\mathbf{x}\|\)_;_
* _Define function_ \(g:[0,\pi)\to[0,\pi)\) _as_ \[g(z)=\arccos\left(\frac{\pi-z}{\pi}\cos z+\frac{1}{\pi}\sin z\right),\] (8) _and let_ \(g^{l}(\cdot)\) _be the_ \(l\)_-fold composition of_ \(g(\cdot)\)_. Given any two inputs_ \(\mathbf{x},\mathbf{z}\in\mathbb{R}^{d}\)_,_ \[\theta^{(l)}(\mathbf{x},\mathbf{z})=g^{l}\left(\theta^{(0)}(\mathbf{x}, \mathbf{z})\right).\] (9)
We see that the embedding angles between inputs are governed by the function \(g\) defined in Eq.(8). Please see Appendix A for the plot of the function and a detailed discussion of its properties. As a highlight, \(g\) has the following property: \(g\) is approximately the identity function, \(g(z)\approx z\), for small \(z\), i.e., \(z\ll 1\). This property directly implies the following corollary.
**Corollary 4.3**.: _Given any inputs \(\mathbf{x},\mathbf{z}\) such that \(\theta_{in}(\mathbf{x},\mathbf{z})=o(1/L)\), for each \(l\in[L]\), the \(l\)-embedding angle \(\theta^{(l)}(\mathbf{x},\mathbf{z})\) can be expressed as_
\[\theta^{(l)}(\mathbf{x},\mathbf{z})=\theta_{in}(\mathbf{x},\mathbf{z})-\frac{l }{3\pi}(\theta_{in}(\mathbf{x},\mathbf{z}))^{2}+o\left((\theta_{in}(\mathbf{x },\mathbf{z}))^{2}\right).\]
We see that, in the small angle regime \(\theta_{in}=o(1/L)\), the embedding angle \(\theta^{(l)}\) at any layer \(l\) is the same as the input angle \(\theta_{in}\) to lowest order. In addition, the higher order corrections are always negative, making \(\theta^{(l)}<\theta_{in}\). We also note that the correction term \(\Delta\theta^{(l)}\triangleq\theta^{(l)}-\theta_{in}\) depends linearly on the layer index \(l\) at its lowest order.
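The recursion of Eq.(8)-(9) and the expansion of Corollary 4.3 are easy to check numerically; the following sketch (our illustration, with arbitrary choices of \(\theta_{in}\) and \(L\)) prints the exact composed angle next to the lowest-order expansion.

```python
import numpy as np

def g(z):
    # layer-to-layer embedding-angle map of Eq.(8)
    return np.arccos((np.pi - z) / np.pi * np.cos(z) + np.sin(z) / np.pi)

theta_in, L = 0.05, 10     # small input angle, theta_in = o(1/L)
theta = theta_in
for l in range(1, L + 1):
    theta = g(theta)       # theta^(l) = g^l(theta_in), Eq.(9)
    approx = theta_in - l / (3 * np.pi) * theta_in ** 2   # Corollary 4.3
    print(f"l={l}: exact={theta:.6f}, expansion={approx:.6f}")
```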
Model gradient angle.Now, we investigate the model gradient angle \(\phi\) and its relation with the embedding angles \(\theta^{(l)}\) and input angle \(\theta_{in}\), for the ReLU network.
**Theorem 4.4**.: _Consider the ReLU network defined in Eq.(1) with \(L\) hidden layers and infinite network width \(m\). Given two arbitrary inputs \(\mathbf{x}\) and \(\mathbf{z}\), the angle \(\phi(\mathbf{x},\mathbf{z})\) between the model gradients \(\nabla f(\mathbf{x})\) and \(\nabla f(\mathbf{z})\) satisfies_
\[\cos\phi(\mathbf{x},\mathbf{z})=\frac{1}{L+1}\sum_{l=0}^{L}\left[\cos\theta^{ (l)}(\mathbf{x},\mathbf{z})\prod_{l^{\prime}=l}^{L-1}(1-\theta^{(l^{\prime})}( \mathbf{x},\mathbf{z})/\pi)\right]. \tag{10}\]
_Moreover, \(\|\nabla f(\mathbf{x})\|=(L+1)\|\mathbf{x}\|\), for any \(\mathbf{x}\)._
Better data separation with ReLU.Comparing with Theorem 3.1 for linear neural networks, we see that the non-linear ReLU activation only affects the relative direction, but not the magnitude, of the model gradient. Combining Theorem 4.4 with Corollary 4.2, we get the relation between \(\phi\) and the input angle \(\theta_{in}\). Figure 1 plots \(\phi\) as a function of \(\theta_{in}\) for different network depths \(L\).
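The curves of Figure 1 can be reproduced directly from Eq.(10); the sketch below (our illustration, with a \(30^{\circ}\) input angle as an arbitrary example) evaluates \(\phi\) for several depths.

```python
import numpy as np

def g(z):
    return np.arccos((np.pi - z) / np.pi * np.cos(z) + np.sin(z) / np.pi)

def grad_angle(theta_in, L):
    # phi(x, z) of Theorem 4.4, Eq.(10), computed from theta^(0), ..., theta^(L)
    thetas = [theta_in]
    for _ in range(L):
        thetas.append(g(thetas[-1]))
    c = sum(np.cos(thetas[l]) * np.prod([1 - thetas[k] / np.pi for k in range(l, L)])
            for l in range(L + 1))
    return np.arccos(c / (L + 1))

for L in (1, 2, 5, 10):                      # depth of the ReLU network
    phi = grad_angle(np.radians(30.0), L)    # input angle of 30 degrees
    print(f"L={L}: phi = {np.degrees(phi):.2f} deg (> 30 deg)")
```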
The **key observation** is that: for relatively small input angles (say, \(\theta_{in}<60^{\circ}\)), the model gradient angle \(\phi\) is always greater than the input angle \(\theta_{in}\). This suggests that, after the mapping \(\nabla f:\mathbf{x}\mapsto\nabla f(\mathbf{x})\) from the input space to the model gradient space, data inputs become more (directionally) separated if they are similar in the input space (i.e., with small \(\theta_{in}\)). Compared to the linear neural network case, where \(\bar{\phi}(\mathbf{x},\mathbf{z})=\theta_{in}(\mathbf{x},\mathbf{z})\) as in Theorem 3.1, we see that the ReLU non-linearity results in a better angle separation \(\phi(\mathbf{x},\mathbf{z})>\bar{\phi}(\mathbf{x},\mathbf{z})\) for similar data.
Another important observation is that: deeper ReLU networks lead to larger model gradient angles when \(\theta_{in}<60^{\circ}\). This indicates that deeper ReLU networks, which have more layers of ReLU non-linear activation, make the model gradients more separated across inputs. Note that, in the linear network case, the depth does not affect the gradient angle \(\bar{\phi}\).
We theoretically confirm these two observations in the regime of small input angle \(\theta_{in}=o(1/L)\), by the following theorem.
**Theorem 4.5** (Better separation with ReLU).: _Consider two network inputs \(\mathbf{x},\mathbf{z}\in\mathbb{R}^{d}\), with small input angle \(\theta_{in}(\mathbf{x},\mathbf{z})=o(1/L)\), and the ReLU network defined in Eq.(1) with \(L\) hidden layers and infinite network width \(m\). At the network initialization, the angle \(\phi(\mathbf{x},\mathbf{z})\) between the model gradients \(\nabla f(\mathbf{x})\) and \(\nabla f(\mathbf{z})\) satisfies_
\[\cos\phi(\mathbf{x},\mathbf{z})=\left(1-\frac{L}{2\pi}\theta_{in}+o(\theta_{ in})\right)\cos\theta_{in}. \tag{11}\]
Figure 1: **Model gradient angles \(\phi\) vs. input angle \(\theta_{in}\) (according to Theorem 4.4).** Linear neural networks, of any depth \(L\), always have \(\bar{\phi}=\theta_{in}\), as the black dash line showed. ReLU neural networks with various depths have better data separation \(\phi>\theta_{in}\) for similar data (i.e., small \(\theta_{in}\)). Moreover, deeper ReLU networks have better separation than shallow ones for similar data. All neural networks are infinitely wide.
Noticing the negative sign within the factor \(\left(1-\frac{L}{2\pi}\theta_{in}+o(\theta_{in})\right)\), we know that the factor is less than \(1\), and we obtain that \(\phi(\mathbf{x},\mathbf{z})>\theta_{in}(\mathbf{x},\mathbf{z})=\bar{\phi}(\mathbf{x},\mathbf{z})\). Noticing the depth \(L\) dependence of this factor, we also get that the deeper the ReLU network (i.e., the larger \(L\)) is, the larger \(\phi\) is, in the regime \(\theta_{in}=o(1/L)\).
**Remark 4.6** (Separation in distance).: _Indeed, the better angle separation discussed above implies a better separation in Euclidean distance as well. This can be easily seen by recalling from Theorem 4.4 that the model gradient mapping \(\nabla f\) preserves the norm (up to a universal factor \(L+1\))._
We also point out that Figure 1 indicates that for large input angles (say \(\theta_{in}>60^{\circ}\)) the model gradient angle \(\phi\) is always large (greater than \(60^{\circ}\)). Hence, non-similar data never become similar in the model gradient feature space.
## 5 Smaller condition number of NTK for ReLU network
In this section, we show both theoretically and experimentally that, with the ReLU non-linear activation function, a ReLU neural network has a smaller NTK condition number \(\kappa\) than its linear counterpart, the linear neural network. Moreover, for larger network depth \(L\), the NTK condition number \(\kappa\) of a ReLU neural network is generically smaller.
### Theoretical analysis
Theoretically, we show these findings for the following cases: a two-layer ReLU network with any dataset; a deep ReLU network with a dataset of size \(2\). In both cases, we take the infinite width limit.
Shallow ReLU neural network.Now, let's look at the NTK of a shallow ReLU neural network
\[f(W;\mathbf{x})=\frac{\sqrt{2}}{\sqrt{m}}\mathbf{v}^{T}\sigma(W\mathbf{x}). \tag{12}\]
Here, we fix \(\mathbf{v}\) at its random initialization and let \(W\) be trainable.
With the presence of the ReLU activation function, the better data separation theorem (Theorem 4.5), together with the connection between condition number and angle separation (Proposition 2.2), suggests that the NTK of this ReLU network has a smaller condition number \(\kappa\) than the Gram matrix, as well as the NTK of the linear neural network. The following theorem confirms this expectation.
**Theorem 5.1**.: _Consider the ReLU network in Eq.(12) in the limit \(m\to\infty\) and at initialization. The smallest eigenvalue \(\lambda_{min}(K)\) of its NTK is larger than that of the Gram matrix: \(\lambda_{min}(K)>\lambda_{min}(G)\). Moreover, the NTK condition number is less than that of the Gram matrix: \(\kappa<\kappa_{0}\)._
Deep ReLU neural network.For the deep ReLU network, we consider the special case where the dataset \(\mathcal{D}\) consists of two samples, with a small angle between their input vectors.
**Theorem 5.2**.: _Consider the ReLU neural network \(f\) as defined in Eq.(1) in the infinite width limit \(m\to\infty\) and at initialization. Consider the dataset \(\mathcal{D}=\{(\mathbf{x}_{1},y_{1}),(\mathbf{x}_{2},y_{2})\}\) with the input angle \(\theta_{in}\) between \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\) small, \(\theta_{in}=o(1/L)\). Then, the NTK condition number \(\kappa<\kappa_{0}\). Moreover, for two ReLU neural networks \(f_{1}\) of depth \(L_{1}\) and \(f_{2}\) of depth \(L_{2}\) with \(L_{1}>L_{2}\), we have \(\kappa_{f_{1}}<\kappa_{f_{2}}\)._
This theorem also shows that a ReLU network has a better conditioned NTK than a linear model or a linear neural network. Moreover, increasing the depth of the ReLU network enhances this conditioning. These results confirm that the ReLU activation helps to decrease the NTK condition number.
### Experimental evidence
In this subsection, we experimentally show that the phenomena of better data separation and better NTK conditioning occur widely in practice.
Dataset.We use the following datasets: synthetic dataset, MNIST [17], FashionMNIST (f-MNIST) [34], SVHN [24] and Librispeech [26]. The synthetic data consists of \(2000\) samples which are randomly drawn from a 5-dimensional Gaussian distribution with zero-mean and unit variance.
The MNIST, f-MNIST and SVHN datasets are image datasets where each input is an image. Librispeech is a speech dataset including \(100\) hours of clean speech. In the experiments, we use a subset of Librispeech with \(50,000\) samples; each input is a 768-dimensional vector representing a frame of speech audio, and we follow [13] for the feature extraction.
Models.For each of the datasets, we use a ReLU-activated fully-connected neural network. The ReLU network has \(L\) hidden layers and \(512\) neurons in each of its hidden layers. The ReLU network uses the NTK parameterization and initialization strategy (see [14]). For each dataset, we vary the network depth \(L\) from \(0\) to \(10\). Note that \(L=0\) corresponds to the linear model case. In addition, for comparison, we use a linear neural network, which has the same architecture as the ReLU network except for the absence of the activation function.
Results.For each of the experimental settings, we evaluate both the smallest pairwise model gradient angle \(\min_{i,j\in[n]}\phi(\mathbf{x}_{i},\mathbf{x}_{j})\) and the NTK condition number \(\kappa\), at the network initialization. We take \(5\) independent runs over \(5\) random initialization seeds, and report the average. The results are shown in Figure 2. As one can easily see from the plots, a ReLU network (depth \(L=1,2,\cdots,10\)) always has a better separation of data (i.e., a larger smallest pairwise model gradient angle), and a better NTK conditioning (i.e., a smaller NTK condition number), than its corresponding linear network (compare the solid line and dash line of the same color). Furthermore, the monotonically decreasing NTK condition number shows that a deeper ReLU network has a better conditioning of NTK.
## 6 Optimization acceleration
Recent studies have shown strong connections between the NTK condition number and the theoretical convergence rate of gradient descent algorithms on wide neural networks [8; 6; 31; 1; 37; 25; 20]. In [8; 6], the authors derived the worst-case convergence rates explicitly in terms of the smallest eigenvalue of the NTK \(\lambda_{min}(K)\): \(L(\mathbf{w}_{t})\leq(1-\eta\lambda_{min}(K)/2)^{t}L(\mathbf{w}_{0})\), where \(L\) is the square loss function and \(t\) is the iteration index of the algorithm. Later on, in [20], the NTK condition number was explicitly involved in the theoretical convergence rate:
\[L(\mathbf{w}_{t})\leq(1-\kappa^{-1})^{t}L(\mathbf{w}_{0}). \tag{13}\]
Although \(\kappa\) is in principle evaluated along the whole optimization path, all these theories use the fact that the NTK is almost constant for wide neural networks, so an evaluation at the initialization \(\mathbf{w}_{0}\) is enough.
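As a back-of-envelope consequence of Eq.(13), reaching \(L(\mathbf{w}_{t})\leq\epsilon L(\mathbf{w}_{0})\) requires roughly \(t\geq\log\epsilon/\log(1-\kappa^{-1})\approx\kappa\log(1/\epsilon)\) iterations; a small illustrative computation (with arbitrary values of \(\kappa\) and \(\epsilon\)):

```python
import numpy as np

def steps_to_reach(kappa, eps=1e-3):
    # smallest t with (1 - 1/kappa)^t <= eps, per the rate in Eq.(13)
    return int(np.ceil(np.log(eps) / np.log(1.0 - 1.0 / kappa)))

for kappa in (10, 100, 1000):
    print(f"kappa={kappa}: t >= {steps_to_reach(kappa)}  (~ kappa * ln(1/eps))")
```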
Figure 2: **Better separation (left) and better NTK conditioning (right) of ReLU network.** Solid lines are for ReLU networks, and dash lines are for linear networks. **Left:** ReLU network works better in separating similar data, while linear network remains similar to a linear model. **Right:** ReLU network has better conditioning of NTK than linear network and linear model. Note that \(L=0\) (without any hidden layer) corresponds to the case of a linear model, and the NTK in this case is the Gram matrix.
As a smaller NTK condition number (or larger smallest eigenvalue of NTK) implies a faster theoretical convergence rate, our findings suggest that: (a) wide ReLU networks have faster convergence rates than linear models and linear neural networks, and (b) deeper wide ReLU networks have faster convergence rates than shallower ones.
We experimentally verify this implication on the convergence speed. Specifically, we train the ReLU networks, with depth \(L\) ranging from \(1\) to \(10\), on the datasets MNIST, f-MNIST and Librispeech. For all the training tasks, we use the cross entropy loss as the objective function and use mini-batch stochastic gradient descent (SGD) of batch size \(500\) to optimize. For each task, we find its optimal learning rate by grid search. On MNIST and f-MNIST, we train for \(500\) epochs, and on Librispeech, we train for \(2000\) epochs.
The curves of training loss against epochs are shown in Figure 3. We observe that, for all these datasets, a deeper ReLU network always converges faster than shallower ones. This is consistent with the theoretical prediction that a deeper ReLU network, which has a smaller NTK condition number, has a faster theoretical convergence rate.
## 7 Conclusion and discussions
In this work, we showed the key role that ReLU (as a non-linear activation function) plays in data separation: for any two inputs that are "directionally" close (small \(\theta_{in}\)) in the original input space, a ReLU activated neural network makes them better separated in the model gradient feature space (\(\phi>\theta_{in}\)), while a linear neural network, which has no non-linear activation, keeps the same separation (\(\bar{\phi}=\theta_{in}\)). As a consequence, the NTK condition number \(\kappa\) for the ReLU neural network is smaller than those for the linear model and linear neural networks. Moreover, we showed that the NTK condition number \(\kappa\) is further decreased as the ReLU network goes deeper. The smaller NTK condition number suggests a faster gradient descent convergence rate for deep ReLU neural networks.
Finite network width.Our theoretical analysis is based on the setting of infinite network width. We note that, for finite network width and at network initialization, the analysis only differs by a zero-mean noise term wherever we took the limit \(m\rightarrow\infty\). This noise term scales down to \(0\) as the network width increases. We believe our main results still hold for finite but large network width.
Infinite depth.In this work, we focused on the finite depth scenario, which is the more interesting case from a practical point of view. Our small angle regime analysis (Corollary 4.3, Theorems 4.5 and 5.2) does not directly extend to the infinite depth case. However, as Theorem 4.4 and Figure 1 indicate, the \(\phi(\theta_{in})\) function seems to converge to a step function when \(L\rightarrow\infty\), which implies orthogonality between each pair of model gradient vectors, and hence an NTK condition number of \(1\). This is consistent with the prior knowledge that the NTK condition number converges to \(1\) in the infinite depth limit [28].
Other activation functions.In this work, we focused on ReLU, which is the most widely used activation function in practice. We believe similar results also hold for other non-linear activation functions. We leave this direction for future work.
Figure 3: **Training curve of ReLU networks with different depths.** On each of these datasets, we see that deeper ReLU networks always converge faster than shallower ones. |
2308.00525 | Transfer-Ensemble Learning based Deep Convolutional Neural Networks for
Diabetic Retinopathy Classification | This article aims to classify diabetic retinopathy (DR) disease into five
different classes using an ensemble approach based on two popular pre-trained
convolutional neural networks: VGG16 and Inception V3. The proposed model aims
to leverage the strengths of the two individual nets to enhance the
classification performance for diabetic retinopathy. The ensemble model
architecture involves freezing a portion of the layers in each pre-trained
model to utilize their learned representations effectively. Global average
pooling layers are added to transform the output feature maps into fixed-length
vectors. These vectors are then concatenated to form a consolidated
representation of the input image. The ensemble model is trained using a
dataset of diabetic retinopathy images (APTOS), divided into training and
validation sets. During the training process, the model learns to classify the
retinal images into the corresponding diabetic retinopathy classes.
Experimental results on the test set demonstrate the efficacy of the proposed
ensemble model for DR classification achieving an accuracy of 96.4%. | Susmita Ghosh, Abhiroop Chatterjee | 2023-08-01T13:07:39Z | http://arxiv.org/abs/2308.00525v1 | Transfer-Ensemble Learning based Deep Convolutional Neural Networks for Diabetic Retinopathy Classification
###### Abstract
This article aims to classify diabetic retinopathy (DR) disease into five different classes using an ensemble approach based on two popular pre-trained convolutional neural networks: VGG16 and Inception V3. The proposed model aims to leverage the strengths of the two individual nets to enhance the classification performance for diabetic retinopathy. The ensemble model architecture involves freezing a portion of the layers in each pre-trained model to utilize their learned representations effectively. Global average pooling layers are added to transform the output feature maps into fixed-length vectors. These vectors are then concatenated to form a consolidated representation of the input image. The ensemble model is trained using a dataset of diabetic retinopathy images (APTOS), divided into training and validation sets. During the training process, the model learns to classify the retinal images into the corresponding diabetic retinopathy classes. Experimental results on the test set demonstrate the efficacy of the proposed ensemble model for DR classification achieving an accuracy of 96.4%.
Diabetic Retinopathy, Inception V3, VGG16, APTOS dataset
## I Introduction
Diabetic retinopathy (DR) is a significant contributor to blindness in the working-age population globally. Early and precise detection of DR is essential for timely intervention and effective disease management. Researchers have employed diverse methodologies, including neural networks, fuzzy sets, and nature-inspired computing, to improve the accuracy of tasks like object tracking, object segmentation, and object detection, making them applicable to computer vision applications [1-4]. Many scholars have made significant contributions to medical image analysis using the above-mentioned techniques, aiming to enhance disease diagnosis and detection. In recent years, deep neural nets [5] have shown remarkable advancements in various computer vision tasks, including medical image analysis. This research focuses on classifying diabetic retinopathy into five distinct categories using an ensemble of deep learning models. The ensemble consists of two well-known convolutional neural network (CNN) architectures, namely VGG16 [6] and Inception V3 [7], with a shared fully connected layer for classification. The objective is to leverage the strengths of both models to enhance the accuracy and robustness of diabetic retinopathy classification. For this study, we utilized the pre-trained VGG16 and Inception V3 models, both trained on the ImageNet [8] dataset, as the base models. To prevent overfitting and facilitate transfer learning, we froze a certain depth of layers in each model. The selection of frozen layers was based on a fraction of the total number of layers in each model, allowing them to retain their learned features while adapting to the specific task of diabetic retinopathy classification.
The input images, resized to 224x224 pixels, were fed into the ensemble model, and predictions were generated from both the VGG16 and Inception V3 models. By concatenating the predictions from these models, we aimed to capture a diverse range of features extracted by each network.
Experimental results demonstrate the effectiveness of our proposed ensemble approach for diabetic retinopathy classification, achieving higher accuracy by leveraging the strengths of both VGG16 and InceptionV3 nets.
To evaluate our model, we employed the APTOS 2019 [10] Blindness Detection Challenge dataset, a widely used benchmark dataset in diabetic retinopathy detection. This dataset comprises high-resolution retinal fundus images with corresponding diagnostic labels from expert ophthalmologists. The use of this dataset allowed us to train and evaluate our proposed methodology on diverse and representative samples.
We compared our results in terms of accuracy, precision, recall, and F1-score, providing insights into our newly designed model's performance compared to state-of-the-art techniques. Our findings demonstrate
Fig 1: General representation of a deep neural network [9].
the model's potential for real-world clinical applications.
The rest of this paper is organized as follows: Section 2 presents a review of related works in diabetic retinopathy detection and deep learning. In Section 3, we detail our methodology, including transfer learning with pre-trained VGG16 and Inception V3 models, freezing selected layers, and concatenating predictions. Section 4 covers the experimental setup, including the dataset and evaluation metrics. Next, Section 5 presents the experimental results and their analysis. Finally, Section 6 concludes the paper.
## II Related Research
Gulshan et al. [11] and Abramoff et al. [12] proposed a CNN model for automated screening of diabetic retinopathy. They trained deep neural networks using datasets of retinal fundus images. The CNN architecture consisted of multiple convolutional layers followed by max pooling layers to extract features from the images. The extracted features were then passed through fully connected layers for classification. Gargeya and Leng [13] presented a review of deep learning-based approaches for diabetic retinopathy screening. They discussed different CNN architectures used in the literature, including AlexNet, GoogLeNet, and ResNet, for diabetic retinopathy detection. Li et al. [14] proposed a multi-task deep learning framework for simultaneous detection and grading of diabetic retinopathy lesions. Their model effectively detected lesions such as microaneurysms, hemorrhages, and exudates while classifying diabetic retinopathy severity levels. Das et al. [15] proposed a transfer learning approach for diabetic retinopathy detection using CNN models pre-trained on natural image datasets. Their method showed promising results even with a limited number of diabetic retinopathy images. Burlina et al. [16] presented a deep learning model for detecting and classifying retinal lesions related to diabetic retinopathy. Their system incorporated multiple CNN architectures to handle different lesion types, improving overall performance. Rajalingappaa et al. [17] introduced a deep learning model for diabetic retinopathy classification using retinal fundus images. Their approach utilized a combination of CNN and Long Short-Term Memory (LSTM) networks to capture spatial and temporal patterns.
This research focuses on diabetic retinopathy classification using an ensemble of deep learning models, VGG16 and Inception V3, with a shared fully connected layer. Both models are pretrained on ImageNet, and specific layers were frozen to prevent overfitting and enable transfer learning. By leveraging the strengths of both architectures, the combined feature extraction aims to enhance accuracy in classifying diabetic retinopathy into five distinct categories.
## III Methodology
In this work, we propose a transfer-ensemble based method for diabetic retinopathy classification. Our approach utilizes the power of pre-trained convolutional neural network (CNN) models, specifically VGG16 and Inception V3, to extract high-level features from retinal images. The methodology consists of several key steps to enhance the performance. The block diagram of the proposed method is shown in Fig. 2.
**(A) Transfer Learning:**
We utilize the pre-trained VGG16 and Inception V3 models, which have been trained on large-scale datasets like ImageNet. By utilizing these models, we can benefit from their learned feature representations, which capture various visual patterns and structures, while also requiring less computational power.
**(B) Model Freezing:**
To avoid overfitting and allow effective transfer learning, we chose to freeze a substantial portion of each pre-trained model. Approximately 25% of the
Fig 2: Flowchart of the proposed model
layers in VGG16 and Inception V3 were frozen, ensuring that the initial representations learned on ImageNet remain intact and generalizable to our diabetic retinopathy classification task.
**(C) Ensemble Learning:**
As mentioned, our methodology leverages the strengths of both neural nets through ensemble learning. We obtained predictions from the frozen VGG16 and Inception V3 models for each input image and concatenated them. This enabled the ensemble model to capture diverse and complementary feature representations, leading to improved classification performance.
**(D) Architecture of the Model and Training Strategy:**
To enhance the classification performance, we introduced several key components in our methodology. First, after obtaining predictions from the frozen VGG16 and Inception V3 models, we concatenated them to capture diverse and complementary feature representations. Next, we applied Global Average Pooling to ensure compatibility between the models' outputs for seamless merging. Subsequently, a fully connected layer with 256 neurons and ReLU activation facilitated learning higher-level features and complex patterns. To prevent overfitting, we incorporated dropout regularization in the fully connected layer. Finally, the output layer consisted of a softmax activation function with five neurons, enabling confident and accurate predictions for the five classes of diabetic retinopathy severity levels. Through these strategic additions, our ensemble model demonstrates improved classification.
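A minimal tf.keras sketch of the architecture described above is given below; the exact freezing rule (first quarter of the layers) and the dropout rate are our assumptions where the text does not pin them down.

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, InceptionV3

inputs = layers.Input(shape=(224, 224, 3))
vgg = VGG16(weights="imagenet", include_top=False, input_tensor=inputs)
inception = InceptionV3(weights="imagenet", include_top=False, input_tensor=inputs)

# Freeze roughly the first 25% of layers in each pre-trained base (Sec. III-B).
for base in (vgg, inception):
    for layer in base.layers[: len(base.layers) // 4]:
        layer.trainable = False

# Global average pooling on each branch, then concatenation of the two vectors.
merged = layers.Concatenate()([
    layers.GlobalAveragePooling2D()(vgg.output),
    layers.GlobalAveragePooling2D()(inception.output),
])
x = layers.Dense(256, activation="relu")(merged)
x = layers.Dropout(0.5)(x)   # dropout rate not given in the text; 0.5 is an assumption
outputs = layers.Dense(5, activation="softmax")(x)  # five DR severity classes
ensemble = Model(inputs=inputs, outputs=outputs)
```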
## IV Experimental Setup
As stated, experiments with the proposed neural net model are conducted using the APTOS dataset. Details of the experimental setup are described below.
**(A) Dataset Used:**
The APTOS dataset consists of a large collection of high-resolution retinal fundus images along with corresponding diagnostic labels provided by expert ophthalmologists. A total of 6034 images were taken from 5 different classes. Table 1 shows the number of images used for different categories.
**(B) Image Preprocessing:**
Each image is resized to a dimension of 224x224 pixels and normalized by dividing the pixel values by 255. There are five classes in the image dataset: Mild DR, Moderate DR, No DR, Proliferate DR, and Severe DR. A sample image from each of the classes has been shown in Fig. 3.
**(C) Evaluation Metrics:**

We have also considered other performance indices, e.g., precision, recall, F1-score, and Top-1% error, for comparison purposes. The confusion matrix is also considered.
**(D) Parameters Taken:**
Table 2 shows the parameters and their corresponding values used for our experimentation.
**(E) Model Training:**
The dataset is split into training and test sets. The split is performed with a test size of 20% and stratified sampling to maintain class balance.
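Combining this split with the hyper-parameters of Table 2, the training setup can be sketched as follows; `ensemble` refers to the model sketch above, `X` and `y` are assumed to hold the preprocessed images and one-hot labels, and the random seed is an arbitrary choice of ours.

```python
import tensorflow as tf
from sklearn.model_selection import train_test_split

# X: (N, 224, 224, 3) images scaled to [0, 1]; y: (N, 5) one-hot labels (assumed loaded)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y.argmax(axis=1), random_state=0)

ensemble.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                 loss="categorical_crossentropy", metrics=["accuracy"])
ensemble.fit(X_train, y_train, validation_data=(X_test, y_test),
             batch_size=16, epochs=40)
```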
## V Results
To evaluate the effectiveness of the proposed methodology, five different performance measuring criteria are used. These are: precision, recall, F1-score, accuracy, and Top-1 error. The corresponding results are put in Table 3. The results are promising in terms of the various performance indices, yielding 96.4% accuracy for the ensemble model.
\begin{table}
\begin{tabular}{|c|c|} \hline Types of Classes & Number of Images \\ \hline Mild DR & 1624 \\ Moderate DR & 999 \\ No DR & 1805 \\ Proliferate DR & 772 \\ Severe DR & 834 \\ \hline
**Total Images** & **6034** \\ \hline \end{tabular}
\end{table}
Table 1: Images taken from APTOS 2019 dataset
\begin{table}
\begin{tabular}{|c|c|} \hline Parameters & Values \\ \hline Learning Rate & 0.0001 \\ Batch Size & 16 \\ Max Epochs & 40 \\ Optimizer & Adam \\ Loss Function & Categorical Cross-entropy \\ \hline \end{tabular}
\end{table}
Table 2: Experimental setup
\begin{table}
\begin{tabular}{|c|c|} \hline Metrics Used & Ensemble (rounded) \\ \hline Precision & 0.96 \\ Recall & 0.96 \\ F1-score & 0.96 \\ Accuracy (\%) & 96 \\ Top-1 error (\%) & 4.0 \\ \hline \end{tabular}
\end{table}
Table 3: Performance metrics on the APTOS dataset.
Figure 3: Images taken from APTOS dataset. (a) Mild DR, (b) Moderate DR, (c) No DR, (d) Proliferate DR, and (e) Severe DR
From Fig. 4 it is seen that both training and validation accuracy increase over epochs. At first, we notice a steady increase in accuracy for both the validation and training sets. This suggests that the model is effectively learning from the data and making accurate predictions. Towards the end of training, the accuracies stabilize at higher values. Since pre-trained models were used, the weights were already close to their optimum values, and hence the losses (both training and validation) are minimal from the very beginning, as seen in Fig. 5. This indicates that the model is becoming proficient at generalizing patterns and minimizing errors.
Each of the CNNs was trained and tested on the APTOS dataset, and the simulations were repeated 20 times to account for variations in the results. For each simulation, we randomly split the dataset into training and testing sets, maintaining the same split ratio for consistency across all simulations. After each simulation, we recorded the evaluation scores.
The accuracy values obtained using the two individual deep neural networks (VGG16 and Inception V3) and that of the proposed ensemble model are put in Table 4. Here the average accuracy values over 20 simulations are given. This result confirms the effectiveness of the ensemble approach as compared to the individual nets.
For visual illustration, Fig. 6 shows the predictions made by our fine-tuned ensemble model on some unseen images of the APTOS 2019 dataset.
\begin{table}
\begin{tabular}{|c|c|} \hline Models & Accuracy (\%) \\ \hline VGG16 & 92.6 \\ Inception V3 & 94.7 \\ Ensemble & **96.4** \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of our proposed ensemble model with individual models.
Figure 4: Variation of training and validation accuracy with epochs of the fine-tuned ensemble model.
Figure 5: Variation of training and validation loss with epochs of the fine-tuned ensemble model.
\begin{table}
\begin{tabular}{|c|c|} \hline Methods & Accuracy (\%) \\ \hline (Dondeti et al., 2020) [18] & _77.90_ \\ (Bodapati et al., 2020) [19] & 81.70 \\ (Liu et al., 2020) [20] & 86.34 \\ (Kassani et al., 2019) [21] & 83.09 \\ (Bodapati et al., 2021) [22] & 82.54 \\ (Sikder et al., 2021) [23] & 94.20 \\ (Alyoubi et al., 2021) [24] & 89.00 \\ Proposed Framework & **96.40** \\ \hline \end{tabular}
\end{table}
Table 5: Classification performance of the proposed method as to the state-of-the-art models for the APTOS dataset.
Figure 6: Class predictions on unseen images by the proposed ensemble method.
Performance of the proposed ensemble model is also compared with seven other state-of-the-art models and the results are shown in Table 5. These results also establish the superiority of the proposed ensemble model.
Finally, the confusion matrix obtained through experimentation is also shown in Fig. 7. This matrix also corroborates our earlier findings regarding the efficacy of the proposed approach.
## VI Conclusion
In conclusion, the present work proposes a transfer-ensemble based approach for enhanced diabetic retinopathy classification. By combining the complementary strengths of VGG16 and Inception V3 models, we achieved improved classification performance compared to individual nets. Our ensemble method, which concatenates the feature maps from VGG16 and Inception V3, demonstrated enhanced discriminative power in capturing diverse image patterns. Further research can explore combinations of other models and investigate ensemble methods' generalizability across different retinal diseases and imaging modalities. Overall, our study justifies the importance of ensemble learning in enhancing diagnostic accuracy. Moreover, the proposed transfer-ensemble approach holds promising potential to contribute to the development of computer-aided diagnosis systems for diabetic retinopathy diagnosis.
## Acknowledgement
A part of this work has been supported by the IDEAS - Institute of Data Engineering, Analytics and Science Foundation, The Technology Innovation Hub at the Indian Statistical Institute, Kolkata through sanctioning a Project No /IS/ITH/2022/55/ dtd. September 13, 2022.
|
2303.04431 | Safe Robot Learning in Assistive Devices through Neural Network Repair | Assistive robotic devices are a particularly promising field of application
for neural networks (NN) due to the need for personalization and hard-to-model
human-machine interaction dynamics. However, NN based estimators and
controllers may produce potentially unsafe outputs over previously unseen data
points. In this paper, we introduce an algorithm for updating NN control
policies to satisfy a given set of formal safety constraints, while also
optimizing the original loss function. Given a set of mixed-integer linear
constraints, we define the NN repair problem as a Mixed Integer Quadratic
Program (MIQP). In extensive experiments, we demonstrate the efficacy of our
repair method in generating safe policies for a lower-leg prosthesis. | Keyvan Majd, Geoffrey Clark, Tanmay Khandait, Siyu Zhou, Sriram Sankaranarayanan, Georgios Fainekos, Heni Ben Amor | 2023-03-08T08:14:22Z | http://arxiv.org/abs/2303.04431v1 | # Safe Robot Learning in Assistive Devices through Neural Network Repair
###### Abstract
Assistive robotic devices are a particularly promising field of application for neural networks (NN) due to the need for personalization and hard-to-model human-machine interaction dynamics. However, NN based estimators and controllers may produce potentially unsafe outputs over previously unseen data points. In this paper, we introduce an algorithm for updating NN control policies to satisfy a given set of formal safety constraints, while also optimizing the original loss function. Given a set of mixed-integer linear constraints, we define the NN repair problem as a Mixed Integer Quadratic Program (MIQP). In extensive experiments, we demonstrate the efficacy of our repair method in generating safe policies for a lower-leg prosthesis.
## 1 Introduction
Robot learning has the potential to revolutionize the field of wearable robotic devices, such as prosthetics, orthoses and exoskeletons [1]. Often, such devices are designed in a "one size fits all" manner using models based on average population statistics. However, machine learning techniques can help adapt control parameters automatically to the wearer's individual characteristics, motion patterns or biomechanics. The result is a substantial improvement in ergonomic comfort and quality-of-life. Despite these benefits, the adoption of machine learning and, in particular, deep learning [2] in this field is still limited, largely due to safety concerns. In the case of a prosthesis, for example, it may be important to guarantee that the generated control values do not exceed a maximum threshold under a set of testing conditions. Other safety constraints include limits on velocities, joint angles and bounds on the change in control inputs.
In this paper, we derive an algorithm for training neural network controllers to satisfy given safety specifications, in addition to fitting the given training/test data. In particular, we use our approach to derive controllers for a robotic lower-leg prosthesis that satisfy basic safety conditions (see Fig. 1). Our proposed approach inputs a trained network (e.g., a network obtained using backpropagation on the training data) along with a specification that places restrictions on the possible outputs for a given set of inputs. It then generates a modified set of weights that obeys the desired safety constraints on the output using deterministic global optimization. We provide theoretical guarantees of optimality for this technique (assuming no numerical errors). Furthermore, in real-world robot experiments we show that the introduced methodology produces safe neural policies for a lower-leg prosthesis satisfying a variety of constraints.
Figure 1: A lower-leg prosthesis running a neural network for control. A neural network repair process ensures that safety constraints are satisfied.
Contributions.The proposed method can repair any layer in a network increasing the size of the solution space and the likelihood of a feasible successful repair. More importantly, it comes with theoretical guarantees on successfully repairing all discovered unsafe samples. When compared to retraining or fine-tuning methods, it also has two distinct benefits: (1) it does not require modification of the training data so as to satisfy the constraints, and (2) it does not utilize gradient optimization methods which do not guarantee constraint satisfaction even for the discovered unsafe data points. Finally, when compared to other property driven repair works, i.e., [3, 4, 5, 6, 7], it is applied for the first time to a real physical system, i.e., a powered prosthetic device, as opposed to a model.
Related Works.Learning-based control for prosthetics is motivated by the challenges in modeling human-prosthesis dynamics, which exhibit time-varying behavior. Several different approaches that utilize learning-based methods for controlling prosthetic devices have been proposed in the literature [8, 9]. Nevertheless, this short review focuses only on neural network (NN) based methods. Gao et al. [10] provide a prosthetic controller based on recurrent neural networks, and show that this type of controller effectively minimizes the difference between desired and actual trajectories on a powered prosthetic device. In another direction, Keles and Yucesoy [11] and Vonsevych et al. [12] focus on utilizing electromyography (EMG) signals to predict control parameters for prosthetic ankles and hands. Our work follows a similar approach in predicting ankle control parameters with a neural network, which then drives the prosthesis through a PD controller. The verification of neural networks has been widely studied in order to check specifications for a given NN as a standalone component [13, 14, 15, 16] (for a survey see [17]) or as part of a closed-loop system [18, 19, 20]. Testing techniques can produce counterexamples (adversarial samples) for a variety of NN-based applications, e.g., [21, 14, 22, 23, 24, 25]. One approach to ensuring that a NN satisfies a given set of safety properties is to use retraining and fine-tuning based on counter-examples [26, 27, 7]. However, this approach has a number of pitfalls. First, the labels for the counterexamples need to be available. This may involve further data collection, which is often cumbersome. Also, gradient descent optimization approaches cannot provide any guarantees that the result satisfies the provided constraints. Our approach, in contrast, avoids generating adversarial examples or performing gradient descent. The problem of "repairing" a network, which involves modifying weights of the network in a minimal manner to satisfy some safety properties, is thus more involved than simply retraining a network on some additional data involving counterexamples. Some methods in the literature [3, 4, 5, 6] make progress toward the goal of repairing NNs. Goldberger et al. [3] can only repair the output layer, which drastically reduces the space of possible successful repairs (and in many cases a repair is not even possible). The method of Fu and Li [4] produces patches for certain partitions of the input space wherein the corresponding outputs are modified. The direction of adding patches on the NN output is very promising; however, it is unclear how the approach will scale for high-dimensional inputs, since it requires partitioning the input space into affine subregions. The authors in [6] repair the linear regions related to each faulty sample in the faulty network's weight space using a decoupled DNN architecture. However, this method causes the repaired network to be discontinuous. Therefore, it cannot be employed in robot learning and control applications, which are the target applications of this paper. This method is also not applicable to networks with more than three inputs.
Figure 2: Overview of our approach. Left: a given (unsafe) neural network is trained to control a prosthesis. However, its outputs violate the formal safety constraints. Right: Using our NN Repair strategy, we identify an adjusted set of neural network weights that removes all violations while still maintaining the underlying behavior of the controller.
## 2 Problem Formulation
Without loss of generality, we motivate and discuss our approach using the task of learning safe robot controllers for a lower-leg prosthesis. The goal of this task is to learn a policy \(\pi_{\theta}\) which generates control values for the ankle angle of the powered prosthesis given a set of sensor values. Most critically, however, policy \(\pi_{\theta}\) is required to satisfy a set of safety constraints \(\Psi\). Fig. 2 provides an overview of our methodology in addressing this challenge. For now, we assume that an unsafe prior policy network may exist, see Fig. 2 (left). The network parameters \(\theta\) may be learned through imitation learning [10], reinforcement learning [28], or any of the common machine learning paradigms. The policy may be optimized for task efficiency, e.g., stable and low-effort walking gaits, but may not yet satisfy any safety constraints. Our goal is to find an adjusted set of network parameters that satisfies any such constraint. We may now choose, for example, to restrict the control rate to specific bounds. Running the network on the input values of a validation data set reveals that violations occur at a number of time steps, as seen in Fig. 2. Our approach, called **NNRepLayer** (Layer-wise Neural Network Repair), takes the original network parameters \(\theta\) and the predicates \(\Psi\) and yields updated parameters that generate no violations.
Notation.We denote the set of variables \(\{a_{1},a_{2},\cdots,a_{N}\}\) with \(\{a_{n}\}_{n=1}^{N}\). Let \(\pi_{\theta}\) be a network policy with \(L\) hidden layers. The nodes at each layer \(l\in\{l\}_{l=0}^{L}\) are represented by \(x^{l}\), where \(|x^{l}|\) denotes the dimension of layer \(l\) (\(x^{0}\) represents the input of network). The network's output is also denoted by \(y\) or \(\pi_{\theta}(x^{0})\). We consider fully connected policy networks with weight and bias terms \(\{(\theta_{w}^{l},\theta_{b}^{l})\}_{l=1}^{L+1}\). The training data set of \(N\) inputs \(x^{0}_{n}\) and target outputs \(t_{n}\) is denoted by \(\{(x^{0}_{n},t_{n})\}_{n=1}^{N}\) sampled from the input-output space \(\mathcal{X}\times\mathcal{T}\subseteq\mathbb{R}^{|x^{0}|}\times\mathbb{R}^{|t|}\). We use \(x^{l}_{n}\) to denote the vector of nodes at layer \(l\) for sample \(n\). In this work, we focus on the policy networks with the Rectified Linear Unit (ReLU) activation function \(R(z)=\max\{0,z\}\). Thus, given the \(n^{\text{th}}\) sample, in the \(l^{\text{th}}\) hidden layer, we have \(x^{l}=R\left(\theta_{w}^{l}x^{l-1}+\theta_{b}^{l}\right)\). An activation function is not applied to the last layer, i.e. \(y=\theta_{w}^{L+1}x^{L}+\theta_{b}^{L+1}\).
Problem Statement (Repair Problem).Let \(\pi_{\theta}\) be a trained policy network over the training input-output space \(\mathcal{X}\times\mathcal{T}\subseteq\mathbb{R}^{|x^{0}|}\times\mathbb{R}^{|t|}\) and \(\Psi(y,x^{0})\) be a predicate on the output \(y\) of the network for a set of inputs of interest \(x^{0}\in\mathcal{X}_{r}\subseteq\mathcal{X}\). The Repair Problem is to modify the weight \(\theta_{w}\) and bias terms \(\theta_{b}\) of \(\pi_{\theta}\) such that the repaired policy \(\pi_{\theta_{r}}\) satisfies \(\Psi(y,x^{0})\).
The repair of the policy network should not only satisfy the predicate \(\Psi(y,x^{0})\) but should also maintain the performance of the original policy. To satisfy the latter, the method proposed in [3] ensures the satisfaction of the predicate only with a minimal deviation of the weights in the last layer. However, as we show later in the experimental results, the repair of the last layer is not necessarily feasible or sufficient to satisfy the predicates with a minimal deviation from the original parameters. Moreover, a minimal deviation from the original weights is not a sufficient guarantee of maintaining the original performance of the network. It is well-known that subtle changes in the weights may cause the network to significantly deviate from its original performance [29]. Therefore, it is important for the repaired policy \(\pi_{\theta_{r}}\) to also minimize the loss w.r.t. its original training data. We propose NNRepLayer (Layer-wise Neural Network Repair), which satisfies a predicate \(\Psi(y,x^{0})\) by repairing a specific layer of the policy network while minimizing the training loss.
## 3 NNRepLayer
We can formulate our framework as the minimization problem of the loss function \(E(\theta_{w},\theta_{b})\) subject to \((x^{0},t)\in\mathcal{X}\times\mathcal{T}\) and \(\Psi(y,x^{0})\) for \(x^{0}\in\mathcal{X}_{r}\). However, the resulting optimization problem is non-convex and difficult to solve due to the nonlinear ReLU activation function and the high-order nonlinear constraints resulting from the multiplication of terms involving the weight/bias variables. In our approach, we obtain a sub-optimal solution by focusing on repairing a single layer. We therefore modify the weight and bias terms of a single layer to adjust the predictions so as to minimize \(E(\theta_{w},\theta_{b})\) and to satisfy \(\Psi(y,x^{0})\). Thus, we solve the following problem
**Problem 1**.: _Let \(\pi_{\theta}\) denote a trained policy network with \(L\) hidden layers over the training input-output space \(\mathcal{X}\times\mathcal{T}\subseteq\mathbb{R}^{|x^{0}|}\times\mathbb{R}^{|t|}\) and \(\Psi(y,x^{0})\) denote a predicate representing constraints on the output \(y\) of \(\pi_{\theta}\) for the set of inputs of interest \(x^{0}\in\mathcal{X}_{r}\subseteq\mathcal{X}\). NNRepLayer modifies the weights of a
layer \(l\in\{1,\cdots,L+1\}\) in \(\pi_{\theta}\) such that the new network \(\pi_{\theta_{r}}\) satisfies \(\Psi(y,x^{0})\) while minimizing the loss of the network \(E(\theta^{l}_{w},\theta^{l}_{b})\) with respect to its original training set._
Since \(\mathcal{X}_{r}\) and \(\mathcal{X}\) are not necessarily convex, we formulate NNRepLayer over a data set \(\{(x^{0}_{n},t_{n})\}_{n=1}^{N}\sim\mathcal{X}\times\mathcal{T}\cup\mathcal{X}_{r}\times\tilde{\mathcal{T}}\), where \(\tilde{\mathcal{T}}\) is the set of original target values of the inputs in \(\mathcal{X}_{r}\). The predicate \(\Psi(x^{0},y)\) defined over \(x^{0}\in\mathcal{X}_{r}\) is not necessarily compatible with the target values in \(\tilde{\mathcal{T}}\). That is, the predicate may bound the NN output over the input set \(\mathcal{X}_{r}\) such that an input \(x^{0}\in\mathcal{X}_{r}\) is not allowed to reach its target value in \(\tilde{\mathcal{T}}\). This is a natural constraint in many applications. For instance, due to safety constraints, we may not allow a NN controller to follow its original control reference for a given unsafe set of input states. For a given layer \(l\), we also define \(E(\theta^{l}_{w},\theta^{l}_{b})\) in the form of the sum of squares loss \(E(\theta^{l}_{w},\theta^{l}_{b})=\sum_{n=1}^{N}\lVert y_{n}(x^{0}_{n},\theta^{l}_{w},\theta^{l}_{b})-t_{n}\rVert_{2}^{2}\), where \(\lVert\cdot\rVert_{2}\) denotes the Euclidean norm. Here, since we only repair the weight and bias terms of the target layer \(l\), the loss term \(E\) is a function of \(\theta^{l}_{w}\) and \(\theta^{l}_{b}\). Hence, the weight and bias terms of all layers except the target layer \(l\) are fixed. We define our optimization formulation as follows.
NNRepLayer Optimization Formulation.Let \(\pi_{\theta}\) be a neural network with \(L\) hidden layers, \(\Psi(y,x^{0})\) be a predicate, and \(\{(x^{0}_{n},t_{n})\}_{n=1}^{N}\) be an input-output data set sampled from \((\mathcal{X}\times\mathcal{T})\cup(\mathcal{X}_{r}\times\tilde{\mathcal{T}})\) over the sets \(\mathcal{X}\), \(\mathcal{X}_{r}\), \(\mathcal{T}\), and \(\tilde{\mathcal{T}}\) all as defined in Problem 1. NNRepLayer minimizes the loss (1) by modifying \(\theta^{l}_{w}\) and \(\theta^{l}_{b}\) subject to the constraints (2)-(5).
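(For readability, we restate the objective and constraints referenced here; the display below is our reconstruction, a sketch assembled from the loss defined above and the constraint descriptions that follow, with \(R(\cdot)\) denoting the ReLU function.)

\[\min_{\theta^{l}_{w},\,\theta^{l}_{b}}\;E(\theta^{l}_{w},\theta^{l}_{b})=\sum_{n=1}^{N}\lVert y_{n}(x^{0}_{n},\theta^{l}_{w},\theta^{l}_{b})-t_{n}\rVert_{2}^{2} \tag{1}\]

subject to, for all samples \(n=1,\dots,N\),

\[y_{n}=\theta^{L+1}_{w}x^{L}_{n}+\theta^{L+1}_{b}, \tag{2}\]

\[x^{i}_{n}=R\left(\theta^{i}_{w}x^{i-1}_{n}+\theta^{i}_{b}\right),\quad i=l,\dots,L, \tag{3}\]

\[\Psi(y_{n},x^{0}_{n})\quad\text{for all }x^{0}_{n}\in\mathcal{X}_{r}, \tag{4}\]

\[\lVert\theta^{l}_{w}-\theta^{l,init}_{w}\rVert_{\max}\leq\delta,\qquad\lVert\theta^{l}_{b}-\theta^{l,init}_{b}\rVert_{\max}\leq\delta. \tag{5}\]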
Here, constraint (2) represents the linear forward pass of the network's last layer. Constraint (3) represents the forward pass of the hidden layers starting from layer \(l\). Except for the weight and bias terms of the \(l^{\text{th}}\) layer, i.e. \(\theta^{l}_{w}\) and \(\theta^{l}_{b}\), the weight and bias terms of the subsequent layers \(\{(\theta^{i}_{w},\theta^{i}_{b})\}_{i=l+1}^{L+1}\) are fixed. The sample values of \(x^{l-1}_{n}\) are obtained by the weighted sum of the nodes in the previous layers starting from \(x^{0}_{n}\), for all \(N\) samples \(\{n\}_{n=1}^{N}\). Each ReLU node \(x^{l}\) is formulated using the Big-M formulation [30, 31] by \(x^{l}\geq\theta^{l}_{w}x^{l-1}_{n}+\theta^{l}_{b}\), \(x^{l}\leq\left(\theta^{l}_{w}x^{l-1}_{n}+\theta^{l}_{b}\right)-lb(1-\phi)\), and \(x^{l}\leq ub\;\phi\), where \(x^{l}\in[0,\infty)\), and \(\phi\in\{0,1\}\) determines the activation status of node \(x^{l}\). The bounds \(lb,ub\in\mathbb{R}\), with \(\theta^{l}_{w}x^{l-1}_{n}+\theta^{l}_{b}\in[lb,ub]\), are known as Big-M coefficients and need to be as tight as possible to improve the performance of the MIQP solver. We used the Interval Arithmetic (IA) method [32, 14] to obtain tight bounds for the ReLU nodes (read the supplementary materials, Sec. 27, for further details on IA). Constraint (4) is a given predicate on \(y\) defined over \(x^{0}\in\mathcal{X}_{r}\). NNRepLayer addresses predicates of the form \(\bigvee_{c=1}^{C}\psi_{c}(x^{0},y)\), where \(C\) represents the number of disjunctive propositions and \(\psi_{c}\) is an affine function of \(x^{0}\) and \(y\). Finally, constraint (5) bounds the entry-wise max-norm error between the weight and bias terms \(\theta^{l}_{w}\) and \(\theta^{l}_{b}\) and the original \(\theta^{l,init}_{w}\) and \(\theta^{l,init}_{b}\) by \(\delta\). Given the quadratic loss function \(E(\theta^{l}_{w},\theta^{l}_{b})\) and the affine disjunctive forms of \(\Psi(y_{n},x^{0}_{n})\) and \(R(\theta^{i}_{w}x^{i-1}_{n}+\theta^{i}_{b})\), we solve NNRepLayer as a Mixed Integer Quadratic Program (MIQP).
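To make the construction concrete, the following is a minimal cvxpy sketch of this MIQP, assuming the repaired layer is the last hidden layer (so only the fixed linear output layer sits above it) and a simple global box predicate on the output; the function and variable names, the fixed Big-M constant, and the solver choice (a MIQP-capable backend such as Gurobi) are our assumptions, not part of the released implementation.

```python
import numpy as np
import cvxpy as cp

def repair_last_hidden_layer(x_in, t, W0, b0, Wo, bo, y_lo, y_hi, delta, M=1e3):
    """Sketch of the single-layer repair MIQP.
    x_in: (N, d) activations entering the repaired layer (lower layers fixed)
    t:    (N, m) original target outputs
    W0, b0: original weights/bias of the repaired layer (to be modified)
    Wo, bo: fixed weights/bias of the linear output layer
    y_lo, y_hi: a global box predicate Psi: y_lo <= y <= y_hi
    delta: max-norm bound on the deviation from (W0, b0)."""
    N = x_in.shape[0]
    h = W0.shape[0]                          # width of the repaired layer
    W = cp.Variable(W0.shape)                # repaired weights theta_w^l
    b = cp.Variable(h)                       # repaired bias theta_b^l
    x = cp.Variable((N, h), nonneg=True)     # post-ReLU activations x^l
    phi = cp.Variable((N, h), boolean=True)  # ReLU on/off indicators
    pre = x_in @ W.T + np.ones((N, 1)) @ cp.reshape(b, (1, h))  # pre-activations
    y = x @ Wo.T + np.ones((N, 1)) @ bo.reshape(1, -1)          # network output
    constraints = [
        x >= pre,                      # Big-M ReLU encoding of x = max(0, pre)
        x <= pre + M * (1 - phi),      # (here lb = -M, ub = M for simplicity)
        x <= M * phi,
        y >= y_lo, y <= y_hi,          # predicate Psi, constraint (4)
        cp.abs(W - W0) <= delta,       # max-norm bound, constraint (5)
        cp.abs(b - b0) <= delta,
    ]
    problem = cp.Problem(cp.Minimize(cp.sum_squares(y - t)), constraints)
    problem.solve(solver=cp.GUROBI)    # assumes a MIQP-capable solver is installed
    return W.value, b.value
```

For a deeper target layer \(l\), every subsequent ReLU layer would need the same Big-M encoding, with its (fixed) weights entering the affine expressions.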
**Theorem 1**.: _Given the predicate \(\Psi(y,x^{0})\), and the input-output data set \(\{(x^{0}_{n},t_{n})\}_{n=1}^{N}\) sampled from \((\mathcal{X}\times\mathcal{T})\cup(\mathcal{X}_{r}\times\tilde{\mathcal{T}})\) over the sets \(\mathcal{X}\), \(\mathcal{X}_{r}\), \(\mathcal{T}\), and \(\tilde{\mathcal{T}}\) as defined in Problem 1, assume that \(\theta^{l}_{w}\) and \(\theta^{l}_{b}\) are feasible solutions to (1)-(5). Then, \(\Psi(\pi_{\theta_{r}}(x^{0}_{n}),x^{0}_{n})\) is satisfied for all input samples \(x^{0}_{n}\)._
Proof.: Since the feasible solutions \(\theta^{l}_{w}\) and \(\theta^{l}_{b}\) satisfy the hard constraint (4) for the repair data set \(\{(x^{0}_{n},t_{n})\}_{n=1}^{N}\), \(\Psi(\pi_{\theta_{r}}(x^{0}_{n}),x^{0}_{n})\) is satisfied.
Given Thm. 1, the following Corollary is straightforward.
**Corollary 1**.: _Given the predicate \(\Psi(y,x^{0})\), and the input-output data set \(\{(x^{0}_{n},t_{n})\}_{n=1}^{N}\) sampled from \((\mathcal{X}\times\mathcal{T})\cup(\mathcal{X}_{r}\times\tilde{\mathcal{T}})\) over the sets \(\mathcal{X}\), \(\mathcal{X}_{r}\), \(\mathcal{T}\), and \(\tilde{\mathcal{T}}\) as defined in Problem 1, assume that \(\theta^{l^{\star}}_{w}\) and \(\theta^{l^{\star}}_{b}\) are the optimal solutions to the NNRepLayer. Then, for all input samples \(x^{0}_{n}\) from \(\{(x^{0}_{n},t_{n})\}_{n=1}^{N}\), \(\Psi(\pi_{\theta_{r}}(x^{0}_{n}),x^{0}_{n})\) is satisfied._
## 4 Evaluation
We explore the applicability of the framework in satisfying the following three types of constraints. **Global constraints** that encode global bounds on the network's output, i.e., \(y\in[y_{min},y_{max}]\). **Input-output constraints** that ensure the network's output \(y\) stays within a certain bound with respect to the network's input \(x^{0}\), i.e., \(\{\psi_{c}(x^{0},y)\leq 0\}_{c=1}^{C}\), where \(C\) is the number of constraints and \(\psi_{c}\) is an affine function of \(x^{0}\) and \(y\). Finally, **conditional constraints** that encode if-then-else constraints described as \(\{\psi_{c}(x^{0},y)\leq 0,\) if \(x^{0},y\in S_{c}\}_{c=1}^{C}\), where \(C\) specifies the number of conditions, \(\psi_{c}\) is an affine function of \(x^{0}\) and \(y\), and \(S_{c}\subseteq\mathcal{X}\times\mathcal{T}\). We designed a number of experiments to validate that our repair framework can successfully apply these constraints to the policy network. Following our motivation, all experiments were performed on the prosthetic walking gait generation task introduced in Fig. 2. Through these experiments we aim to answer the following questions: (1) Does our method enable the prosthetic device to address all three types of the aforementioned constraints? (2) Can the repaired controller be employed successfully in a real walking scenario? (3) How robust is the policy repaired through our technique against unseen constraint-violating samples?
Experimental Setup.In our experiments, we train a policy network \(\pi_{\theta}\) for controlling a prosthesis, which then undergoes the repair process to ensure compliance with the safety constraints. To this end, we first train the model using an imitation learning [33] strategy. For data collection, we conducted a study approved by the Institutional Review Board (IRB), in which we recorded the walking gait of a healthy subject without any prosthesis. Walking data were collected from three inertial measurement units (IMUs) mounted via straps to the upper leg (femur), lower leg (shin), and foot. The IMUs acquired both the angle and the angular velocity of each limb portion in the world coordinate frame at 100Hz. The ankle angle \(\alpha_{a}\) was calculated as a post-process from the foot and lower-limb IMUs. We then trained the NN to generate the ankle angle from the upper- and lower-limb IMU sensor values. More specifically, the NN model receives the angles and velocities from the upper- and lower-limb sensors (network inputs \(x^{0}\)), \(\alpha_{ul},\dot{\alpha}_{ul},\alpha_{ll},\dot{\alpha}_{ll}\), respectively, to predict the ankle angle \(\alpha_{a}\) (network output \(y\)), which is later used as the control parameter for a PD controller on the prosthesis. See Fig. 3 for a visualization of the individual sensor readings. We used a sliding window of input variables, denoted as \(dt\) (\(dt=10\) in all our experiments), to account for the temporal influence on the control parameter and to accommodate noise in the sensor readings. Therefore, the input to the network has size \(dt\times|x^{0}|\), i.e., the current and previous \(dt\) sensor readings. In all experiments, we trained a three-hidden-layer deep policy network with \(32\) ReLU nodes in each hidden layer. After the networks were fully trained, we assessed the policy for constraint violations and collected samples for NNRepLayer. We tested NNRepLayer on the last and the second-to-last layer of the policy network to satisfy the constraints with a subset of the original training data including both adversarial and non-adversarial samples. In all experiments, we used \(150\) samples in NNRepLayer and a held-out set of size \(2000\) for testing. Finally, the policies repaired to satisfy the global and input-output constraints were tested on a prosthetic device for \(10\) minutes of walking, see Fig. 5. More specifically, the same healthy subject was fitted with an ankle bypass: a carbon fiber structure molded to the lower limb and constructed such that a prosthetic ankle can be attached, allowing the able-bodied subject to walk on the prosthesis, as shown in Fig. 2. The extra weight and off-axis positioning of the device incline the individual towards slower, asymmetrical gaits that generate strides outside the original training distribution [34; 10]. The participant was then asked to walk again for \(10\) minutes to assess whether the constraints are satisfied.
To evaluate whether the repair generalizes to unseen adversarial samples, we analyze the _violation degree_, measured as the distance of the network output from the constraint set. For each experiment, we explain how this distance between the output and the constraint set is calculated. We compared our framework with retraining, fine-tuning [26; 35; 27; 7], and the patch-based repair method of [4] (REASSURE). Adversarial samples in the repair data set are hand-labeled for fine-tuning and retraining so that the target outputs satisfy the given predicates. In fine-tuning, as proposed in [26; 35], we used the collected adversarial data set to train all the parameters of the original policy by gradient descent with a small learning rate (\(10^{-4}\)). To avoid over-fitting to the adversarial data set, we trained the weights of the top layer first, and thereafter fine-tuned the remaining layers for a few epochs. The same hand-labeling
Figure 3: Prosthetic device model.
strategy is applied in retraining, except that a new policy is trained from scratch on all original training samples. In both methods, we trained the policy until all the adversarial samples in the repair data set satisfied the given predicates on the network's output. Our code is available on GitHub: [https://github.com/klmajd/NNRepLayer.git](https://github.com/klmajd/NNRepLayer.git).
### Experiments and Results
Global Constraint.The global constraint ensures that the prosthesis control, i.e., \(\alpha_{a}\), stays within a certain range and never outputs an unexpectedly large value that disturbs the user's walking balance. Additionally, the prosthetic device we utilized in these scenarios contains a parallel compliant mechanism; as such, either the human subject or the robotic controller could potentially drive the mechanism into its hard limits, potentially damaging the device. In our walking tests, we therefore specified global constraints such that the ankle angle stays within the bounds of \([-14,24]\) [deg], regardless of whether it is driven by the human or the robot. In the simulation experiments, we enforced artificially strict bounds so that the ankle angle \(\alpha_{a}\) never exceeds \(10\) [deg], which is a harder bound to satisfy. We defined the degree of violation as \(0\) if \(\alpha_{a}\in[\alpha_{a}^{min},\alpha_{a}^{max}]\), and \(\min\{|\alpha_{a}-\alpha_{a}^{max}|,|\alpha_{a}-\alpha_{a}^{min}|\}\) otherwise. As shown in Fig. 4 (a)-(b), the repaired network successfully satisfies the constraints in the originally faulty regions while maintaining the tracking performance of the controller in the unconstrained regions. Figure 4 (c) demonstrates that, after repairing the mid layer, the violation degree stays almost zero even for originally violating data points distant from the repair set. The red control signal in Fig. 5 also shows that our method successfully imposes the control bounds \([-14,24]\) in the actual prosthesis walking test.
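For reference, the violation degree just defined can be computed with a small helper like the following (a sketch; the names are ours, not from the released code):

```python
import numpy as np

def global_violation_degree(alpha_a, a_min=-14.0, a_max=24.0):
    """0 inside [a_min, a_max]; otherwise the distance to the nearest bound."""
    alpha_a = np.asarray(alpha_a, dtype=float)
    inside = (alpha_a >= a_min) & (alpha_a <= a_max)
    dist = np.minimum(np.abs(alpha_a - a_max), np.abs(alpha_a - a_min))
    return np.where(inside, 0.0, dist)
```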
Input-output Constraint.Deep neural networks, as highly non-linear function approximators, can change their outputs more rapidly than is feasible for the robotic prosthesis or for the human subject to accommodate. Therefore, we propose an additional constraint over the possible change of control actions from one time step to the next. This constraint acts both to smooth the control action in the presence of sensor noise and to reduce hard peaks and oscillations in the control action. To capture this constraint as an input-output relationship, we trained the policy network by adding the previous \(dt\) control actions \(\{\alpha_{a}(i)\}_{i=t-dt}^{t-1}\) as inputs to the policy network, along with the values of the upper- and lower-limb sensors. The imposed constraints in this example follow the form \(|\alpha_{a}(t)-\alpha_{a}(t-1)|\leq\Delta\alpha_{a}^{max}\). In the prosthetic walking tests, we bounded the control rate by \(\Delta\alpha_{a}^{max}=2\) [deg/s], and in our simulations we tested \(\Delta\alpha_{a}^{max}=1.5\) and \(\Delta\alpha_{a}^{max}=2\) [deg/s]. Our
Figure 4: Global constraint: (a) ankle angle, \(\alpha_{a}\), (b) the error between the predicted and the reference controls, (c) the violation degree vs. \(L_{2}\)-distance between the test and repair sample inputs.
Figure 5: Real prosthesis walking test results for imposing the global constraint of \([-14,24]\) to the control (shown in red) and bounding the control rate by \(2\) [deg/s] (shown in black). The color bar represents the normalized \(L_{2}\)-distance of each test input to its nearest neighbor in the repair set.
simulation results in Fig. 6 demonstrate that NNRepLayer satisfies both bounds on the control rate, which subsequently results in a smoother control output. It can also be observed that NNRepLayer successfully preserves the tracking performance of the controller. In this experiment, we defined the violation degree as \(0\) if \(|\Delta\alpha_{a}(t)|\leq\Delta\alpha_{a}^{max}\), and \(|\Delta\alpha_{a}(t)-\Delta\alpha_{a}^{max}|\) otherwise. Figure 7 (a) demonstrates that the violation degree of NNRepLayer remains almost zero as the distance of the test samples from the repair set increases. The same results are obtained in the actual prosthetic walking tests, as shown in Fig. 5; the satisfaction of the input-output constraints (black signal) is achieved even for samples that are distant from the repair set. Finally, in this experiment, applying NNRepLayer to the last layer does not even yield a feasible solution to the optimization problem (1)-(5).
Conditional Constraint.Depending on the ergonomic needs and medical history of a patient, the attending orthopedic doctor or prosthetist may identify certain body configurations that are harmful, e.g., configurations that increase the risk of osteoarthritis or musculoskeletal conditions [36, 37]. Following this rationale, we define a region \(\mathcal{S}\) of the joint-angle space that should be avoided. An example of such a region is demonstrated in Fig. 8 as a grey box \(\mathcal{S}=\{(\alpha_{ul},\alpha_{a})\mid\alpha_{ul}\in[-2,-0.5],\alpha_{a}\in[1,3]\}\) in the joint space of ankle and femur angles. To satisfy this constraint, the control should be tuned such that the joint ankle and femur angles stay out of the set \(\mathcal{S}\). This constraint can be defined as the if-then-else proposition \(\alpha_{ul}\in[-2,-0.5]\implies\left(\alpha_{a}\in(-\infty,1]\right)\lor\left(\alpha_{a}\in[3,\infty)\right)\), which can be formulated as a disjunction of linear inequalities on the network's output. For each given test input and its corresponding output \(\alpha_{a}\), the degree of violation is defined as the distance of \(\alpha_{a}\) to the boundary of the box if the pair \((\alpha_{ul},\alpha_{a})\) falls inside the forbidden box, and \(0\) otherwise. Figure 8 demonstrates the output of the new policy after repairing with NNRepLayer. As shown, our method prevents the joint ankle and femur angles from entering the unsafe region \(\mathcal{S}\). Figure 7 (b) also illustrates a low output violation degree for test input samples distant from the repair input set. Finally, we observed that repairing the last layer does not result in a feasible solution.
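For completeness, one standard mixed-integer encoding of this disjunction is the following sketch (our notation, not taken from the implementation): since the inputs are fixed data during repair, the condition \(\alpha_{ul,n}\in[-2,-0.5]\) can be checked offline for each sample \(n\), and for every sample satisfying it, a binary variable \(z_{n}\) together with a sufficiently large constant \(M\) enforces the disjunction on the output,

\[\alpha_{a,n}\leq 1+Mz_{n},\qquad\alpha_{a,n}\geq 3-M(1-z_{n}),\qquad z_{n}\in\{0,1\},\]

so that \(z_{n}=0\) activates the branch \(\alpha_{a,n}\leq 1\) and \(z_{n}=1\) activates \(\alpha_{a,n}\geq 3\).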
Comparison with Fine-tuning, Retraining, and REASSURE.In each experiment, we demonstrated the violation degree and the control signals of our method compared with fine-tuning, retraining [26, 35, 27, 7], and REASSURE [4]. While REASSURE guarantees the satisfaction of constraints in the locally repaired linear regions, we showed that this method significantly reduces the performance of the network in those regions, see Figures 4 and 6. This method cannot address the input-output constraints given the faulty samples, and it introduces 500 times more faulty samples than our technique. REASSURE also cannot accommodate the conditional constraints. Unlike REASSURE, which guarantees the
Figure 6: Input-output constraint: Ankle angles and Ankle angle rates for bounds (a) \(\Delta\alpha_{a}=2\) and (b) \(\Delta\alpha_{a}=1.5\)
Figure 7: The violation degree vs. \(L_{2}\)-distance between the test and repair sample inputs for (a) input-output constraint, and (b) conditional constraint cases.
satisfaction of constraints for the samples in the same linear region as the repaired samples, our technique only guarantees the satisfaction of constraints for the repaired samples themselves. While we empirically showed the generalizability of our technique in a local neighborhood of the repaired samples, our method does not theoretically guarantee the satisfaction of constraints for unseen adversarial samples. We propose a sound algorithm in the supplementary materials, Sec. 3, that guarantees safety for all other unseen samples. Table 1 further illustrates the success of our method in satisfying the constraints while maintaining the control performance. As shown in Table 1, retraining and NNRepLayer both perform well in maintaining a minimal absolute error and in generalizing constraint satisfaction to the unseen testing samples for the global and input-output constraints. However, the satisfaction of if-then-else constraints is challenging for retraining and fine-tuning, as the repair efficacy drops by almost \(30\%\) with these techniques. This also highlights the power of our technique in generalizing the satisfaction of conditional constraints to unseen cases. For further details on the comparison results, read the supplementary materials, Sec. 3.
## 5 Conclusion & Discussion
In this paper, we introduced an algorithm for training safe neural network controllers that satisfy a formal set of safety constraints. Our approach, NNRepLayer, performs a global optimization step in order to carry out a layer-wise repair of the neural network weights. In real-robot experiments, we have shown that the introduced methodology produces safe neural policies for a lower-leg prosthesis, satisfying a variety of constraints. We argue that this type of approach is critical for human-centric and safety-critical applications of robot learning, e.g., the next generation of assistive robotics.
Discussion.The introduced approach does not generally ensure that the constraints will be satisfied for _any_ input; instead, it guarantees this property for all data points provided at the time of repair. Hence, proper care has to be taken to ensure that the repair process involves representative samples of the variety of inputs seen in the application domain. From a computational vantage point, solving the MIQP underlying NNRepLayer is a demanding process that scales with the size of the network. In our experiments, we successfully repaired NN layers with up to 256 neurons, with the global optimization taking between a few minutes and up to 10 hours (read the supplementary materials, Sec. 3, for the detailed experimental results). Moreover, we showed in the supplementary materials, Sec. 3, that repairing randomly selected sub-nodes of a hidden layer can accurately repair the network in a much shorter time (more than 12 times faster than the full repair). Finally, our approach is limited to repairing individual layers in a network; it cannot simultaneously repair multiple layers or the entire network. Early results on iteratively repairing multiple layers are promising and will be reported in the future.
Table 1: RT: runtime; MAE: Mean Absolute Error between the repaired and the original outputs; RE: the percentage of adversarial samples that are repaired (Repair Efficacy); IB: the percentage of test samples that were originally safe but became faulty after the repair (Introduced Bugs). The metrics are averages over 50 runs.

| Method | Constraint | RT [s] | MAE | RE [%] | IB [%] |
| --- | --- | --- | --- | --- | --- |
| NNRepLayer | Global | \(233\pm 150\) | \(1.4\pm 0.11\) | \(99\pm 1\) | \(0.09\pm 0.20\) |
| NNRepLayer | Input-output | \(112\pm 122\) | \(0.5\pm 0.03\) | \(98\pm 1\) | \(0.19\pm 0.18\) |
| NNRepLayer | Conditional | \(480\pm 110\) | \(0.35\pm 0.07\) | \(93\pm 2\) | \(0.11\pm 0.26\) |
| REASSURE [4] | Global | \(14\pm 1\) | \(2.3\pm 0.78\) | \(97\pm 1\) | \(0\) |
| REASSURE [4] | Input-output | \(30\pm 8\) | \(0.6\pm 0.03\) | \(19\pm 4\) | \(85\pm 5\) |
| REASSURE [4] | Conditional | Infeasible | Infeasible | Infeasible | Infeasible |
| Fine-tune | Global | \(25\pm 13\) | \(1.2\pm 0.03\) | \(97\pm 4\) | \(0.95\pm 0.45\) |
| Fine-tune | Input-output | \(8\pm 2\) | \(0.6\pm 0.03\) | \(88\pm 2\) | \(2.47\pm 0.49\) |
| Fine-tune | Conditional | \(18\pm 3\) | \(0.7\pm 0.10\) | \(72\pm 5\) | \(0.27\pm 0.25\) |
| Retrain | Global | \(127\pm 30\) | \(1.4\pm 0.08\) | \(98\pm 3\) | \(0.65\pm 0.40\) |
| Retrain | Input-output | \(101\pm 1\) | \(0.5\pm 0.04\) | \(98\pm 1\) | \(0.28\pm 0.32\) |
| Retrain | Conditional | \(180\pm 2\) | \(0.31\pm 0.03\) | \(76\pm 2\) | \(0.12\pm 0.35\) |
Figure 8: Enforcing the conditional constraints to keep the joint femur-ankle angles out of the grey box.
#### Acknowledgments
This work was partially supported by the National Science Foundation under grants CNS-1932068, IIS-1749783, and CNS-1932189.
|
2304.11925 | Data-driven modelling of brain activity using neural networks, Diffusion
Maps, and the Koopman operator | We propose a machine-learning approach to model long-term out-of-sample
dynamics of brain activity from task-dependent fMRI data. Our approach is a
three stage one. First, we exploit Diffusion maps (DMs) to discover a set of
variables that parametrize the low-dimensional manifold on which the emergent
high-dimensional fMRI time series evolve. Then, we construct
reduced-order-models (ROMs) on the embedded manifold via two techniques:
Feedforward Neural Networks (FNNs) and the Koopman operator. Finally, for
predicting the out-of-sample long-term dynamics of brain activity in the
ambient fMRI space, we solve the pre-image problem coupling DMs with Geometric
Harmonics (GH) when using FNNs and the Koopman modes per se. For our
illustrations, we have assessed the performance of the two proposed schemes
using a benchmark fMRI dataset with recordings during a visuo-motor task. The
results suggest that just a few (for the particular task, five) non-linear
coordinates of the high-dimensional fMRI time series provide a good basis for
modelling and out-of-sample prediction of the brain activity. Furthermore, we
show that the proposed approaches outperform the one-step ahead predictions of
the naive random walk model, which, in contrast to our scheme, relies on the
knowledge of the signals in the previous time step. Importantly, we show that
the proposed Koopman operator approach provides, for any practical purposes,
equivalent results to the FNN-GH approach, thus bypassing the need to train a
non-linear map and to use GH to extrapolate predictions in the ambient fMRI
space; one can use instead the low-frequency truncation of the DMs function
space of L^2-integrable functions, to predict the entire list of coordinate
functions in the fMRI space and to solve the pre-image problem. | Ioannis K. Gallos, Daniel Lehmberg, Felix Dietrich, Constantinos Siettos | 2023-04-24T09:08:12Z | http://arxiv.org/abs/2304.11925v1 | Data-driven modelling of brain activity using neural networks, Diffusion Maps, and the Koopman operator
###### Abstract
We propose a machine-learning approach to model long-term out-of-sample dynamics of brain activity from task-dependent fMRI data. Our approach is a three-stage one. First, we exploit Diffusion Maps (DMs) to discover a set of variables that parametrize the low-dimensional manifold on which the emergent high-dimensional fMRI time series evolve. Then, we construct reduced-order models (ROMs) on the embedded manifold via two techniques: Feedforward Neural Networks (FNNs) and the Koopman operator. Finally, for predicting the out-of-sample long-term dynamics of brain activity in the ambient fMRI space, we solve the pre-image problem, coupling DMs with Geometric Harmonics (GH) when using FNNs, and using the Koopman modes per se. For our illustrations, we have assessed the performance of the two proposed schemes using a benchmark fMRI dataset with recordings during a visuo-motor task. The results suggest that just a few (for the particular task, five) non-linear coordinates of the high-dimensional fMRI time series provide a good basis for modelling and out-of-sample prediction of the brain activity. Furthermore, we show that the proposed approaches outperform the one-step-ahead predictions of the naive random walk model, which, in contrast to our scheme, relies on the knowledge of the signals at the previous time step. Importantly, we show that the proposed Koopman operator approach provides, for any practical purposes, results equivalent to the FNN-GH approach, thus bypassing the need to train a non-linear map and to use GH to extrapolate predictions in the ambient fMRI space; one can instead use the low-frequency truncation of the DMs function space of \(L^{2}\)-integrable functions to predict the entire list of coordinate functions in the fMRI space and to solve the pre-image problem.
Brain dynamics · task-fMRI · Machine learning · Diffusion maps · Geometric harmonics · Koopman operator · Reduced order models · Numerical Analysis
## 1 Introduction
Understanding and modelling the emergent brain dynamics has been a primary challenge in contemporary neuroscience (Bullmore and Sporns, 2009; Ermentrout and Terman, 2010; Jorgenson et al., 2015; Siettos and Starke, 2016;
Breakspear, 2017; Sip et al., 2023). Towards this goal, different approaches at a variety of scales have been introduced, ranging from individual neuron dynamics to the behaviour of different regions of neurons across the brain (Izhikevich, 2007; Deco et al., 2008; Bullmore and Sporns, 2012; Papo et al., 2014; Siettos and Starke, 2016; Breakspear, 2017; Sip et al., 2023). Some of them are physics/biologically-informed models, thus derived directly from first principles, such as the Hodgkin-Huxley (Hodgkin and Huxley, 1952; Nelson and Rinzel, 1998; McCormick et al., 2007; Spiliotis et al., 2022) and FitzHugh-Nagumo (FitzHugh, 1955; Nagumo et al., 1962) models, phase oscillators (Kopell and Ermentrout, 1986; Laing, 2017; Skardal and Arenas, 2020), neural mass models (Segneri et al., 2020; Taher et al., 2020), the Balloon model (Buxton et al., 1998) and Dynamic Causal Modelling (DCM) (Penny et al., 2004a) (for a review of the various dynamic models see also (Izhikevich, 2007)). On the other hand, there is the data-driven approach, exploiting a wide range of methods (see also (Heitmann et al., 2018)) extending from independent component analysis (Beckmann and Smith, 2004), to Granger-causality-based models (Seth, 2010; Seth et al., 2015; Friston et al., 2013; Protopapa et al., 2014; Kugiumtzis and Kimiskidis, 2015; Protopapa et al., 2016; Kugiumtzis et al., 2017; Almpanis and Siettos, 2020), phase-synchronization (Mormann et al., 2000; Rudrauf et al., 2006; Jirsa and Muller, 2013; Zakharova et al., 2014; Mylonas et al., 2016; Scholl, 2022) and network-based ones (Kopell and Ermentrout, 2004; Spiliotis et al., 2022; Petkoski and Jirsa, 2019). For an extended review of the various multiscale approaches, see (Siettos and Starke, 2016).
Most of the multiscale data-driven modelling approaches are "blended", i.e., they rely on EEG, MEG and fMRI data to calibrate biologically-inspired models or to build surrogate models for the emergent brain activity. Within this framework, machine learning (ML) has also been exploited to build surrogate dynamical models from neuroimaging data (Grossberg and Merrill, 1992; Niv, 2009; Richiardi et al., 2013; Suk et al., 2016; Gholami Doborjeh et al., 2018; Sun et al., 2022). However, when it comes to fMRI data, most often consisting of hundreds of millions of measurements/features across space and time, one has to confront the so-called "curse of dimensionality". Problems arising from the "curse of dimensionality" in machine learning can manifest in a variety of ways, such as the excessive sparsity of data, the multiple-comparison problem, the multi-collinearity of data, or even the poor generalization of the constructed surrogate models (Phinyomark et al., 2017; Altman and Krzywinski, 2018). To deal with these problems, methodological advances have been made involving random field theory and strict (e.g. corrected for the multiple-comparison problem) hypothesis testing to make reliable inferences in a unified framework (e.g. Statistical Parametric Mapping (SPM) (Penny et al., 2011)). To this day, this approach serves as the standard way of processing fMRI time series, relying mainly on a generalized linear model approach to infer the activity of certain voxels or regions of them. Despite the fact that SPM has been instrumental in analyzing task-related fMRI data, it at the same time imposes critical assumptions about the structure and properties of the data, thus setting limits to the problems that can be solved (Madhyastha et al., 2018).
Another effective way of dealing with problems related to the curse of dimensionality of brain signaling activity is to represent the dataset in a low-dimensional embedding space (Belkin and Niyogi, 2003; Coifman and Lafon, 2006; Ansuini et al., 2019) using non-linear manifold learning (Gallos et al., 2021). For example, Qiu et al. (2015) used manifold learning for the prediction of the dynamics of brain networks with aging, by imposing a Log-Euclidean Riemannian manifold structure on the brain and then using a framework based on Locally Linear Embedding (Roweis and Saul, 2000) to uncover the manifold. Pospelcov et al. (2021) employed both linear and non-linear dimensionality reduction techniques with the aim of discriminating resting-state fMRI recordings of subjects between the acquisition and extinction of an experimental fear condition. The computation of interclass-intraclass distances and the classification process took place in the low-dimensional space, retaining up to 10 dimensions. Non-linear manifold learning algorithms outperformed the linear ones, with Laplacian eigenmaps (Belkin and Niyogi, 2003) producing the best results. Gao et al. (2021) used Diffusion Maps to demonstrate that task-related fMRI data of different nature span a similar low-dimensional embedding. In a series of works (Gallos et al., 2021, 2021, 2022), we have applied linear and non-linear manifold learning techniques (such as Multi-Dimensional Scaling (Cox and Cox, 2008), ISOMAP (Tenenbaum et al., 2000), and Diffusion Maps (Coifman and Lafon, 2006; Nadler et al., 2006, 2008)) to construct embedded functional connectivity networks from fMRI data. All the above studies provided evidence that dimensionality reduction may lead to a better classification performance, i.e., a finer detection of biomarkers. However, the above studies focused more on the clustering/classification problem than on building dynamical models to predict the brain dynamics. Thiem et al. (2020) presented a data-driven methodology based on DMs to discover coarse variables, and ANNs and Geometric Harmonics to learn the dynamic evolution laws of different versions of the Kuramoto model.
Here, we propose a three-step machine learning approach for the construction of surrogate dynamical models from real task-fMRI data in the physical/ambient high-dimensional space, thus dealing with the curse of dimensionality by learning the dynamics of brain activity on a low-dimensional space. In particular, we first use Diffusion Maps to learn an effective non-linear low-dimensional manifold. Then, we learn and predict the embedded brain dynamics based on the set of variables that span the low-dimensional manifold. For this task, we use both FNNs and the Koopman operator framework (Mezic, 2013; Williams et al., 2015; Brunton et al., 2016; Li et al., 2017; Bollt et al.,
2018; Dietrich et al., 2020; Lehmberg et al., 2021). Finally, we solve the pre-image problem, i.e. the reconstruction of the predictions in the original ambient fMRI space, using Geometric Harmonics (GH) (Coifman and Lafon, 2006; Dsilva et al., 2013; Papaioannou et al., 2021; Evangelou et al., 2023) for the FNN predictions, and the Koopman modes per se (Li et al., 2017; Lehmberg et al., 2021) for the Koopman operator predictions.
For our illustrations, we used a benchmark fMRI dataset recorded during an attention-to-visual-motion task. The proposed approach provides out-of-sample long-term predictions, thus accurately reconstructing in the ambient space the brain activity in response to the visual/attention stimuli in the five brain regions that are known from previous studies to be the key ones for the particular task. Furthermore, we show that the Koopman-operator-based scheme provides numerical approximations with accuracy similar to the FNN-GH scheme, while bypassing both the construction of a non-linear surrogate model on the DMs and the use of GH to extrapolate predictions in the ambient fMRI space.
## 2 Benchmark dataset and Signal extraction
For our illustrations, we focus on a benchmark fMRI dataset of a single subject during a task that targets attention to visual motion. This particular dataset has served as a benchmark in many studies (Buchel and Friston, 1997; Friston et al., 2003; Penny et al., 2004b; Almpanis and Siettos, 2020) and is publicly available for download from the official SPM website [https://www.fil.ion.ucl.ac.uk/spm/data/attention/](https://www.fil.ion.ucl.ac.uk/spm/data/attention/). The images are already smoothed, spatially normalised, realigned and slice-time corrected.
In the experiment (Buchel and Friston, 1997), subjects observed a black screen displaying a few white dots. The experimental design consisted of specific epochs where the dots were either static or moving. In between these epochs, there was also an intermediate epoch where only a static picture without dots was present (i.e. a fixation phase, which is treated as the baseline). The subjects were instructed to stay focused on any change concerning the moving dots, even when no changes actually occurred in the intermediate epochs. Thus, there were four distinct experimental conditions: "fixation"; "non-attention", where the dots were moving but subjects did not need to pay attention to the screen; "attention", where subjects needed to pay attention to any existing changes regarding the movement of the dots (i.e., possible changes in velocity or acceleration); and "static", where the dots were stationary. The dataset consists of 4 runs concatenated together, where the first 10 scans of each run were discarded to eliminate non-desirable magnetic effects. Thus, the length of the fMRI dataset is 360 scans.
Here, we first parcellated the brain into 116 regions of interest (ROIs) derived by the use of Automated Anatomical Labeling (AAL) (Tzourio-Mazoyer et al., 2002) from the fMRI data. After the formulation of the ROIs by AAL, the average of the Blood Oxygen Level Dependent (BOLD) signal over all voxels located in each region was calculated at each time point. Out of all 116 ROIs, and due to the limited field of view of the fMRI acquisition, some parts of the cerebellum were excluded as there was no usable signal (i.e. there was barely any change in the BOLD signal throughout the experiment). Specifically, these regions were the right and left parts of Cerebellum_10 and Cerebellum_7b, and the left part of Cerebellum_9. The remaining 111 time series were linearly detrended and standardized for further analysis.
## 3 Methodology
Our data-driven machine-learning methodology for modelling brain activity proceeds in three main steps. First, as a zeroth step, we follow the standard procedure for fMRI data preprocessing with the general linear model to identify the voxels of statistically significant BOLD activity. Then, in the first step, we use parsimonious Diffusion Maps to identify a set of variables that parametrize the low-dimensional manifold on which the embedded fMRI BOLD time series evolve. Based on them, we construct reduced-order models (ROMs), based either on FNNs or on the Koopman operator. Finally, in the third step, we use either Geometric Harmonics (GH) (Coifman and Lafon, 2006), when we use FNNs, or Extended Dynamic Mode Decomposition (EDMD) (Williams et al., 2015), when we use the Koopman operator, to solve the pre-image problem, i.e. to provide out-of-sample predictions in the ambient phase space of the BOLD signals. In the following sections, we describe the above steps in detail. A schematic of the three-step methodology is shown in Fig. 1.
### Step 0. Data Preprocessing using the General Linear Model
For the past 20 years, the General Linear Modelling (GLM) approach has been the cornerstone of standard fMRI analysis (Friston et al., 1994, 1995; Worsley and Friston, 1995). Within the GLM framework, time series associated with BOLD signal at the voxel level are modeled as a weighted sum of one or more predictor variables (i.e. in
relationship to the experimental conditions), including an error term for the unobserved/unexplained information in the model. The ultimate goal of this approach is to determine whether (and to what extent) a predictor variable contributes to the variability observed in the data (e.g., a known pattern of stimulation induced by specific experimental settings).
Specifically, we can model the amplitude of the BOLD signal measured with fMRI in the \(i\)-th brain region as a weighted sum of known predictor variables \(\mathbf{u}_{1},...,\mathbf{u}_{p}\), each scaled by a set of parameters \(\mathbf{\beta}\):
\[\begin{array}{l}\mathbf{x_{i,1}}=\mathbf{u_{1,1}}\mathbf{\beta_{1}}+\mathbf{u_{1,2}}\mathbf{ \beta_{2}}+...\mathbf{u_{1,p}}\mathbf{\beta_{p}}+\mathbf{\epsilon_{i,1}}\\ \mathbf{x_{i,2}}=\mathbf{u_{2,1}}\mathbf{\beta_{1}}+\mathbf{u_{2,2}}\mathbf{\beta_{2}}+...\mathbf{u_{2,p}}\mathbf{\beta_{p}}+\mathbf{\epsilon_{i,2}}\\ \mathbf{...}\\ \mathbf{x_{i,N}}=\mathbf{u_{N,1}}\mathbf{\beta_{1}}+\mathbf{u_{N,2}}\mathbf{\beta_{2}}+...\mathbf{u_{N,p}}\mathbf{\beta_{p}}+\mathbf{\epsilon_{i,N}}\end{array} \tag{1}\]
A more compact notation leads to the following formula:
\[\mathbf{x_{i}}=\mathbf{U}\mathbf{\beta}+\mathbf{\epsilon}, \tag{2}\]
where \(\mathbf{x_{i}}=[\mathbf{x_{i,1}},\mathbf{x_{i,2}},\ldots,\mathbf{x_{i,N}}]^{T}\in\mathbb{R}^{N}\) is the column vector that stores the BOLD responses at all time instances, \(\mathbf{U}=\begin{bmatrix}\mathbf{u_{1,1}}&\ldots&\mathbf{u_{1,p}}\\ \vdots&\ddots&\vdots\\ \mathbf{u_{N,1}}&\ldots&\mathbf{u_{N,p}}\end{bmatrix}\in\mathbb{R}^{N\times p}\) is the design matrix, whose \(\mathbf{j}\)-th column represents the time series of the \(\mathbf{j}\)-th predictor variable, and \(\mathbf{\beta}=[\mathbf{\beta_{1}},\mathbf{\beta_{2}},\ldots,\mathbf{\beta_{p}}]^{T}\in\mathbb{R}^{p}\) is the vector of unknown parameters that set the contribution of each predictor to the BOLD signal \(\mathbf{x_{i}}\). Finally, \(\mathbf{\epsilon_{i}}=[\mathbf{\epsilon_{i,1}},\mathbf{\epsilon_{i,2}},\ldots,\mathbf{\epsilon_{i,N}}]^{T}\in\mathbb{R}^{N}\) is a vector containing the corresponding (modelling) errors.
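For the least-squares estimate \(\hat{\mathbf{\beta}}=(\mathbf{U}^{T}\mathbf{U})^{-1}\mathbf{U}^{T}\mathbf{x_{i}}\) that underlies this model, a minimal numpy sketch (our names; it ignores the serial-correlation whitening that SPM additionally applies) is:

```python
import numpy as np

def glm_fit(U, x):
    """Ordinary least-squares fit of the GLM in Eq. (2): x = U @ beta + eps."""
    beta_hat, *_ = np.linalg.lstsq(U, x, rcond=None)  # beta_hat = (U^T U)^{-1} U^T x
    residuals = x - U @ beta_hat
    return beta_hat, residuals
```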
In this study, the standard GLM approach was used within the SPM framework (Penny et al., 2011) and its toolbox extension AAL (Tzourio-Mazoyer et al., 2002). Out of the 3 available methods for automatic anatomical labelling of the global functional activation map, we used cluster labeling. This is a rational choice, since we wanted to localize the activated clusters at the level of anatomical regions. The final results report which percentage of each activated cluster belongs to a specific anatomical region, and also which percentage of the region is part of the activated cluster (Tzourio-Mazoyer et al., 2002).
Here, the experimental conditions entering the design matrix are the following: "fixation", "static", "attention", "non-attention". We considered as activated clusters 10 or more nearby voxels that passed the contrast of "attention", "non-attention" and "static" vs
Figure 1: Schematic of the proposed machine learning based methodology for modelling and predicting the dynamics of brain activity from task-depended fMRI data.
"fixation" which is here treated as a low level baseline (Buchel et al., 1998). We considered two different thresholds, one of \(\mathbf{p<0.05}\) corrected for a family wise error (FWE) and another more liberal threshold of \(\mathbf{p<0.01}\) (uncorrected). The analysis was restricted only on voxels included in the predefined regions of AAL (eg. by using the atlas as an inclusive mask). In this way, we wanted to determine the anatomical regions that are actually "active" throughout the experiment in respect with any of the experimental variables other than fixation (e.g., when either when subject is paying or not paying attention to moving or static dots).
### Step 1. Identification of an Embedded Manifold in the fMRI BOLD Signals via Parsimonious Diffusion Maps
Diffusion maps is a non-linear dimensionality reduction/manifold learning algorithm introduced by Coifman and Lafon (2006a) that exploits the inherent relationship between heat diffusion and random-walk Markov chains. The main idea is that, jumping from one data point to another in a random-walk regime, it is more likely to jump to nearby points than to points that are further away. The algorithm produces a mapping of the dataset in the Euclidean space, whose coordinates are computed utilizing the eigenvalues and the eigenvectors of a diffusion operator on the data. An embedding in the low-dimensional diffusion maps space is obtained by the projections on the eigenvectors of a normalized graph Laplacian (Belkin and Niyogi, 2003). Specifically, given a dataset of \(\mathbf{N}\) points, \(\mathbf{X}=[\mathbf{x_{1}}\quad\mathbf{x_{2}}\quad\mathbf{\dots}\quad\mathbf{x_{N}}]\), where \(\mathbf{x_{i}}\in\mathbb{R}^{M}\) is the column vector containing the BOLD signals of the \(\mathbf{M}\) brain regions at the \(\mathbf{i}\)-th of the \(\mathbf{N}\) time instances, we construct an affinity matrix \(\mathbf{W}\in\mathbb{R}^{\mathbf{N}\times\mathbf{N}}\), defined by a kernel, i.e., any square-integrable function \(\mathbf{w}\in\mathbf{L^{2}}:\mathbf{X}\times\mathbf{X}\rightarrow\mathbb{R}^{+}_{0}\), between all pairs of points \(\mathbf{x_{i}}\) and \(\mathbf{x_{j}}\). The affinity matrix is most often computed through the use of the so-called Gaussian heat kernel with elements:
\[\mathbf{w_{i,j}=\exp\left(-\frac{||\mathbf{x_{i}}-\mathbf{x_{j}}||_{2}^{2}}{2\mathbf{\sigma}} \right), \tag{3}\]
where \(||.||_{2}\) denotes the Euclidean \(\mathbf{L_{2}}\) norm (any metric could be used) between two points, and \(\mathbf{\sigma}\) serves as a scale parameter of the kernel. The Gaussian heat kernel in Eq. 3 produces a symmetric and positive semi-definite affinity matrix \(\mathbf{W}\). This property is fundamental, as it allows the weights to be interpreted as scaled probabilities. We thus formulate the diagonal normalization matrix \(\mathbf{K}\) with elements
\[\mathbf{k_{ii}}=\sum_{\mathbf{j=1}}^{\mathbf{N}}\mathbf{w_{ij}}. \tag{4}\]
Here, a family of anisotropic diffusions can be parameterized by adding a parameter \(\mathbf{\alpha}\), which controls the amount of influence of the density of the data. This constant can be used to normalize the affinity matrix \(\mathbf{W}\) as:
\[\tilde{\mathbf{W}}=\mathbf{K}^{-\mathbf{\alpha}}\mathbf{W}\mathbf{K}^{-\mathbf{\alpha}} \tag{5}\]
For \(\mathbf{\alpha=0}\), we get the normalized graph Laplacian, for \(\mathbf{\alpha=0.5}\), the Fokker-Planck diffusion, and for \(\mathbf{\alpha=1}\), the Laplace-Beltrami operator (Thiem et al., 2020).
Next, we renormalize the matrix \(\tilde{\mathbf{W}}\) with the diagonal matrix \(\mathbf{\tilde{K}}\), with elements \(\mathbf{\tilde{k}_{ii}=\sum_{\mathbf{j=1}}^{\mathbf{N}}\tilde{\mathbf{w}}_{ij}}\), to obtain the Markovian row-stochastic matrix \(\mathbf{P}\in\mathbb{R}^{\mathbf{N}\times\mathbf{N}}\), also called the diffusion matrix, given by:
\[\mathbf{P=\tilde{K}^{-1}\tilde{\mathbf{W}}.} \tag{6}\]
At this stage, the elements of the diffusion matrix can be seen as the scaled probabilities of jumping from one point to another in a random-walk sense. Consequently, taking the power \(\mathbf{t}\) of the diffusion matrix \(\mathbf{P}\) is essentially identical to observing \(\mathbf{t}\) steps forward of a Markov chain process on the data points. The element \(\mathbf{P^{t}(\mathbf{x_{i}},\mathbf{x_{j}})}\) denotes the transition probability of jumping from point \(\mathbf{x_{i}}\) to point \(\mathbf{x_{j}}\) in \(\mathbf{t}\) steps.
Applying singular value decomposition (SVD) on \(\mathbf{P}\), we get:
\[\mathbf{P=\Psi\ \Lambda\Psi^{T},} \tag{7}\]
with \(\mathbf{\Lambda}\) being the diagonal matrix that stores the \(\mathbf{N}\) eigenvalues and \(\mathbf{\Psi}\in\mathbb{R}^{\mathbf{N}\times\mathbf{N}}\) the eigenvectors of \(\mathbf{P}\).
These eigenvectors are the discrete approximations of the eigenfunctions of the kernel operator on the manifold corresponding to the parameter \(\mathbf{\alpha}\) (see above). Such eigenfunctions provide an orthonormal basis of the \(\mathbf{L^{2}}\) space, i.e. of the space of square-integrable functions (Coifman and Lafon, 2006a).
The eigenvalues corresponding to the eigenvectors, in descending order, are \(\mathbf{\lambda_{0}=1\geq\lambda_{1}\geq\lambda_{2}\geq\dots\geq\lambda_{k}}\), the first being the trivial eigenvalue equal to 1 (\(\mathbf{P}\) is a Markovian matrix) and \(\mathbf{k}\) being the target embedding
dimension. The embedding is obtained by mapping the points from the original/ambient space to the diffusion maps space while preserving the diffusion distance among all data points in the dataset (Nadler et al., 2006). Since the diffusion distance takes into account all possible paths between data points, it is more robust against noise perturbations (Coifman and Lafon, 2006a) (compared, for example, with the geodesic distance).
The coordinates of the nonlinear projection of a point \(\mathbf{x}_{i}\) in the \(\mathbf{k}\) dimensional space spanned by the first \(\mathbf{k}\) DMs \(\{\mathbf{\psi_{1}},\mathbf{\psi_{2}},\dots,\mathbf{\psi_{k}}\}\) read:
\[\mathbf{\mathcal{R}}(\mathbf{x}_{i})\equiv\mathbf{y}_{i}=[\mathbf{y}_{i,1},\mathbf{y}_{i,2},\dots,\mathbf{y}_{i,k}]^{T}=[\mathbf{\lambda}_{1}^{t}\mathbf{\psi}_{i,1},\mathbf{\lambda}_{2}^{t}\mathbf{\psi}_{i,2},\dots,\mathbf{\lambda}_{k}^{t}\mathbf{\psi}_{i,k}]^{T},\quad\mathbf{i}=\mathbf{1},\mathbf{2},\dots,\mathbf{N}, \tag{8}\]
where \(\mathbf{\psi}_{i,\mathbf{l}}\) is the \(\mathbf{i}\)-th element of \(\mathbf{\psi_{l}}\), \(\mathbf{l}=\mathbf{1},\mathbf{2},\dots,\mathbf{k}\). For all practical purposes, the embedding dimension is determined by the spectral gap in the eigenvalues of the final decomposition. A numerical gap (or a sharp decrease) between the first few eigenvalues and the rest of the eigenspectrum indicates that a few eigenvectors are adequate for the approximation of the diffusion distance between all pairs of points (Coifman and Lafon, 2006a). A compact version of the algorithm is the following (a minimal numerical sketch is given after the list):
1. **Input:** linearly detrended data set \(\mathbf{X}=[\mathbf{x}_{1}\quad\mathbf{x}_{2}\quad\dots\quad\mathbf{x}_{N}]\), \(\mathbf{x}_{i}\in\mathbb{R}^{M}\).
2. Compute the Gaussian heat kernel with elements \(\mathbf{w}_{i,\mathbf{j}}=\mathbf{exp}\left(-\frac{||\mathbf{x}_{i}-\mathbf{x}_{j}||_{ 2}^{2}}{2\mathbf{\sigma}}\right),\mathbf{i},\mathbf{j}=\mathbf{1},\mathbf{2},\dots,\mathbf{N}\).
3. Apply the normalization on \(\mathbf{W}\) (Eq. 5) and renormalization (Eq. 6) to compute the diffusion matrix \(\mathbf{P}\);
4. Apply SVD on matrix \(\mathbf{P}\) (Eq. 7) and keep the first \(\mathbf{k}\) eigenvectors corresponding to the \(\mathbf{k}\) largest eigenvalues.
5. **Output:**\(\mathbf{\Psi}_{\mathbf{k}}=[\mathbf{\psi_{1}}\quad\mathbf{\psi_{2}}\quad\dots\mathbf{\psi}_{k}] \in\mathbb{R}^{N\times\mathbf{k}}\), \(\mathbf{y}_{i}=[\mathbf{y}_{i,1},\mathbf{y}_{i,2},\dots,\mathbf{y}_{i,k}]^{T}\), \(\mathbf{i}=\mathbf{1},\mathbf{2},\dots,\mathbf{N}\) (see Eq.(8).
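For concreteness, the following is a minimal numpy sketch of the steps above (with \(\mathbf{\alpha}=1\) and \(\mathbf{t}=1\); function and variable names are ours); production-grade implementations exist, e.g., in the datafold package (Lehmberg et al., 2021).

```python
import numpy as np
from scipy.spatial.distance import cdist

def diffusion_maps(X, k=5, sigma=1.0, alpha=1.0):
    """X: (M, N) data with one column x_i per point; returns the k leading
    non-trivial eigenvalues and eigenvectors of the diffusion matrix P."""
    D2 = cdist(X.T, X.T, "sqeuclidean")               # pairwise squared distances
    W = np.exp(-D2 / (2.0 * sigma))                   # Gaussian heat kernel, Eq. (3)
    K = W.sum(axis=1)                                 # diagonal of K, Eq. (4)
    W_tilde = W / np.outer(K**alpha, K**alpha)        # anisotropic norm., Eq. (5)
    P = W_tilde / W_tilde.sum(axis=1, keepdims=True)  # row-stochastic P, Eq. (6)
    lam, psi = np.linalg.eig(P)                       # spectral decomposition, Eq. (7)
    order = np.argsort(-lam.real)[: k + 1]            # sort by decreasing eigenvalue
    lam, psi = lam.real[order], psi.real[:, order]
    return lam[1:], psi[:, 1:]                        # drop the trivial pair lam_0 = 1

# Embedding coordinates per Eq. (8): y_i = (lam ** t) * psi[i, :]
```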
Next, from the initial set of eigenvectors, we retain only the \(\mathbf{d}\) parsimonious eigendimensions as proposed in (Dsilva et al., 2018). Specifically, using a local linear function,
\[\mathbf{\psi}_{i,\mathbf{l}}\approx\mathbf{c}_{i,\mathbf{l}}+\mathbf{\beta}_{i,\mathbf{l}}^{T}\mathbf{ \Psi}_{i,\mathbf{l}-1},\quad\mathbf{\Psi}_{i,\mathbf{l}-1}=[\mathbf{\psi}_{i,1},...,\mathbf{\psi} _{i,\mathbf{l}-1}]^{T}\in\mathbb{R}^{\mathbf{l}-1}, \tag{9}\]
with \(\mathbf{c}_{i,\mathbf{l}}\in\mathbb{R},\mathbf{\beta}_{i,\mathbf{l}}\in\mathbb{R}^{\mathbf{l}-1}\), we approximate each element of \(\mathbf{\psi}_{\mathbf{l}}\) based on the preceding coordinates \(\mathbf{\Psi}_{i,\mathbf{l}-1}\). This leads to the following optimization problem (Thiem et al., 2020):
\[\underset{\mathbf{c},\mathbf{\beta}}{\text{argmin}}\,\sum_{i\neq j}\mathbf{K}\Big{(}\mathbf{ \Psi}_{i,\mathbf{l}-1},\mathbf{\Psi}_{j,\mathbf{l}-1}\Big{)}\Big{(}\mathbf{\psi}_{j,\mathbf{l}}- \left(\mathbf{c}+\mathbf{\beta}^{T}\mathbf{\Psi}_{j,\mathbf{l}-1}\right)\Big{)}^{2}, \tag{10}\]
where \(\mathbf{K}\) is the Gaussian kernel. Finally, the normalized error \(\mathbf{c}\mathbf{r}_{\mathbf{l}}\) is computed through leave-one-out cross validation for each local linear fit. The normalized error is defined as:
\[\mathbf{c}\mathbf{r}_{\mathbf{l}}=\sqrt{\frac{\sum_{i=1}^{N}\left(\mathbf{\psi}_{i,\mathbf{l}}-\left(\mathbf{c}_{i,\mathbf{l}}+\mathbf{\beta}_{i,\mathbf{l}}^{T}\mathbf{\Psi}_{i,\mathbf{l}-1}\right)\right)^{2}}{\sum_{i=1}^{N}(\mathbf{\psi}_{i,\mathbf{l}})^{2}}}. \tag{11}\]
A small or negligible error \(\mathbf{c}\mathbf{r}_{\mathbf{l}}\) denotes that \(\mathbf{\psi}_{\mathbf{l}}\) can actually be predicted from the preceding eigenvectors \(\mathbf{\psi_{1}},\mathbf{\psi_{2}},...,\mathbf{\psi}_{\mathbf{l}-1}\) and is thus a repeated eigendirection (i.e., \(\mathbf{\psi}_{\mathbf{l}}\) is a harmonic of the previous eigenmodes). Therefore, only the eigenvectors that exhibit a large \(\mathbf{c}\mathbf{r}_{\mathbf{l}}\) are selected, so as to seek the most parsimonious representation. More information regarding the choice of the most parsimonious eigenvectors can be found in (Dsilva et al., 2018), (Lee et al., 2020) and (Galaris et al., 2022).
A compact pseudocode for identifying the parsimonious eigenvectors is the following (a minimal code sketch is given after the list):
1. **Input:** Set of \(\mathbf{\Psi}_{\mathbf{k}}\in\mathbb{R}^{N\times\mathbf{k}}\), \(\mathbf{k}<<\mathbf{M}\) corresponding to the largest eigenvalues from the application of Diffusion Maps and \(\mathbf{d}\) the number of parsimonious eigenvectors to retain.
2. Solve the optimization problem given by Eq.(10) and calculate the normalized error \(\mathbf{c}\mathbf{r}_{\mathbf{l}}\) given by Eq.(11).
3. **Output:**\(\mathbf{\Psi^{\prime}}_{\mathbf{d}}\in\mathbb{R}^{N\times\mathbf{d}}\), with columns the parsimonious DMs \(\mathbf{\psi}_{\mathbf{j}}^{\prime}\in\mathbb{R}^{N},\mathbf{j}=\mathbf{1},\mathbf{2},\dots,\mathbf{d}\) corresponding to the \(\mathbf{d}\) largest \(\mathbf{c}\mathbf{r}_{\mathbf{l}}\)s.
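The following is a minimal numpy sketch of this criterion (our variable and function names; the kernel scale \(\epsilon\) is an assumed hyperparameter), computing the leave-one-out normalized errors of Eq. (11):

```python
import numpy as np

def local_linear_residuals(Psi, eps=1.0):
    """Psi: (N, k) leading non-trivial DM eigenvectors; returns cr_l of Eq. (11)."""
    N, k = Psi.shape
    r = np.zeros(k)
    r[0] = 1.0  # the first non-trivial eigenvector is retained by convention
    for l in range(1, k):
        A = np.hstack([np.ones((N, 1)), Psi[:, :l]])  # intercept + Psi_{l-1}
        target = Psi[:, l]
        preds = np.zeros(N)
        for i in range(N):
            d2 = np.sum((Psi[:, :l] - Psi[i, :l]) ** 2, axis=1)
            w = np.exp(-d2 / eps)                     # Gaussian kernel weights
            w[i] = 0.0                                # leave sample i out
            WA = A * w[:, None]                       # weighted design matrix
            coef, *_ = np.linalg.lstsq(WA.T @ A, WA.T @ target, rcond=None)
            preds[i] = A[i] @ coef                    # c_{i,l} + beta^T Psi_{i,l-1}
        r[l] = np.sqrt(np.sum((target - preds) ** 2) / np.sum(target ** 2))
    return r  # retain the d coordinates with the largest cr_l
```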
### Step 2. Construction of Diffusion Maps-based Reduced Order Models
After the reduction of the dimension of the space to a set of \(d\) parsimonious DM components, we proceed to the construction of reduced order (regression) models (ROMs).
Here, we have used two approaches for the construction of ROMs and for lifting back to the original ambient space: (a) via FNNs and Geometric Harmonics, and (b) via the Koopman modes/EDMD. In both cases, the general form of a ROM reads:
\[\boldsymbol{\psi^{\prime}}_{i+1,\boldsymbol{l}}=\boldsymbol{f_{l}}(\boldsymbol{\Psi^{\prime}_{i,d}},\boldsymbol{I_{i,s}},\mathbf{b})+\boldsymbol{\epsilon_{i,l}},\quad\boldsymbol{l=1,2,\ldots,d},\;\boldsymbol{i=1,2,\ldots,N-1}, \tag{12}\]
where \(\boldsymbol{\Psi^{\prime}_{i,d}}=[\boldsymbol{\psi^{\prime}_{i,1}},\ldots,\boldsymbol{\psi^{\prime}_{i,l}},\ldots,\boldsymbol{\psi^{\prime}_{i,d}}]^{T}\in\mathbb{R}^{d}\) is the vector containing the \(\boldsymbol{i}\)-th elements of the \(\boldsymbol{d}\) parsimonious DMs (\(\boldsymbol{\psi^{\prime}}_{i,l}\) denotes the \(\boldsymbol{i}\)-th element of the \(\boldsymbol{l}\)-th parsimonious DM), \(\boldsymbol{b}\) denotes a set of parameters of the model, \(\boldsymbol{\mathbf{I_{i,s}}}\) denotes the external stimuli \(\boldsymbol{s}\) indicating the experimental condition at time instant \(\boldsymbol{i}\), and \(\boldsymbol{\epsilon_{i,l}}\) is the modelling error. We note that, after training, predictions were made iteratively: we set the initial conditions at the last point of the training set and then iterated the ROMs to produce the long-term predictions.
In order to compare the efficiency of the two approaches, we also applied a simple naive random walk (NRW) model in a direct way for one-step-ahead predictions: predictions at time \(\boldsymbol{i+1}\) are equal to the last observed values, i.e., \(\boldsymbol{\psi^{\prime}_{i+1,l}}=\boldsymbol{\psi^{\prime}_{i,l}}\).
Thus, in contrast to the predictions via the embedded ROMs, which are produced iteratively, i.e. without any knowledge of previous values other than the initial conditions, the predictions made via the NRW model are based on the knowledge of the values at the previous points of the test set.
#### 3.3.1 Reduced order models with FNNs
As a first reduced order modelling approach on the discovered manifold \(\boldsymbol{\mathcal{M}}\), we used \(\boldsymbol{d}\) FNNs, one for each of the embedded coordinates \(\boldsymbol{\psi^{\prime}_{l}}\), \(\boldsymbol{l=1,2,\ldots,d}\), each with one hidden layer of \(\boldsymbol{H}\) units and a linear output layer, which can be written compactly as

\[\boldsymbol{\psi^{\prime}}_{i+1,\boldsymbol{l}}=\boldsymbol{\mathbf{w^{T}_{o,l}}}\mathbf{S}(\boldsymbol{\mathbf{W^{T}_{1,l}}}\boldsymbol{\mathbf{\Psi^{\prime}_{i,d}}}+\mathbf{b_{1,l}})+\boldsymbol{b_{o,l}},\quad\boldsymbol{l=1,2,\ldots,d}. \tag{13}\]

\(\mathbf{S}(\boldsymbol{\cdot})\) is the activation function (based on the above formulation, it is assumed to be the same for all networks and all nodes in the hidden layer), \(\mathbf{b_{1,l}}\in\mathbb{R}^{\boldsymbol{H}}\) is the vector containing the biases of the nodes of the hidden layer, \(\mathbf{W_{1,l}}\in\mathbb{R}^{\boldsymbol{d}\times\boldsymbol{H}}\) is the matrix containing the weights between the input and the hidden layer, \(\mathbf{w_{o,l}}\in\mathbb{R}^{\boldsymbol{H}}\) denotes the vector containing the weights between the hidden layer and the output layer, and finally \(\boldsymbol{b_{o,l}}\) is the bias of the output.
The total number of input units was matched to the embedding coordinates of the manifold plus the external stimuli \(\boldsymbol{\mathbf{I}}\) (i.e., \(\boldsymbol{d}\) embedding coordinates plus 4 external stimuli: attention, non-attention, static and fixation), while there was one output unit. We used the logistic transfer function \(\boldsymbol{S(x)}=\frac{\boldsymbol{1}}{\boldsymbol{1+e}^{-x}}\) as the activation function for all neurons in the hidden layer. A weight-decay parameter was also used for regularization, to prevent over-fitting and improve the generalization (Krogh and Hertz, 1992) of the final model. The neural networks were trained on the first 280 time points (roughly 77% of the data points) using a repeated (10 times) 10-fold cross-validation approach. Hyperparameters such as the number of neurons \(\boldsymbol{H}\) and the weight-decay parameter \(\boldsymbol{c}\) were optimized via grid search inside the cross-validation procedure. Finally, we optimized five final models (i.e., one model for each of the selected parsimonious eigenvectors) to make one-step predictions in the following way: we give \(\boldsymbol{\Psi^{\prime}_{i,d}}\) as input to the FNN in order to predict \(\boldsymbol{\psi^{\prime}_{i+1,l}}\). Using the best candidate model for each of the parsimonious components \(\boldsymbol{\psi^{\prime}_{l}}\), we predicted iteratively the left-out/unseen test points (i.e., the next 80 unseen time points).
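As an illustration, a minimal sketch of the one-step-ahead training and the iterative rollout, using scikit-learn's MLPRegressor as a stand-in for the networks described above (hyperparameter values are placeholders, not the tuned ones):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_roms(Psi_train, I_train, H=10, c=1e-3):
    """One FNN per parsimonious coordinate, trained one-step-ahead.
    Psi_train: (N, d) embedded coordinates; I_train: (N, 4) stimulus indicators."""
    X = np.hstack([Psi_train[:-1], I_train[:-1]])        # state + stimuli at time i
    models = []
    for l in range(Psi_train.shape[1]):
        net = MLPRegressor(hidden_layer_sizes=(H,), activation="logistic",
                           alpha=c, max_iter=5000)       # alpha: L2 regularization
        models.append(net.fit(X, Psi_train[1:, l]))      # target: coordinate at i+1
    return models

def rollout(models, psi0, I_future):
    """Iterate the trained ROMs from the initial condition psi0."""
    psi, out = np.asarray(psi0, dtype=float), []
    for I_i in I_future:
        x = np.hstack([psi, I_i]).reshape(1, -1)
        psi = np.array([m.predict(x)[0] for m in models])
        out.append(psi)
    return np.array(out)
```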
#### 3.3.2 Reduced order models with the Koopman operator
In the Koopman operator framework, predictions are performed in the function space over the data manifold, and so state prediction turns into coordinate function prediction (Mezic, 2005; Budisic et al., 2012; Dietrich et al., 2020). The defining property in this framework is that the operator always acts linearly on its domain, which makes it amenable to spectral analysis and approximation techniques from numerical linear algebra. Given the flow map \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) of the embedded dynamics on the manifold, such that \(\boldsymbol{\Psi}^{\prime}_{i+1,d}=f(\boldsymbol{\Psi}^{\prime}_{i,d})\), the Koopman operator \(U:\mathcal{F}\rightarrow\mathcal{F}\) acts on observables \(g\in\mathcal{F}\) by composition with \(f\), such that
\[[Ug]\left(\boldsymbol{\Psi}^{\prime}_{i,d}\right)=g(f(\boldsymbol{\Psi}^{\prime}_{i,d}))=g(\boldsymbol{\Psi}^{\prime}_{i+1,d}). \tag{14}\]
If a function \(\phi_{j}\) is an eigenfunction of \(U\) with eigenvalue \(\omega_{j}\), then \([U\phi_{j}]\left(\boldsymbol{\Psi}^{\prime}_{i,d}\right)=\phi_{j}(\boldsymbol{\Psi}^{\prime}_{i+1,d})=\omega_{j}\phi_{j}(\boldsymbol{\Psi}^{\prime}_{i,d})\). For many systems, eigenfunctions of the Koopman operator span a large subspace in \(\mathcal{F}\), so that many observables \(g\in\mathcal{F}\) can be written as a linear combination of eigenfunctions, \(g=\sum_{j}c_{j}\phi_{j}\), with coefficients \(c_{j}\in\mathbb{C}\). These coefficients are often called "Koopman modes" of the observable \(g\).
The most prevalent numerical algorithm to approximate the Koopman operator is dynamic mode decomposition (DMD) (Schmid, 2010, 2022). It approximates a linear map between the given observables of a system and their
future state, assuming that the observables span a function subspace invariant to the dynamics. In our case, we apply DMD to five of the embedding diffusion map coordinates \(\mathbf{\Psi^{\prime}_{d}}\) (this can be interpreted as a vector-valued observable, i.e., the Koopman operator is acting on five coordinates simultaneously (Budisic et al., 2012)).
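As a minimal illustration (our own sketch, with assumed variable names), DMD on the embedded coordinates amounts to a least-squares fit of a linear one-step map between the time-shifted coordinate matrices:

```python
import numpy as np

def dmd_on_dm_coordinates(Psi):
    """Psi is N x d: rows are the d parsimonious DM coordinates at successive
    time steps. Fit A such that Psi[i+1] ~ A @ Psi[i] in the least-squares sense."""
    X_minus, X_plus = Psi[:-1].T, Psi[1:].T        # d x (N-1) shifted matrices
    A = X_plus @ np.linalg.pinv(X_minus)           # pseudo-inverse least-squares fit
    eigvals, eigvecs = np.linalg.eig(A)            # DMD eigenvalues / modes of A
    return A, eigvals, eigvecs
```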
### Step 3. Predictions in the Ambient fMRI Space: Solution of the Pre-Image Problem
The final step is to "lift" the predictions back to the high-dimensional ambient space (here a 111-dimensional space of brain regions), i.e., to solve the so-called "pre-image" problem.
For linear manifold learning methods such as Principal Component Analysis (PCA), this is trivial. However, for non-linear manifold learning algorithms such as DMs, there is no explicit inverse mapping and no unique solution (see Chiavazzo et al. (2014); Papaioannou et al. (2021); Evangelou et al. (2023)), and one has to construct a lifting operator \(\mathcal{L}:\mathcal{R}(\mathbf{X})\rightarrow\mathbf{X}\) for new unseen samples \(\mathbf{y}_{\star}\notin\mathcal{R}(\mathbf{X})\) on the manifold.
The problem regarding an unseen data point is often referred to as the "out-of-sample extension" problem. While this problem traditionally addresses the direct problem (i.e., the extension of a function \(\mathbf{g}\) on the ambient space for new unseen data points \(\mathbf{x_{\star}}\notin\mathbf{X}\)), here, we are interested in the inverse problem (i.e., the lifting of predictions made on the manifold back to the ambient space). Here, as discussed, the out-of-sample solution of the pre-image problem is solved using Geometric Harmonics when using FNNs for ROMs, or with Extended Dynamics Mode Decomposition when using the Koopman operator. Below, we present both methods.
#### 3.4.1 Geometric Harmonics
Typically, the out-of-sample extension problem is solved through the Nystrom extension methodology (Nystrom, 1929) derived from the solution of the Fredholm equation of the second kind:
\[\mathbf{f}(\mathbf{t})=\mathbf{g}(\mathbf{t})+\mathbf{\mu}\int_{\mathbf{a}}^{\mathbf{b}}\mathbf{k}(\mathbf{t},\mathbf{s })\mathbf{f}(\mathbf{s})\mathbf{ds}, \tag{15}\]
where \(\mathbf{k}(\mathbf{t},\mathbf{s})\) and \(\mathbf{g}(\mathbf{t})\) are known functions while \(\mathbf{f}(\mathbf{t})\) is the unknown. The Nystrom method approximates the integral
\[\int_{a}^{b}y(s)\,ds\approx\sum_{j=1}^{N}w_{j}\,y(s_{j}), \tag{16}\]
where \(N\) is the number of collocation points and \(w_{j}\) are the weights determined by, for example, the Gauss-Jacobi quadrature rule. Using Eq. (16) in Eq. (15) and evaluating \(f\) and \(g\) at the \(N\) collocation points, we can make the following approximation:
\[(\mathbf{I}-\mathbf{\mu}\mathbf{\tilde{K}})\mathbf{\hat{f}}=\mathbf{g}, \tag{17}\]
where \(\tilde{K}=[\tilde{k}_{ij}]\), \(\tilde{k}_{ij}=k(s_{i},s_{j})w_{j}\). The solution to the homogeneous Fredholm problem (where \(g=0\)) is provided through the solution of the eigenvalue problem:
\[\mathbf{\tilde{K}}\mathbf{\hat{f}}=\frac{\mathbf{1}}{\mathbf{\mu}}\mathbf{\hat{f}}, \tag{18}\]
i.e.,
\[\sum_{j=1}^{N}w_{j}\,k(s_{i},s_{j})\hat{f}_{j}=\frac{1}{\mu}\hat{f}_{i},\qquad i=1,2,\ldots,N, \tag{19}\]
where \(\hat{f}_{i}=\hat{f}(s_{i})\) is the \(i\)-th component of \(\hat{f}\). The Nystrom extension in the full domain for \(N\) collocation points at a point \(x\) is given by:
\[\mathcal{E}(f(x))=\hat{f}(x)=\mu\sum_{j=1}^{N}w_{j}\,k(x,s_{j})\hat{f}_{j}. \tag{20}\]
Eq. (20) above provides a map from the high-dimensional space to the reduced-order space and back.
Since the eigenvectors of the DMs form a basis for the embedded manifold, the extension is formulated (Coifman and Lafon, 2006) by the expansion of \(\mathbf{f}(\mathbf{x_{i}})\) using the first \(\mathbf{d}\) parsimonious eigenvectors of the diffusion matrix \(\mathbf{P^{t}}\):
\[\hat{f}(x_{i})=\sum_{l=1}^{d}a_{l}\,\psi^{\prime}_{l}(x_{i}),\qquad i=1,2,\ldots,N, \tag{21}\]
where \(a_{l}=\left\langle\psi^{\prime}_{l},\mathbf{f}\right\rangle\) (\(\left\langle\cdot,\cdot\right\rangle\) denotes the inner product) and \(\mathbf{f}\in\mathbb{R}^{N}\) is the vector containing the values of the function \(f\) at the \(N\) points. Thus, the Nystrom extension of \(f\) at a new unseen point \(x_{i}^{\star}\), using the same coefficients \(a_{l}\), reads:
\[\mathcal{E}(\hat{\mathbf{f}}(\mathbf{x_{i}^{\star}}))=\sum_{l=1}^{d}\mathbf{a_{l} }\hat{\psi_{l}}(\mathbf{x_{i}^{\star}}), \tag{22}\]
where,
\[\hat{\psi}_{l}(\mathbf{x_{i}^{\star}})=\frac{1}{\lambda_{l}}\sum_{j=1}^{N}k(\mathbf{x_{i}^{\star}},\mathbf{x_{j}})\,\psi^{\prime}_{l}(\mathbf{x_{j}}),\qquad l=1,2,\ldots,d, \tag{23}\]
are the geometric harmonics extension of each one of the parsimonious eigenvectors to the unseen data. Thus, for a new set \(\mathbf{X^{\star}}\) of \(\mathbf{n}\) points we get:
\[\mathcal{E}(\hat{\mathbf{f}})=\mathbf{K}_{n\times N}\Psi^{\prime}{}_{N \times d}\mathbf{\Lambda}_{d\times d}^{-1}\Psi^{\prime}{}_{d\times N}^{T}\mathbf{ f}_{N\times 1}, \tag{24}\]
where \(\mathbf{K}\) is the \(n\times N\) kernel matrix and \(\mathbf{\Lambda}\) is the \(d\times d\) diagonal matrix that stores the eigenvalues corresponding to the \(d\) parsimonious eigenvectors.
Here, for the solution of the pre-image problem, we used the approach of "double Diffusion maps" (Evangelou et al., 2023; Papaioannou et al., 2021; Patsatzis et al., 2023). A pseudocode for the solution of the pre-image problem is given below (a Python sketch of the lifting step follows the pseudocode):
1. **Input:**\(\Psi^{\prime}_{d}\in\mathbb{R}^{N\times d}\), the matrix whose columns are the \(d\) parsimonious diffusion maps (from Section 3.2).
2. For each new point \(\{\mathbf{y_{i}^{\star}}\}_{i=1}^{n}\) in the DMs embedded space computed using the Nystrom extension (Eq. (22)), compute the kernel matrix \(\mathbf{K}\in\mathbb{R}^{n\times N}\) with elements \(\mathbf{k_{i,j}}=\exp\left(-\frac{||\mathbf{y_{i}^{\star}}-\mathbf{y_{j}}||_{2}^{2}}{2\sigma}\right)\), \(i=1,2,\ldots,n\), \(j=1,2,\ldots,N\).
3. **Output:** Reconstructed points in the ambient space using Eq.(24).
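A compact NumPy sketch of the lifting step of Eq. (24) is given below; it is our own illustration, assuming the kernel scale `sigma` and the array layouts stated in the comments.

```python
import numpy as np

def lift_with_geometric_harmonics(Y_star, Y, X, Psi, lambdas, sigma):
    """Lift predicted manifold points Y_star (n x d) to the ambient space via
    Eq. (24): E(f) = K Psi Lambda^{-1} Psi^T f, applied column-wise to the
    ambient training data X (N x M). Y (N x d) are the training embeddings,
    Psi (N x d) the parsimonious eigenvectors, lambdas (d,) their eigenvalues."""
    d2 = ((Y_star[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # n x N squared distances
    K = np.exp(-d2 / (2.0 * sigma))                            # kernel of step 2
    coeffs = (Psi / lambdas) @ (Psi.T @ X)                     # Psi Lambda^{-1} Psi^T f
    return K @ coeffs                                          # n x M ambient points
```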
#### 3.4.2 Koopman Operator and Extended Dynamic Mode Decomposition (EDMD)
When using the Koopman operator approach, the DM coordinates, considered as a truncated basis of a function space over the original measurements, can be used in much the same way as in Dynamic Mode Decomposition (DMD) (see Williams et al., 2015), but operating on the original measurements in the ambient fMRI space.
Thus, unlike standard DMD, in our case the Koopman modes are computed for the original coordinates, not the DM eigenfunctions, so that we obtain a linear map from the Koopman eigenfunctions to the original measurements. A compact version of the EDMD algorithm we use is as follows (a code sketch follows the list):
1. **Input:** Diffusion maps coordinates \(\boldsymbol{\Psi}^{\prime}_{i,d}\in\mathbb{R}^{d}\) at time steps \(i=1,\ldots,N\) (from Section 3.2).
2. Approximate the Koopman matrix \(\hat{\mathbf{U}}=\boldsymbol{\Psi}^{\prime}_{+}\,\boldsymbol{\Psi}^{\prime\,\dagger}_{-}\in\mathbb{R}^{d\times d}\), where \(\dagger\) denotes the pseudo-inverse, and \(\boldsymbol{\Psi}^{\prime}_{-}=\left[\boldsymbol{\Psi}^{\prime}_{1,d},\boldsymbol{\Psi}^{\prime}_{2,d},\ldots,\boldsymbol{\Psi}^{\prime}_{N-1,d}\right]\in\mathbb{R}^{d\times(N-1)}\) and \(\boldsymbol{\Psi}^{\prime}_{+}=\left[\boldsymbol{\Psi}^{\prime}_{2,d},\boldsymbol{\Psi}^{\prime}_{3,d},\ldots,\boldsymbol{\Psi}^{\prime}_{N,d}\right]\in\mathbb{R}^{d\times(N-1)}\) are time-shifted data matrices with \(\boldsymbol{\Psi}^{\prime}_{i,d}\in\mathbb{R}^{d}\) as columns.
3. Compute all eigenvectors \(\mathbf{\hat{\phi}_{j}}\in\mathbb{R}^{d}\) and eigenvalues \(\mathbf{\hat{\omega}_{j}}\) of \(\hat{\mathbf{U}}\) by solving \(\hat{\mathbf{U}}\mathbf{\hat{\phi}_{j}}=\mathbf{\hat{\omega}_{j}}\mathbf{\hat{\phi}_{j}},\mathbf{j}= \mathbf{1},\mathbf{2},\ldots,\mathbf{d}\).
4. Obtain the Koopman modes \(\mathbf{c_{j}}\in\mathbb{R}^{M}\) for the original measurements \(\mathbf{x_{i}}\in\mathbb{R}^{M}\), by solving the linear systems \(\mathbf{x_{i}}=\sum_{j}\mathbf{c_{j}}\mathbf{\hat{\phi}_{i,j}}\) with data from all available time steps \(\mathbf{i}=\mathbf{1},\ldots,\mathbf{N}\) simultaneously.
5. **Output:** Eigenvectors \(\mathbf{\hat{\phi}_{j}}\), eigenvalues \(\mathbf{\hat{\omega}_{j}}\), and Koopman modes \(\mathbf{c_{j}}\) for the original measurements.
6. **Prediction:** To obtain an approximation of \(\mathbf{x_{i+1}}\), we evaluate \(\hat{\mathbf{x_{i+1}}}=\hat{\mathbf{U}}\sum_{\mathbf{j}}\mathbf{c_{j}}\mathbf{\hat{\phi}_{i,j}} =\sum_{j}\mathbf{\hat{\omega}_{j}}\mathbf{c_{j}}\mathbf{\hat{\phi}_{i,j}}\).
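For illustration, a NumPy sketch of steps 2 to 6 follows; we assume eigenfunction values are evaluated through the left eigenvectors of \(\hat{\mathbf{U}}\) (i.e., the rows of \(\Phi^{-1}\)), which is one consistent choice, and all array layouts are stated in the comments.

```python
import numpy as np

def edmd_predict(Psi, X):
    """EDMD on the DM coordinates Psi (N x d), solving for Koopman modes of the
    original measurements X (N x M); returns one-step predictions per row."""
    Psi_minus, Psi_plus = Psi[:-1].T, Psi[1:].T           # d x (N-1) shifted matrices
    U = Psi_plus @ np.linalg.pinv(Psi_minus)              # step 2: Koopman matrix
    omega, Phi = np.linalg.eig(U)                         # step 3: eigenpairs of U
    G = Psi @ np.linalg.inv(Phi).T                        # eigenfunction values g_j(Psi_i)
    C = np.linalg.lstsq(G, X.astype(complex), rcond=None)[0]  # step 4: modes, X ~ G C
    return np.real((G * omega) @ C)                       # step 6: g_j advances by omega_j
```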
## 4 Results
For our computations, we have used the Python packages Datafold (Lehmberg et al., 2020) with scikit-learn (Pedregosa et al., 2011). For the implementation of the FNNs we utilized the R package "nnet" (Ripley et al., 2016).
### Data Preprocessing
Our analysis starts with the data-preprocessing using GLM. In Figure 2, the thresholded image for the activation during the contrast "attention" + "non-attention" + "stationary" vs "fixation" is presented as an overlay on the brain extracted structural T1 MNI image (2mm space). We considered activations of clusters of 10 or more voxels with two different levels of significance: A) \(\mathbf{p<0.05}\) with Family Wise Error correction (FWE) and B) \(\mathbf{p<0.001}\) (uncorrected). In Table 1, the results considering the more stringent threshold of \(\mathbf{p<0.05}\) (FWE corrected) are presented in detail. We report the local maximum of each cluster, its size in number of voxels, the brain regions that contribute to each cluster and finally the percentage of the contribution. Similarly, in Table 2 the results concerning the activations of clusters using a more liberal threshold of \(\mathbf{p<0.001}\) (uncorrected) are reported. The brain regions that showed activations were mainly parts of the Occipital Lobe such as the Calcarine, Cuneus, Occipital gyrus, the Fusiform gyrus and parts of the Cerebellum. When a more liberal threshold was applied (\(\mathbf{p<0.001}\)), the clusters would naturally get larger. Consequently, two more regions appeared to have clusters of activation, namely, the Postcentral gyrus and the inferior parietal cortex. Focusing on these "active" regions, we then proceeded to the prediction of their behaviour through macroscopic variables extracted by the Diffusion Maps.
Figure 2: Thresholded image of the contrast “attention” + “non-attention” + “stationary” vs “fixation”, presented as an overlay on the reference structural MRI (T1) image. A) Level of significance \(p<0.05\) (FWE corrected), B) \(p<0.001\) (uncorrected).
\begin{table}
\begin{tabular}{l l l l} \multicolumn{3}{c}{Level of significance p \(<\) 0.05 (FWE corrected)} \\ \hline Local maximum x,y,z mm & Number of voxels in the cluster & Region Label & \% in the cluster \\ \hline
3, -93, 0 & 191 & Calcarine L & 11.96 \\ & 191 & Cuneus L & 10.17 \\ & 191 & Occipital Sup L & 8.89 \\ & 191 & Calcarine R & 4.72 \\ & 191 & Occipital Mid L & 0.21 \\ & 191 & Lingual R & 0.15 \\ \hline -33, -84, -21 & 28 & Cerebellum Crus1 L & 39.29 \\ & 28 & Lingual L & 39.29 \\ & 28 & Fusiform L & 17.86 \\ & 28 & Occipital Inf. L & 3.57 \\ \hline
21, -66, -3 & 20 & Lingual R & 100 \\ \hline
27, -81, -18 & 11 & Cerebellum 6 R & 45.45 \\ & 11 & Cerebellum Crus1 R & 27.27 \\ & 11 & Fusiform R & 18.18 \\ & 11 & Lingual R & 9.09 \\ \end{tabular}
\end{table}
Table 1: Clusters of 10 or more activated voxels for the contrast “attention” + “non-attention” + “stationary” vs “fixation” using SPM. The level of significance was set to \(\boldsymbol{p<0.05}\) (FWE corrected). For each cluster, the coordinates of the local maximum (in standard MNI coordinates) is shown along with the total number of voxels in the cluster and the brain regions that these voxels are found. Finally, the percentage of a region’s voxels in each cluster is also presented.
### Manifold Learning with Diffusion Maps
Manifold learning with Diffusion Maps was applied on the first 280 (out of the whole 360) time points of the 111 linearly detrended time series. The last 80 points were left out to evaluate later the prediction error based on ROMs and the "lifting" process. In Figure 3, we present the eigenspectrum of the DMs decomposition. The red vertical line shows the number of extracted eigenvectors (more eigenvectors would not affect the final outcome). The five most parsimonious DMs components that we used for further analysis are marked with red arrows, namely \(\psi_{1}\), \(\psi_{5}\), \(\psi_{8}\), \(\psi_{13}\) and \(\psi_{15}\). As can be seen in Figure 3, after the first 30 eigenvalues, most of the variance has already been captured; in other words, subsequent eigenvectors have a very small contribution to the final embedding. The parameters we actually set in our computations are \(t=0\), \(\sigma=65\), \(\alpha=1\), \(\kappa=30\) and \(d=5\).
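For readers who prefer to see the construction explicitly, the following plain-NumPy sketch (our own, not the Datafold implementation) computes a Diffusion Maps embedding with the Gaussian kernel and the \(\alpha\)-normalization; the kernel convention matches the one in Section 3.4.1, and the parameter names mirror \(\sigma\) and \(\alpha\) above, while `n_eig` stands in for the number of computed eigenpairs.

```python
import numpy as np

def diffusion_maps(X, sigma=65.0, alpha=1.0, n_eig=30):
    """Diffusion Maps on data X (N x M): Gaussian kernel, alpha-normalization
    to discount sampling density, row-normalized Markov matrix, eigenpairs."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-d2 / (2.0 * sigma))
    q = K.sum(axis=1)
    K_alpha = K / np.outer(q, q) ** alpha                  # density normalization
    P = K_alpha / K_alpha.sum(axis=1, keepdims=True)       # Markov transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-np.abs(vals))[:n_eig]              # leading eigenpairs
    return np.real(vals[order]), np.real(vecs[:, order])
```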
### Construction of Reduced Order Models and Predictions on the Manifold
Out-of-sample predictions on the embedded manifold were based on the five parsimonious DMs, namely \(\psi_{1}\), \(\psi_{5}\), \(\psi_{8}\), \(\psi_{13}\) and \(\psi_{15}\), shown in Figure 4. Based on these, we constructed ROMs using FNNs and the Koopman operator, as described in subsection 3.3.
The ROMs were trained via one-time-step-ahead predictions using the first 280 time points, which account for roughly 77% of the total data points. Validation was done using 10-fold cross-validation, repeated 10 times. The performance of the ROMs was then assessed by simulating the trained ROMs iteratively: setting the initial conditions at time point 280, we iterated the ROMs to produce the next 80 points (i.e., up to the 360th time point).
\begin{table}
\begin{tabular}{l l l l} & \multicolumn{2}{c}{Level of significance p \textless{} 0.001 (uncorrected)} \\ \hline Local maximum (x,y,z mm) & Number of voxels in the cluster & Region Label & \% in the cluster \\ \hline
3, -93, 0 & 917 & Calcarine L & 20.17 \\ & 917 & Occipital Mid L & 13.52 \\ & 917 & Lingual R & 12.65 \\ & 917 & Occipital Sup L & 11.01 \\ & 917 & Calcarine R & 10.58 \\ & 917 & Fusiform R & 10.25 \\ & 917 & Cuneus L & 7.96 \\ & 917 & Lingual L & 6 \\ & 917 & Cerebellum Crus1 R & 2.94 \\ & 917 & Cerebellum 6 R & 1.74 \\ & 917 & Cuneus R & 1.64 \\ & 917 & Occipital Inf. L & 1.2 \\ & 917 & Occipital Sup R & 0.22 \\ & 917 & Occipital Inf. R & 0.11 \\ \hline -33, -84, -21 & 68 & Cerebellum Crus1 L & 44.12 \\ & 68 & Lingual L & 25 \\ & 68 & Fusiform L & 19.12 \\ & 68 & Occipital Inf. L & 11.76 \\ \hline -39, -90, -3 & 10 & Occipital Mid. L & 90 \\ & 10 & Occipital Inf. L & 10 \\ \hline
39, -39, 54 & 15 & Parietal Inf. R & 80 \\ & 15 & Postcentral R & 20 \\ \hline -33, -69, -18 & 35 & Fusiform L & 94.29 \\ & 35 & Cerebellum 6 L & 2.86 \\ & 35 & Lingual L & 2.86 \\ \hline
27, -90, 20 & 11 & Occipital Sup. R & 72.73 \\ & 11 & Occipital Mid.R & 27.27 \\ \end{tabular}
\end{table}
Table 2: Clusters of 10 or more activated voxels for the contrast “attention” + “non-attention” + “stationary” vs “fixation” using SPM. The level of significance was set to p < 0.001 (uncorrected). For each cluster, the coordinates of the local maximum (in standard MNI coordinates) is shown, along with the total number of voxels in the cluster and the brain regions that these voxels are found. Finally, the percentage of a region’s voxels in each cluster is also presented.
For the FNNs, we used the logistic function as the activation function for all neurons in the hidden layer and a learning rate decay parameter as regularization to prevent over-fitting and improve generalization of the final model. Parameter tuning was done via grid search during the repeated cross-validation process.
### Predictions in the fMRI space and comparison with the Naive Random Walk Model
At the final step, the out-of-sample predictions on the manifold made by the FNNs and the Koopman operator were "lifted" back to the original space using GH and the Koopman modes, respectively (see Section 3.4). Specifically, the predicted values were of size \(80\times d\) (with \(d=5\); reduced space), and the lifting of those predictions led to a new set of size \(80\times 111\) (original space). In Figure 5, we depict those predictions overlaid on the unseen (out-of-sample) test data (red color) for the two approaches, namely FNN-GH and the Koopman operator. Indicatively, we show the first four regions of interest (ROIs) corresponding to the biggest activated clusters (see Table 2), specifically, the left Calcarine sulcus (A), the left Middle Occipital gyrus (B), the right Lingual gyrus (C) and the left Superior Occipital gyrus (D).
In Table 3, we present the errors on the test set for each ROI, as derived for each of the two schemes (FNN-GH and the Koopman operator).
To assess the prediction efficiency of the proposed methodology, we also provide the errors obtained when applying the naive random walk (NRW) model in a direct way (the predictions at time \(i+1\) are the last observed values at time \(i\)). Hence, to predict the values of the DMs at time point \(i+1\) of the test set with the NRW model, we assumed that _we know the actual values of the DMs at time \(i\) of the test set_. The errors are reported in terms of the RMSE and the \(L_{2}\) norm; the smallest errors are marked in bold. In general, the two strategies behave similarly well, outperforming the NRW predictions for all ROIs except one, the Cerebellum Crus 1 on the right hemisphere. This is important, revealing the efficiency of the proposed approach, since, as discussed, the predictions of the ROMs were produced iteratively, i.e. _without any knowledge of the values in the test set_, while the predictions made with the NRW model _were based on the knowledge of the values of the previous points of the test set_.
Here, we should note that predictions made for some brain regions, like the Cuneus of the right hemisphere, might seem poor or spurious. This is due to the fact that, in this particular case, the volume of voxels that are "activated" (only for the uncorrected threshold of \(p<0.001\)) is found to contribute only 1.64% in
Figure 3: The eigenspectrum of DMs on the 111 time series that correspond to different brain regions. The red vertical line indicates how many eigenvectors we computed. The five parsimonious eigenvectors that were finally used as macroscopic variables are also marked with red arrows.
the activated cluster (see Table 2). Since in this study we try to predict the behaviour of a whole brain region, each time series is averaged over all voxels of the predefined region (here, using AAL). Thus, trying to predict the "average" behaviour of the voxels that constitute a region, when only a small proportion of them is actually "activated", cannot be reliable. Therefore, we would normally expect the average signal of such a region to be noisy and without a pattern that could be attributed to the task imposed on the subject (here, the attention to visual motion).
Figure 4: Predictions based on the 5 parsimonious eigenvectors (namely \(\psi_{1},\psi_{5},\psi_{8},\psi_{13}\) and \(\psi_{15}\)) when applying FNNs and the Koopman operator. Actual points for each of the DMs are presented in black up to the 280th point, which is the last point of the training set. The predicted values are presented in light green up to the end of the time series (as derived iteratively by the ROMs from point 281 to the end).
## 5 Discussion
Our work addresses a three-step machine-learning-based approach for the modelling of brain activity from task-dependent fMRI data, in order to deal with the curse of dimensionality. In the first step, we used parsimonious Diffusion Maps to create a basis that spans a low-dimensional subspace, a manifold, in which the information of the brain activity is retained. In a second step, we trained ROMs on this low-dimensional basis, thus relaxing the curse of dimensionality, using both FNNs and the Koopman operator. Finally, we predicted brain activity in the fMRI space by solving the pre-image problem using Geometric Harmonics and the Koopman modes. By doing so, we demonstrated that for the particular benchmark task-experiment, the brain response to the attention to visual motion task is contained in a 5-dimensional manifold. By lifting our predictions to the ambient fMRI space, we found that the key brain regions during attention to visual motion are actually four: the Calcarine Sulcus, the Cuneus, the Superior Occipital Gyrus and the Lingual Gyrus. These are the same regions reported in (Buchel et al., 1998), "re-discovered" by our methodology.
Our proof-of-concept study is the first to propose such a machine-learning framework to model and, importantly, to accurately predict brain activity from task-based fMRI data, thus facing challenges that are not met when producing controlled experiments with synthetic data generated just by model simulations.
In the numerical experiments detailed above, we show that the performance of the two schemes FNNs-GH and the Koopman operator is comparable, while both outperform the naive random walk whose predictions assume the knowledge of brain activity at the previous time step. Importantly, the comparison between the two schemes reveals that it is not always necessary to construct a non-linear embedding (here, with Diffusion Maps) and then also construct surrogate non-linear models using, e.g., FNNs to predict future states. Effectively, one can utilize the low-frequency truncation of the function space of square-integrable functions (\(\mathbf{L^{2}}\)) over the original data, as obtained with Diffusion Maps, within the Koopman operator framework, to predict the entire list of coordinate functions in a linear fashion. Conversely, the Neural Network approach treats the embedding coordinates as a base space for the dynamics and predicts them point-wise.
Figure 5: Predictions in the amplitude of the BOLD signal in the original fMRI space for four of the most “activated” regions as found by the classical GLM methodology A) left Calcarine Sulcus, B) left Middle Occipital Gyrus, C) right Lingual Gyrus, D) left Superior Occipital Gyrus. The red color marks the actual values of the test set, while the other colors correspond to the predictions based on the proposed methodology.
We believe that the proposed methodology may trigger further developments in the field, thus providing the base for a general framework for modelling the dynamics of brain activity, from high-dimensional time series, including also the solution of the source localization problem combining for example simultaneous EEG-fMRI recordings.
## 6 Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## 7 Acknowledgements
F.D. was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project no. 468830823.
|
2310.13075 | On the Computational Complexities of Complex-valued Neural Networks | Complex-valued neural networks (CVNNs) are nonlinear filters used in the
digital signal processing of complex-domain data. Compared with real-valued
neural networks (RVNNs), CVNNs can directly handle complex-valued input and
output signals due to their complex domain parameters and activation functions.
With the trend toward low-power systems, computational complexity analysis has
become essential for measuring an algorithm's power consumption. Therefore,
this paper presents both the quantitative and asymptotic computational
complexities of CVNNs. This is a crucial tool in deciding which algorithm to
implement. The mathematical operations are described in terms of the number of
real-valued multiplications, as these are the most demanding operations. To
determine which CVNN can be implemented in a low-power system, quantitative
computational complexities can be used to accurately estimate the number of
floating-point operations. We have also investigated the computational
complexities of CVNNs discussed in some studies presented in the literature. | Kayol Soares Mayer, Jonathan Aguiar Soares, Ariadne Arrais Cruz, Dalton Soares Arantes | 2023-10-19T18:14:04Z | http://arxiv.org/abs/2310.13075v1 | # On the Computational Complexities of Complex-valued Neural Networks
###### Abstract
Complex-valued neural networks (CVNNs) are nonlinear filters used in the digital signal processing of complex-domain data. Compared with real-valued neural networks (RVNNs), CVNNs can directly handle complex-valued input and output signals due to their complex domain parameters and activation functions. With the trend toward low-power systems, computational complexity analysis has become essential for measuring an algorithm's power consumption. Therefore, this paper presents both the quantitative and asymptotic computational complexities of CVNNs. This is a crucial tool in deciding which algorithm to implement. The mathematical operations are described in terms of the number of real-valued multiplications, as these are the most demanding operations. To determine which CVNN can be implemented in a low-power system, quantitative computational complexities can be used to accurately estimate the number of floating-point operations. We have also investigated the computational complexities of CVNNs discussed in some studies presented in the literature.
Complex-valued Neural Networks, Low-power Systems, Quantitative Computational Complexity, Asymptotic Computational Complexity
## I Introduction
Since the first steps of artificial neural models, a significant number of artificial neural network (ANN) architectures and learning methods have been proposed [1, 2, 3]. Interestingly, among these artificial neural networks, scarce attention is paid to the class of complex-valued neural networks (CVNNs) [4]. Unlike real-valued neural networks (RVNNs), CVNNs are capable of directly handling complex inputs and outputs [5]. As a result, CVNNs should be the natural choice for processing complex-valued signals, and they should also be explored for real-valued applications. Take, for instance, the XOR problem, derived from the two-dimensional "AND/OR" theorem. A single real-valued perceptron is unable to learn the XOR function. To solve the XOR problem, a three-layer RVNN is necessary at the very least. However, Minsky and Papert's limitation can be circumvented using only a single complex-valued neuron [5]. Yet, the use of a single complex-valued neuron is not the only motivation; with CVNN architectures, it is possible to enhance the functionality of neural networks, improve their performance, and reduce training time compared to RVNNs [6, 7]. Furthermore, it was recently proven by Voigtlaender [8] that CVNNs also adhere to the universal approximation theorem.
For real-time systems, CVNNs have recently been implemented in photonic integrated circuits as an optical neural chip that obtained faster convergence and higher accuracy compared with RVNNs [9]. Not only in optical neural chips, CVNNs can also be efficiently implemented in graphics processing units (GPUs) and tensor processing units (TPUs) with matrix structures, and in field programmable gate arrays (FPGAs) with systolic arrays [10]. Additionally, with the development of adaptive computing platforms, such as PYNQ from Xilinx [11], CVNNs can be easily implemented in hardware using open-source Python libraries (e.g., RosenPy, developed by Cruz et al. [12]).
In many current applications, the most demanding algorithms are usually centralized in base stations with significant computational power. However, new technologies claim for desegregation, such as the Internet of Things (IoT), smart homes, and Industry 4.0, where a significant number of intelligent sensors are necessary [13]. Then, computational complexity analysis is crucial to choose the best approach for low-power systems.
For digital communication systems, CVNNs have also presented promising results for telecommunications, such as channel estimation and equalization, beamforming, detection, and decoding [14, 15, 16, 17, 18, 19, 20, 21]. Liu et al. [14] proposed a CVNN based on extreme learning machines for channel estimation and equalization for OFDM systems. Enriconi et al. [15] demonstrated the beamforming tracking performance of a shallow phase transmittance radial basis function (PT-RBF) neural network under a dynamic military channel. Mayer et al. [16] employed a modified PT-RBF for transmitting beamforming, including the array currents into the CVNN architecture. Soares et al. [17] implemented a joint channel estimation and decoding for massive-MIMO communications using a shallow PT-RBF. Xu et al. [18] applied deep convolutional CVNNs for raw IQ signal recognition, achieving improved accuracy with lower computation complexity compared with RVNNs. Mayer et al. [19] compared some CVNN architectures for receiver beamforming operating with multiple users and interferences. Chu et al. [20] proposed
a channel estimation technique using a CVNN for optical systems operating with filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM). Soares et al. [21] proposed two inference learning approaches for channel estimation and decoding with CVNNs under highly dynamic channels.
In the literature, some CVNN computational complexities are addressed depending on the system implementation. In [22], the computational complexity of a shallow PT-RBF is presented in terms of a concurrent equalizer and a fuzzy controller. In [17], the computational complexity of a shallow PT-RBF is described as a function of the MIMO communication architecture. To the best of our knowledge, there is no work comparing the computational complexities of CVNNs, such as complex-valued feedforward NN (CVFNN) [23], split-complex feedforward NN (SCFNN) [24], multilayer feedforward NN based on multi-valued neurons (MLMVN) [25], complex-valued radial basis function (C-RBF) [15], fully complex-valued radial basis function (FC-RBF) [26], and PT-RBF [19].
This paper is an extension of Kayol S. Mayer's Ph.D. Thesis [4], developed at the School of Electrical and Computer Engineering, Universidade Estadual de Campinas, in the area of Telecommunications and Telematics. In this context, this paper presents the quantitative and asymptotic computational complexities of the mentioned CVNNs in a comprehensive way, regardless of any specific application.
The remainder of this paper is organized as follows. Section II presents a brief discussion on CVNNs. Section III describes the quantitative and asymptotic computational complexities of CVNNs. In Section IV, we discuss the computational complexities of CVNNs proposed in the literature. Lastly, Section V concludes the paper.
## II Complex-valued Neural Networks
One of the most studied CVNNs in the literature is the complex-valued feedforward neural network, a multilayer perceptron without feedback among layers in the forward step, adapted to directly process data in the complex domain [23]. CVFNNs can operate with fully-complex transcendental activation functions that satisfy the Cauchy-Riemann equations with relaxed conditions, such as circular, inverse circular, hyperbolic, and inverse hyperbolic functions. Also, an important and particular case of CVFNNs is the SCFNN, in which real and imaginary components are processed separately by holomorphic functions (i.e., analytic functions) in \(\mathbb{R}\)[24]. With similar architecture, but utilizing phase mappings onto unit circles as activation functions, the MLMVN is another relevant CVNN. In the MLMVN, the backpropagation algorithm is performed only using the multi-valued neurons error since no derivative is necessary because it is impossible to move in incorrect directions [25].
Based on a different CVNN architecture, the C-RBF neural network can also operate with complex numbers [15]. Due to the C-RBF phase vanishing into the Euclidean norm of Gaussian neurons, Savitha et al. [27] proposed the FC-RBF neural network, where \(\mathrm{sech}(\cdot)\) activation functions map \(\mathbb{C}^{N}\mapsto\mathbb{C}\) with Gaussian-like characteristics. Considering split-complex Gaussian neurons to circumvent any phase issue [22], Loss et al. [28] proposed the shallow and multiple-input single-output (MISO) PT-RBF. Recently, the PT-RBF has been extended to multiple outputs [17] and multiple layers [19].
In communication systems, the choice of architecture for complex-valued neural networks (CVNNs) can significantly affect performance. Notably, as detailed in Mayer [4], our research has shown that RBF-based architectures consistently outperform other CVNN architectures, especially in communication-related tasks such as channel equalization, beamforming, channel estimation, and decoding. This superior performance can be attributed to the inherent characteristics of RBF-based neural networks that make them well-suited for handling additive white Gaussian noise (AWGN), a common feature in communication systems. This phenomenon can be understood by the similarity between the activation functions of RBF-based CVNNs and the distribution function of AWGN noise -- a distinction not found in other CVNNs like CVFNN, SCFNN, and MLMVN. This effect becomes more pronounced in challenging scenarios with lower signal-to-noise ratios (SNRs). However, this does not apply to FC-RBF, which becomes unstable in noisy situations. Further insights regarding performance and parameter estimation are available in [4, 19, 21].
## III Computational Complexities
### _Quantitative computational complexities_
In order to estimate the computational complexity of an algorithm, one of the more straightforward and effective strategies is the analysis of its mathematical operations. Based on the CVFNN, SCFNN, MLMVN, C-RBF, FC-RBF, and PT-RBF architectures proposed in the literature, the mathematical operations are summarized into additions, multiplications, and activation functions. Although activation functions encompass a set of nonlinear functions that seem burdensome at first glance, they are not considered in our analysis since they can be efficiently implemented with lookup tables [29]. For all CVNN architectures, the numbers of additions and multiplications are similar; thus, since the latter are much more demanding, additions are also not taken into account. For a complementary analysis of the number of additions and activation functions of CVNNs, see [4].
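To make the bookkeeping concrete, the sketch below (ours; the exact per-architecture expressions are those of Tables I and II, not reproduced here) counts the real-valued multiplications of a fully connected complex layer, using the fact that one complex multiplication costs four real multiplications (three with a Karatsuba-style reformulation):

```python
def complex_dense_layer_real_mults(n_in, n_out, mults_per_complex=4):
    """Real multiplications of a fully connected complex layer: n_in * n_out
    complex weight-input products, each costing 4 real multiplications
    (3 with a Karatsuba-style reformulation)."""
    return n_in * n_out * mults_per_complex

def feedforward_cvnn_real_mults(P, R, hidden=(100,)):
    """Total over a CVFNN-like stack: P inputs -> hidden layers -> R outputs.
    Illustrative bookkeeping only, not the paper's exact expressions."""
    sizes = (P, *hidden, R)
    return sum(complex_dense_layer_real_mults(a, b)
               for a, b in zip(sizes, sizes[1:]))
```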
The computational complexities of CVFNN, SCFNN, MLMVN, C-RBF, FC-RBF, and PT-RBF are assessed based on the number of inputs \(P\), outputs \(R\), layers \(L\), and complex-valued artificial neurons per hidden layer \(N\). Therefore, setting the number of inputs, outputs, and layers with neurons, we obtain the CVNNs computational complexities, depicted
in Tables I and II for shallow and deep CVNNs, respectively. In Table II, each CVNN layer is composed of \(I^{\{l\}}\) complex-valued neurons for \(l\in[1,\,2,\,\cdots,\,L-1]\), except for the input layer where \(I^{\{l=0\}}=P\) and the output layer where \(I^{\{L\}}=R\). Furthermore, for the deep PT-RBF, the number of bottleneck outputs is \(O^{\{l\}}\). It is important to notice that the C-RBF and FC-RBF are not taken into account in Table II because they are only proposed for shallow architectures.
### _Asymptotic computational complexities_
We assume that the neural networks have \(P\) inputs, \(R\) outputs, \(L\) hidden layers for deep CVNNs, and \(N\) neurons per layer. For the deep PT-RBF, the number of bottleneck outputs is equal to the number of neurons per layer, i.e., \(I^{\{l\}}=O^{\{l\}}=N\) for \(l\in[1,\,2,\,\cdots,\,L-1]\), except for the output layer where \(O^{\{L\}}=R\). The asymptotic computational complexities of shallow and deep CVNNs, based on Tables I and II, are depicted in Table III. In terms of asymptotic computational complexities, both training and inference have identical results, which is why the operation mode is not addressed in Table III. As the C-RBF and FC-RBF were only proposed for shallow architectures, their complexities are not specified for deep CVNNs.
From Table III, for shallow CVNNs with a number of neurons much lower than the number of inputs and outputs, i.e., first column, the computational complexities are asymptotically linear. However, for shallow CVNNs with a number of neurons proportional to the number of inputs and outputs, i.e., second column, and deep CVNNs with a number of neurons per layer much higher than the number of layers, i.e., third column, the computational complexities are asymptotically quadratic. Nevertheless, the computational complexities are asymptotically cubic for deep CVNNs with a number of neurons per layer proportional to the number of layers, i.e., the fourth column.
Relying on the asymptotic analysis of Table III, since shallow neural networks are usually designed with more neurons than inputs and outputs, shallow CVNNs have linear computational complexity as the number of neurons increases. Notwithstanding, deep CVNNs have quadratic computational complexity with an increasing number of neurons, because conventional deep neural networks operate with more neurons per layer than hidden layers.
## IV Use cases
To provide readers with a clear understanding, we present the computational complexities of CVNNs for some recent applications in communication systems proposed in the literature. Table IV depicts the computational complexities of CVNNs for MIMO channel estimation and decoding [17], FBMC/OQAM channel estimation in intensity modulation direct detection (IM/DD) [20], beamforming receivers with multiple users [19], and OFDM channel estimation and signal detection [30]. The computational complexities of training and inference have been computed using the equations presented in Tables I and II. The CVNN architectures were determined based on the descriptions presented in each referenced work. For instance, in [19], the CVNNs were designed with six inputs, three outputs, and \(100\) neurons. For comparison purposes, if the referenced work only discussed one CVNN architecture or exclusively employed RVNNs, we considered the number of inputs, outputs, layers, and neurons as parameters to calculate the equivalent computational complexity for the CVNNs. On the other hand, if the referenced work only considered deep architectures, we took into account the equivalent number of neurons (i.e., the sum of all neurons) to compute the computational complexity of the shallow CVNNs, specifically C-RBF and FC-RBF.
In Table IV, we observe that C-RBF achieved lower computational complexities in most of the applications, with the exception of [20]. When considering deep CVNN architectures, the PT-RBF presented higher computational complexity, as seen in [19, 30]. On the other hand, perceptron-based CVNNs exhibit intermediate computational complexity. Based on these results, we could recommend C-RBF as the primary choice for low-power communication systems due to its lower complexity and satisfactory performance in noisy scenarios. However, in more demanding applications such as those required in base stations, the PT-RBF could also be employed, but at the cost of increased computing resources.
## V Conclusion
This paper offers a comprehensive analysis of the computational complexities associated with various complex-valued neural network (CVNN) architectures, including CVFNN, SCFNN, MLMVN, C-RBF, FC-RBF, and PT-RBF. Beyond simply cataloging these complexities, our work provides valuable technical insights that can guide both researchers and practitioners in the field. One of the key contributions of our analysis is the elucidation of how the asymptotic computational complexities of CVNNs evolve in relation to their architectural parameters, such as the number of inputs, outputs, neurons, and layers. By understanding these trends, practitioners can make informed decisions when selecting a CVNN architecture that aligns with their computational resource constraints. Moreover, our research goes beyond mere theoretical analysis. We demonstrate the practical utility of our findings by showcasing how quantitative computational complexities can be harnessed to accurately estimate the number of floating-point operations required for implementing CVNNs in communication systems. This insight empowers engineers and system designers to make informed choices when optimizing CVNNs for real-world applications, ultimately enhancing their efficiency and effectiveness.
## Acknowledgments
This work was supported in part by the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior -- Brasil (CAPES) -- Finance Code 001.
|
2306.13793 | QNNRepair: Quantized Neural Network Repair | We present QNNRepair, the first method in the literature for repairing
quantized neural networks (QNNs). QNNRepair aims to improve the accuracy of a
neural network model after quantization. It accepts the full-precision and
weight-quantized neural networks and a repair dataset of passing and failing
tests. At first, QNNRepair applies a software fault localization method to
identify the neurons that cause performance degradation during neural network
quantization. Then, it formulates the repair problem into a linear programming
problem of solving neuron weights parameters, which corrects the QNN's
performance on failing tests while not compromising its performance on passing
tests. We evaluate QNNRepair with widely used neural network architectures such
as MobileNetV2, ResNet, and VGGNet on popular datasets, including
high-resolution images. We also compare QNNRepair with the state-of-the-art
data-free quantization method SQuant. According to the experiment results, we
conclude that QNNRepair is effective in improving the quantized model's
performance in most cases. Its repaired models have 24% higher accuracy than
SQuant's in the independent validation set, especially for the ImageNet
dataset. | Xidan Song, Youcheng Sun, Mustafa A. Mustafa, Lucas C. Cordeiro | 2023-06-23T21:40:24Z | http://arxiv.org/abs/2306.13793v3 | # QNNRepair: Quantized Neural Network Repair
###### Abstract
We present QNNRepair, the first method in the literature for repairing quantized neural networks (QNNs). QNNRepair aims to improve the accuracy of a neural network model after quantization. It accepts the full-precision and weight-quantized neural networks, together with a repair dataset of passing and failing tests. At first, QNNRepair applies a software fault localization method to identify the neurons that cause performance degradation during neural network quantization. Then, it formulates the repair problem into a linear programming problem of solving neuron weights parameters, which corrects the QNN's performance on failing tests while not compromising its performance on passing tests. We evaluate QNNRepair with widely used neural network architectures such as MobileNetV2, ResNet, and VGGNet on popular datasets, including high-resolution images. We also compare QNNRepair with the state-of-the-art data-free quantization method SQuant [20]. According to the experiment results, we conclude that QNNRepair is effective in improving the quantized model's performance in most cases. Its repaired models have 24% higher accuracy than SQuant's in the independent validation set especially for the ImageNet dataset.
Keywords:neural network repair, quantization, fault localization, constraints solving
## 1 Introduction
Nowadays, neural networks are often used in safety-critical applications, such as autonomous driving, medical diagnosis, and aerospace systems [47]. In such applications, often quantized (instead of full precision) neural network models are deployed due to the limited computational and memory resources of embedded devices [21]. Since the consequences of a malfunction/error in such applications can be catastrophic, it is crucial to ensure that the network behaves correctly and reliably [36].
Quantized neural networks [21] use low-precision data types, such as 8-bit integers, to represent the weights and activations of the network. While this reduces the memory and computation requirements of the network, it can also
lead to a loss of accuracy and the introduction of errors in the network's output. Therefore, it is important to verify that the quantization process has not introduced any significant errors that could affect the safety or reliability of the network.
To bound the inaccuracy within a specific range, various neural network model verification methods [13][28][27][42][23] have been proposed. Neural network verification [24, 34, 37] aims to provide formal guarantees about the behavior of a neural network, ensuring that it meets specific safety and performance requirements under all possible input conditions. These methods set constraints and properties on the network input and output to check whether the model satisfies the safety properties. However, neural network verification can be computationally expensive, especially for large, deep networks with millions of parameters. This can make it challenging to scale the verification process to more complex models. While the majority of the neural network verification work is on full-precision models, several verification techniques focus on quantized models as well [23, 46, 17, 4].
Other researchers improve the performance and robustness of trained neural network models via repair [45][18][40][39][6]. These methods can be divided into three categories: retraining/refining, direct weight modification, and attaching repairing units. There are also quantization-aware training (QAT) techniques [30][19][7], which train neural networks with lower-precision weights and activations, typically in INT8 format. QAT emulates the effects of quantization during the training process. It requires additional steps, such as quantization-aware back-propagation and quantization-aware weight initialization, making the training process more complex and time-consuming. Quantization-aware training methods also require datasets for retraining, which consume a lot of time and storage. On the other hand, for data-free quantization methods like SQuant [20], which do not require datasets, the accuracy after quantization is relatively low.
In QNNRepair, we use well-established software fault localization methods to identify the suspicious neurons in a quantized model that correspond to the performance degradation after quantization. We then correct these most suspicious neurons' behavior by linear programming, in which the constraints are encoded by observing the difference between the quantized model and the original model given the same inputs. The main contributions of this paper are three-fold:
* QNNRepair, a new method for repairing QNNs. It converts quantized neural network repair into an LP (linear programming) problem. QNNRepair features direct weight modification and does not require the training dataset.
* We compare QNNRepair with the state-of-the-art data-free quantization method SQuant [20], and demonstrate that QNNRepair can achieve higher accuracy than SQuant after repair. We also evaluate QNNRepair on multiple widely used neural network architectures to demonstrate its effectiveness.
* We have made QNNRepair and its benchmark publicly available at: [https://github.com/HymnOfLight/QNNRepair](https://github.com/HymnOfLight/QNNRepair)
## 2 Preliminaries
### 2.1 Statistical Fault Localization
Statistical fault localization techniques (SFL) [31] have been widely used in software testing to aid in locating the causes of failures of programs. During the execution of each test case, data is collected indicating the executed statements. Additionally, each test case is classified as passed or failed.
This technique uses information about the program's execution traces and the associated outcomes (pass/fail) to identify suspicious program statements. For each statement, it computes a suspiciousness score from four counters that capture the correlation between the statement's execution and the observed failures. We use the notation \(C^{\mathrm{af}},C^{\mathrm{nf}},C^{\mathrm{as}},C^{\mathrm{ns}}\). The first part of the superscript indicates whether the statement was executed/"activated" (a) or not (n), and the second indicates whether the test is a passing/successful (s) or failing (f) one. For example, \(C^{\mathrm{as}}\) is the number of successful tests that execute statement \(C\). Statements with higher suspiciousness scores are more likely to contain faults. Many such metrics have been proposed in the literature. In our ranking procedure we use Tarantula [26], Ochiai [2], DStar [43], Jaccard [3], Ample [8], Euclid [15] and Wong3 [44], which are widely used and accepted in statistical fault localization. We discuss the definition of these metrics and their application in our method in Section 3.1.
### 2.2 Neural Network and Quantization
A neural network consists of an input layer, an output layer, and one or more intermediate layers called hidden layers. Each layer is a collection of nodes, called neurons. Each neuron is connected to other neurons by one or more directed edges [14].
Let \(f:\mathcal{I}\rightarrow\mathcal{O}\) denote a neural network with \(N\) layers. In this paper, we focus on neural networks for image classification. For a given input \(\mathrm{x}\in\mathcal{I}\), \(f(x)\in\mathcal{O}\) is the output of the DNN, which is the classification label of the input image. Specifically, we have
\[f(x)=f_{N}\left(\ldots f_{2}\left(f_{1}\left(x;W_{1},b_{1}\right);W_{2},b_{2} \right)\ldots;W_{N},b_{N}\right) \tag{1}\]
In this equation, \(W_{i}\) and \(b_{i}\) for \(i=1,2,\ldots,N\) represent the weights and bias of the model, which are trainable parameters. \(f_{i}\left(z_{i-1};W_{i-1},b_{i-1}\right)\) is the layer function that maps the output of layer \((i-1)\), i.e., \(z_{i-1}\), to the input layer \(i\).
QuantizationAs one of the general neural network model optimization methods, model quantization can reduce the size and model inference time of DNN models and their application to most models and different hardware devices. Quantization is the process of approximating continuous values of a signal to a finite number of discrete values. It can be understood as a method of information compression. When considering this concept in the context of computer systems, it is generally referred to as "low bits." In the following formula, \(r\) is the true
floating point value, \(q\) is the quantized fixed point value, \(Z\) is the quantized fixed point value corresponding to the 0 floating point value, and \(S\) is the smallest scale that can be represented after quantization of the fixed point. The formula for quantization from floating point to fixed point is as follows:
\[\begin{array}{c}r=S(q-Z)\\ q=\mathrm{round}\left(\frac{r}{S}+Z\right)\end{array} \tag{2}\]
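As a minimal illustration of Eq. (2), the following NumPy sketch quantizes and dequantizes a tensor to INT8; the range-based derivation of \(S\) and \(Z\) at the end is a common convention, shown here as an assumption rather than the paper's procedure.

```python
import numpy as np

def quantize(r, S, Z, qmin=-128, qmax=127):
    """Affine quantization of Eq. (2): q = round(r / S + Z), clipped to INT8."""
    q = np.round(r / S + Z)
    return np.clip(q, qmin, qmax).astype(np.int8)

def dequantize(q, S, Z):
    """Inverse map of Eq. (2): r = S * (q - Z)."""
    return S * (q.astype(np.float32) - Z)

# Example: derive S and Z from an observed floating-point range [r_min, r_max].
r_min, r_max = -1.0, 3.0
S = (r_max - r_min) / 255.0
Z = int(round(-128 - r_min / S))
```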
Currently, Google's TensorFlow Lite [9] and NVIDIA's TensorRT [41] support the INT8 engine framework.
## 3 QNNRepair Methodology
The overall workflow of QNNRepair is illustrated in Figure 1. It takes two neural networks, a floating-point model and its quantized version for repair, as inputs. There is also a repair dataset of successful (passing) and failing tests, signifying whether the two models would produce the same classification outcome when given the same test input.
Figure 1: The QNNRepair Architecture.
The passing/failing tests are used by QNNRepair to evaluate each neuron's importance and localize these neurons to repair for improving the quantized model's performance (Section 3.1). In QNNRepair, the neural network repair problem is encoded into a linear programming problem for solving the corrected neuron weights (Section 3.2). It then replaces the weights with corrected weights, after which QNNRepair evaluates the performance similarity between the floating model and the quantized model on an independent validation set (Section 3.3). If the quantized model's performance is good enough w.r.t. the floating point one after repair, the model is ready for deployment. Otherwise, QNNRepair continues by selecting other parameters to repair. More detailed information is presented in Algorithm 1 (Section 3.4).
### Ranking the Importance of the Neurons
QNNRepair starts with evaluating the importance of the neurons in the neural network for causing the output difference between the quantized model and the floating-point one. When conducting an inference procedure on an image, each intermediate layer in the model produces a series of outputs that serve as inputs to the next layer. These outputs pass through activation functions, assumed here to be ReLU. If a neuron's output is positive, we record a one; otherwise, we record a zero, and call this the activation output. Let \(f_{i}\) and \(q_{i}\) represent the activation output of a single neuron in the full-precision and quantized models, respectively. If there is a testing image for which \((f_{i},q_{i})\) are not equal, we consider the neuron "activated" and set \(v_{mn}=1\); otherwise \(v_{mn}=0\). Then we define the activation matrix assembling the activation status of all neurons for the floating-point model:
\(F=\begin{pmatrix}f_{11}&\cdots&f_{1n}\\ \vdots&\ddots&\vdots\\ f_{m1}&\cdots&f_{mn}\end{pmatrix}\), and analogously the matrix \(Q\) with entries \(q_{mn}\) for the quantized model.
We define the activation differential matrix to evaluate the activation difference between the floating-point and the quantized model. Given an input image \(i\), we calculate \(\operatorname{diff}_{i}=F_{i}-Q_{i}\) between the two models' activation matrices, and assemble these \(\operatorname{diff}_{i}\) over all images into one large matrix. Each element of this matrix is 0 or 1, representing whether the activation status of the floating-point and quantized neural networks is the same.
We borrow concepts from traditional software engineering, replacing the statements in traditional software with the neurons in neural network models. We define passing tests as the images in the repair set for which the floating-point and quantized models produce the same classification output, and failing tests as those for which the classification results differ. For a set of repair images, we define \(<C_{n}^{\text{af}},C_{n}^{\text{nf}},C_{n}^{\text{as}},C_{n}^{\text{ns}}>\) as follows:
* \(C_{n}^{\text{af}}\) is the number of failing tests for which neuron \(n\) is "activated".
* \(C_{n}^{\text{nf}}\) is the number of failing tests for which neuron \(n\) is "not activated".
* \(C_{n}^{\text{as}}\) is the number of passing tests for which neuron \(n\) is "activated".
* \(C_{n}^{\text{ns}}\) is the number of passing tests for which neuron \(n\) is "not activated".
We adapt suspiciousness measures from traditional software fault localization: Tarantula [26], Ochiai [2], DStar [43], Jaccard [3], Ample [8], Euclid [15] and Wong3 [44], and define the corresponding indicators of neuron suspiciousness in Table 1. Note that in DStar, \(*\) denotes the exponent applied to \(C_{n}^{af}\) (typically 2).
We then rank the neurons by these metric values from largest to smallest; a higher value indicates a more suspicious neuron and hence a higher-priority target for repair.
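The following sketch (our own illustration; function and variable names are assumptions, not the QNNRepair implementation) derives the four counters from the activation matrices of Section 3.1 and computes three of the suspiciousness metrics:

```python
import numpy as np

def suspiciousness(F, Q, failing):
    """F, Q: (num_tests, num_neurons) 0/1 activation matrices; failing: boolean mask per test."""
    activated = (F != Q).astype(int)          # 1 where the activation status differs
    c_af = activated[failing].sum(axis=0)     # activated on failing tests
    c_as = activated[~failing].sum(axis=0)    # activated on passing tests
    c_nf = failing.sum() - c_af               # not activated on failing tests
    c_ns = (~failing).sum() - c_as            # not activated on passing tests
    eps = 1e-9                                # avoid division by zero
    tarantula = (c_af / (c_af + c_nf + eps)) / (
        c_af / (c_af + c_nf + eps) + c_as / (c_as + c_ns + eps) + eps)
    ochiai = c_af / (np.sqrt((c_af + c_nf) * (c_af + c_as)) + eps)
    dstar = c_af ** 2 / (c_as + c_nf + eps)   # * = 2
    return tarantula, ochiai, dstar
```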
### Constraints-solving based Repairing
After the neuron importance evaluation, we obtain, for each layer, a vector of neuron importance values. We rank this vector; the neuron with the highest importance is our first repair target, as it is expected to have the greatest impact on correcting the erroneous outcomes.
The optimization problem for a single neuron can be described as follows:
\[\begin{split}\text{Minimize}:&\ M\\ \text{Subject to}:&\ M\geq 0;(\delta_{1},\delta_{2}, \ldots,\delta_{m})\in[-M,M]\\ &\forall x_{i}\ \text{in TestSet}\ X\text{:}\ \sum_{i=1}^{m}w_{i}x_{i}<0\ \ \text{and}\ \ \sum_{i=1}^{m}(w_{i}+\delta_{i})x_{i}>0\end{split} \tag{3}\]
In the formula, \(m\) represents the number of neurons in the previous layer connected to the selected neuron, numbered from \(1\) to \(m\). We add an increment \(\delta_{i}\) to each weight \(w_{i}\) to denote the modification it needs. We assume that, for the failing test, this neuron's activation status is \(1\) in the full-precision network and \(0\) in the quantized one; the corrected pre-activation in the quantized model therefore needs to be greater than \(0\) so that the output of the activation function becomes \(1\). By minimizing the bound \(M\) on the increments, we keep the repaired quantized neural network as close as possible to the original quantized neural network.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Tarantula: & \(\frac{C_{n}^{\text{af}}/(C_{n}^{\text{af}}+C_{n}^{\text{nf}})}{C_{n}^{\text{af}}/(C_{n}^{\text{af}}+C_{n}^{\text{nf}})+C_{n}^{\text{as}}/(C_{n}^{\text{as}}+C_{n}^{\text{ns}})}\) & Euclid: & \(\sqrt{C_{n}^{\text{af}}+C_{n}^{\text{ns}}}\) \\ Ochiai: & \(\frac{C_{n}^{\text{af}}}{\sqrt{(C_{n}^{\text{af}}+C_{n}^{\text{as}})(C_{n}^{\text{af}}+C_{n}^{\text{nf}})}}\) & DStar: & \(\frac{(C_{n}^{\text{af}})^{*}}{C_{n}^{\text{as}}+C_{n}^{\text{nf}}}\) \\ Ample: & \(\left|\frac{C_{n}^{\text{af}}}{C_{n}^{\text{af}}+C_{n}^{\text{nf}}}-\frac{C_{n}^{\text{as}}}{C_{n}^{\text{as}}+C_{n}^{\text{ns}}}\right|\) & Jaccard: & \(\frac{C_{n}^{\text{af}}}{C_{n}^{\text{af}}+C_{n}^{\text{nf}}+C_{n}^{\text{as}}}\) \\ Wong3: & \(C_{n}^{\text{af}}-h,\ \ h=\left\{\begin{array}{ll}C_{n}^{\text{as}}&\text{if }C_{n}^{\text{as}}\leq 2\\ 2+0.1(C_{n}^{\text{as}}-2)&\text{if }2<C_{n}^{\text{as}}\leq 10\\ 2.8+0.001(C_{n}^{\text{as}}-10)&\text{if }C_{n}^{\text{as}}>10\end{array}\right.\) & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Neuron suspiciousness metrics adapted from software fault localization; \(*\) in DStar denotes the exponent applied to \(C_{n}^{\text{af}}\).
The inputs to our algorithm are the quantized neural network \(Q\) that needs to be repaired, a dataset \(X\) for testing, and the full-precision reference model \(F\). We use Gurobi [32] as the constraint solver and replace the original weights with the solved values as the new weights. After the repair is complete, we re-verify the fidelity accuracy to confirm that our requirements are met. If so, we consider the repaired network ready to use. If not, we select the next most important neuron and perform the repair process again, until the accuracy satisfies our requirements.
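A minimal sketch of how the optimization problem (3) could be encoded with the gurobipy API is shown below (the variable names and the single-neuron setup are our illustrative assumptions):

```python
import gurobipy as gp
from gurobipy import GRB

def solve_repair(weights, inputs):
    """weights: list of m incoming weights; inputs: per failing test, the m input values."""
    m = len(weights)
    model = gp.Model("qnnrepair_neuron")
    M = model.addVar(lb=0.0, name="M")
    delta = model.addVars(m, lb=-GRB.INFINITY, name="delta")
    model.setObjective(M, GRB.MINIMIZE)
    for i in range(m):                      # enforce |delta_i| <= M
        model.addConstr(delta[i] <= M)
        model.addConstr(-delta[i] <= M)
    eps = 1e-4                              # strict '>' approximated by '>= eps'
    for x in inputs:                        # corrected pre-activation must become positive
        model.addConstr(gp.quicksum((weights[i] + delta[i]) * x[i] for i in range(m)) >= eps)
    model.optimize()
    return [delta[i].X for i in range(m)] if model.status == GRB.OPTIMAL else None
```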
### Evaluating the performance similarity between the two models
The goal of quantized neural network repair is to bring the model's accuracy closer to (or even above) that of its floating-point version. Hence, fidelity accuracy [48] is chosen to quantify the performance similarity between the two neural network models. Let \(X\) denote the test set, \(f_{F}(x)\) the classification output of the floating-point model, and \(f_{Q}(x)\) the classification output of the quantized model. Fidelity accuracy is then defined as:
\[\text{Fidelity }=1-\text{Prob}\left\{x\in X\mid f_{F}(x)\neq f_{Q}(x)\right\} \tag{4}\]
In the equation, \(\text{Prob}\) denotes the probability that the floating-point model and the quantized model produce different outputs for the same input image.
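Equation (4) amounts to a disagreement count over the test set; a short sketch (our own, assuming prediction arrays from both models are available):

```python
import numpy as np

def fidelity(preds_float, preds_quant):
    """Fraction of inputs on which the two models agree (Equation 4)."""
    preds_float = np.asarray(preds_float)
    preds_quant = np.asarray(preds_quant)
    return 1.0 - np.mean(preds_float != preds_quant)
```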
### QNNRepair Algorithm
Our repair method is formulated in Algorithm 1. The inputs to the algorithm are the full-precision model \(F\), the quantized model \(Q\), the repair set \(X\), the validation set \(V\), and the number of neurons to be repaired \(N\). First, we initialize arrays to store the activation status of the floating-point and quantized models, the neuron importance values, and the four arrays \(C^{as}[]\), \(C^{af}[]\), \(C^{ns}[]\) and \(C^{nf}[]\) introduced in Section 3.1 (Line 1). All elements of these six arrays are set to 0.
Next, in lines 3-4, for each input \(x\in X\) in the repair set, we perform one inference pass, obtain the neurons' activation status in the corresponding model layers, and store it in the activation-status arrays of the floating-point and quantized models. In line 5, if \(x[i]\) is a failing test, we add the activation-status difference between the float model and the quantized model to \(C^{af}[i]\); otherwise we add it to \(C^{as}[i]\). In lines 11 and 12, we derive \(C^{nf}[]\) and \(C^{ns}[]\) from these counts according to the definitions in Section 3.1. We then calculate the importance of each neuron (Algorithm 1 uses DStar as an example; any of the seven importance metrics can be substituted), sort the values in descending order, and store them in \(I_{n}[]\) in line 14.
Then, we pick neurons from \(I_{n}[]\) in order; for each, using the neuron's weights and the corresponding inputs from the previous layer, we create and solve the LP problem discussed in Section 3.2, obtain the correction for the neuron, and update its weights. When the maximum number of neurons to repair is reached, the loop breaks and all selected neurons have been corrected. These steps are implemented at lines 17-24 of Algorithm 1.
```
Input: Floating-point model F, Quantized model Q, Repair set X,
       Validation set V, Number of neurons to be repaired N
Output: Repaired model Q', Repaired model's accuracy Acc
 1  Initialize F_a[][], Q_a[][], I_n[], C_as[], C_af[], C_ns[], C_nf[] to 0
 2  foreach x_i in X do
 3      F_a[][i] = getActStatus(F, x_i)
 4      Q_a[][i] = getActStatus(Q, x_i)
 5      if x_i is a failing test then
 6          C_af[] = C_af[] + |F_a[][i] - Q_a[][i]|
 7      else
 8          C_as[] = C_as[] + |F_a[][i] - Q_a[][i]|
 9      end if
10  end foreach
11  C_nf[] = (number of failing tests) - C_af[]
12  C_ns[] = (number of passing tests) - C_as[]
13  I_n[] = DStar(C_af[], C_nf[], C_as[], C_ns[])
14  I_n[] = sort(I_n[])                  // in descending order
15  Initialize neuron weights w[][] and increments delta[][]
16  foreach neuron[i] in I_n[] do
17      foreach edge[j][i] in neuron[i] do
18          w[j][i] = getWeight(edge[j][i])
19      end foreach
20      delta[][i] = solve(X, w[][i])    // solve LP problem (3)
21      foreach edge[j][i] in neuron[i] do
22          edge[j][i] = setWeight(w[j][i] + delta[j][i])
23          Q' = update(Q, edge[j][i])
24      end foreach
25      if i >= N then break
26  end foreach
27  Acc = calculateAcc(Q', V)
28  FidelityAcc = calculateFidelityAcc(Q, Q')
29  return Q', FidelityAcc
```
**Algorithm 1** Repair algorithm
Finally, we evaluate the fidelity accuracy of the corrected model. If it satisfies our requirements, the model is repaired. Otherwise, we try other combinations of parameters, such as a different importance metric or a different maximum number of neurons to repair, and repeat the LP solving and correction process. The output of the algorithm is the repaired model with updated weights.
## 4 Experiment
### Experimental Setup
We conduct experiments on a machine with Ubuntu 18.04.6 LTS, an Intel(R) Xeon(R) Gold 5217 CPU @ 3.00GHz, and two Nvidia Quadro RTX 6000 GPUs. The experiments are run with TensorFlow 2 on the Nvidia CUDA platform. We apply QNNRepair to a benchmark of five quantized neural network models: MobileNetV2 [33] on the ImageNet dataset [11], and ResNet-18 [22], VGGNet [35], and two simple convolutional models trained on the CIFAR-10 dataset [29]. The details of these models are given in Table 2.
We obtain the full-precision MobileNetV2 directly from the Keras library, whereas we trained the VGGNet and ResNet-18 models on the CIFAR-10 dataset ourselves. We also defined and trained two smaller convolutional neural networks on CIFAR-10 for comparison: Conv3, which contains three convolutional layers, and Conv5, which contains five; both models end with two dense layers. The quantized models are generated from the floating-point models using TensorFlow Lite (TFLite) [1]. In TFLite, the quantized convolution operation is optimized for performance, and the calculations are done in fixed-point arithmetic to avoid the overhead of de-quantizing and re-quantizing tensors.
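For illustration, full-integer post-training quantization in TFLite typically follows the sketch below (the representative-dataset generator and `float_model`/`calibration_batches` are assumed placeholders, not artifacts from our experiments):

```python
import tensorflow as tf

def representative_data_gen():
    # yield a few calibration batches shaped like the model input (placeholder data)
    for batch in calibration_batches:
        yield [tf.cast(batch, tf.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(float_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()  # INT8 model ready for deployment
```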
To repair the quantized models, we use a subset of ImageNet called ImageNet-mini [25], which contains 38,668 images in 1,000 classes. The dataset is divided into a repair set of 34,745 images and a validation set of 3,923 images. The CIFAR-10 dataset contains 60,000 images in 10 classes in total; 50,000 of them are training images and 10,000 form the test set, of which we use 1,000 images as the repair set. We use the repair set to identify suspicious neurons, generate LP constraints, and apply corrections to the identified neurons, and we use the validation set to evaluate the accuracy of the models. For random neuron selection, we repeat each experiment ten times and report the average to eliminate randomness in the repair method.
### Repair Results on Baselines
In this part, we apply QNNRepair to the baseline quantized models in Table 2, except for MobileNetV2. For each model, we perform a layer-by-layer repair of its last dense layers, which we name dense-3 (the third-to-last layer), dense-2 (the second-to-last layer), and dense-1 (the output layer).
The QNNRepair results are reported in Table 3. We ranked the neurons using the importance metrics and chose the best results among the seven metrics. We also run randomly-picked repair as a comparison, choosing the Top-1, Top-5, Top-10, Top-100, and all neurons as the repair targets.
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline & & & & \multicolumn{2}{c}{Accuracy} \\ \cline{3-6} Model & Dataset & \#Layers & \#Params & \multicolumn{1}{c}{floating point} & \multicolumn{1}{c}{quantized} \\ \hline Conv3 & CIFAR-10 & 6 & 1.0M & 66.48\% & 66.20\% \\ Conv5 & CIFAR-10 & 12 & 2.6M & 72.90\% & 72.64\% \\ VGGNet & CIFAR-10 & 45 & 9.0M & 78.67\% & 78.57\% \\ ResNet-18 & CIFAR-10 & 69 & 11.2M & 79.32\% & 79.16\% \\ MobileNetV2 & ImageNet & 156 & 3.5M & 71.80\% & 65.86\% \\ \hline \hline \end{tabular}
\end{table}
Table 2: The baseline models. Parameters include the trainable and non-trainable parameters in the models; the unit is million (M). The two accuracy values are for the original floating point model and its quantized version, respectively.
In most models, the repair improves the accuracy of the quantized network, and in some cases even beyond the accuracy of the floating-point model.
The dense-2 layer of the Conv3 model contains only 64 neurons, so we selected 30 neurons as the repair targets. In the dense-1 layer of Conv3, repairing individual neurons is not very effective, but as the number of repaired neurons increases, the quantized Conv3 model obtains more correct information from the floating-point model, so the accuracy gradually improves until it reaches 66.46% (which does not exceed the accuracy of the floating-point Conv3 network, 66.48%, but comes very close to it; see Table 2). This is because all the repair information in the last layer comes from the original floating-point network. Note that, owing to the simple structure of Conv3, the floating-point version is itself inaccurate, and the quantized and repaired network does not exceed its accuracy. In the dense-2 layer of Conv5, repairing with importance metrics is only slightly better than random selection: a margin of 0.01% when comparing randomly selecting 5 neurons with selecting the Top-5 neurons by fault localization. Compared to the quantized model before repair, whose accuracy is 72.64%, the repair only reaches 72.56%, which does not improve the model's accuracy. In the dense-1 layer of Conv5, the best result is obtained by using fault localization to pick the Top-1 neuron, at 72.58% accuracy, which is also not better than the quantized model before repair.
For the VGGNet and ResNet-18 neural networks, the dense-1 layer offers a good comparison. Both VGGNet and ResNet-18 have relatively complex network structures, and the accuracy of the original floating-point models is close to 80%. In the dense-1 layer of ResNet-18, only some of the neurons were repaired with
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Random} & \multicolumn{4}{c}{Fault Localization} & - \\ \cline{2-10} \#Neurons repaired & 1 & 5 & 10 & 100 & 1 & 5 & 10 & 100 & All \\ \hline Conv3\_dense-2 & 63.43\% & 64.74\% & 38.90\% & n/a & 66.26\% & **66.36**\% & 62.35\% & n/a & 57.00\% \\ Conv3\_dense-1 & 65.23\% & 66.31\% & - & n/a & 66.10\% & 66.39\% & - & n/a & **66.46**\% \\ Conv5\_dense-2 & 72.49\% & 72.55\% & 72.52\% & 72.52\% & **72.56**\% & 72.56\% & 72.56\% & 72.56\% & 72.54\% \\ Conv5\_dense-1 & 72.51\% & 72.52\% & - & n/a & **72.58**\% & 72.56\% & - & n/a & 72.56\% \\ VGGNet\_dense-3 & 78.13\% & 78.44\% & 78.20\% & 78.38\% & **78.83**\% & 78.82\% & 78.78\% & 78.66\% & 78.60\% \\ VGGNet\_dense-2 & 78.36\% & 78.59\% & 78.44\% & 78.22\% & 78.55\% & **78.83**\% & 78.83\% & 78.83\% & 78.83\% \\ VGGNet\_dense-1 & 78.94\% & 67.75\% & - & n/a & **79.29**\% & 69.04\% & - & n/a & 74.49\% \\ ResNet\_dense\_1 & 78.90\% & 78.92\% & - & n/a & 79.08\% & **79.20**\% & - & n/a & 78.17\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: QNNRepair results on CIFAR-10 models. The best repair outcome for each model, w.r.t. the dense layer in that row, is in **bold**. We further highlight the best result in _blue_ if the repair result is even better than the floating point model and in red if the repair result is worse than the original quantized model. Random means that we randomly select neurons at the corresponding dense layer for the repair, whereas Fault Localization refers to the selection of neurons based on important metrics in QNNRepair. In All cases, all neurons in that layer are used for repair. ’n/a’ happens when the number of neurons in the repair is less than 100, and ’-’ is for repairing the last dense layer of 10 neurons, and the result is the same as the All case.
accuracy close to that of the original quantized version, but none of them exceeded the accuracy of the floating-point network after repair. However, unlike ResNet-18, randomly correcting a single neuron in the dense-1 layer of VGGNet makes it more accurate than the quantized version of VGGNet, and using the importance metric to correct a single neuron makes the accuracy even higher than the floating-point version of VGGNet. Still, repairing dense-1 of VGGNet can be unsatisfactory: when 5 neurons are selected for repair, it suffers a significant loss of accuracy, even below 70%, which is regained if all ten neurons in the last layer are repaired. In the dense-2 layer of VGGNet, the overall accuracy is above 78%; when the importance metric is applied, the accuracy reaches 78.83%, and this accuracy is also achieved if all neurons in this layer are repaired. For the dense-3 layer of VGGNet, repairing 5 or 10 neurons using importance metrics achieves the highest accuracy, 78.83%, the same as repairing the dense-2 layer.
ImageNet. We also conducted repair on the last layer of MobileNetV2 trained on the ImageNet dataset of high-resolution images. Using Euclid as the importance metric and picking 10 neurons as the correction targets achieves the best result, at 70.77%, improving the accuracy of the quantized model.
### Comparison with Data-free Quantization
We tested SQuant [20], a fast and accurate data-free quantization framework for convolutional neural networks that employs the constrained absolute sum of error (CASE) of weights as the rounding metric. We tested SQuant on the same two quantized models as our approach: MobileNetV2 trained on ImageNet and ResNet-18 on CIFAR-10. We made some modifications to the original code to support MobileNetV2, which is not reported in their experiments.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{2}{c}{Random} & \multicolumn{2}{c}{Fault Localization} & - \\ \cline{2-5} \#Neurons repaired & 10 & 100 & 10 & 100 & All \\ \hline MobileNetV2.dense-1 & 70.75\% & 70.46\% & **70.77**\% & 70.00\% & 68.98\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: QNNRepair results on ImageNet model.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{MobileNetV2} & \multicolumn{2}{c}{ResNet-18} \\ \cline{2-5} & Accuracy & Time & Accuracy & Time \\ \hline SQuant [20] & 46.09\% & 1635.37ms & 70.70\% & 708.16ms \\ QNNRepair & **70.77\%** & \(\sim\)15h & **79.20\%** & \(\sim\)9h \\ \hline \hline \end{tabular}
\end{table}
Table 5: QNNRepair vs SQuant
In contrast to fully data-free quantization, our constraint-solver-based repair does not require a complete dataset, only a small number of input images for repair. Although QNNRepair takes much more time than SQuant because it relies on Gurobi and a constraint-solving approach, it achieves much higher accuracy on MobileNetV2, a complex model trained on ImageNet.
### Repair Efficiency
The constraint-solving part contributes the major computation cost in QNNRepair; the other operations, such as importance evaluation, weight modification, and model formatting, take only a few minutes to complete. Table 6 therefore measures the runtime cost of using Gurobi to solve for the new weights of each neuron in our experiments on the VGGNet model. As shown in Table 6, 75% of the solutions were completed within 5 minutes, and fewer than 9% of the neurons could not be solved, resulting in a total solution time of 9 hours for a layer of 512 neurons.
### Comparison Between Fault Localization Metrics in QNNRepair
We keep the model and the layer fixed, using MobileNetV2 and its last layer as the target, and compare the seven representative importance metrics introduced in Section 3.1. In these experiments, we used 1000, 500, 100, and 10 JPEG images as the repair sets to assess the performance of the different importance assessment methods.
First, we rank the neurons in the last layer using the seven representative importance metrics: Tarantula [26], Ochiai [2], DStar [43], Jaccard [3], Ample [8], Euclid [15] and Wong3 [44]. As shown in Figure 2, for the last fully connected layer of MobileNetV2, the important neurons are mainly concentrated at the two ends, i.e., the neurons with the first and last indices, and the metric values are relatively similar across neurons.
We selected, under each importance measure, the 100 neurons (30 for Conv3) with the highest importance whose constraints could be solved by the linear programming solver. The deltas are obtained according to Equation 3 and applied to the quantized model. After that, we use the validation set from ImageNet, which contains 50,000 JPEG images, to test the MobileNetV2 model. We also use the validation set from CIFAR-10, which contains 10,000 PNG
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Duration & \(<\)=5mins & 5-10mins & 10-30mins & 30mins-1h & No solution \\ \hline Percentage & 75\% & 8.98\% & 5.27\% & 1.76\% & 8.98\% \\ \hline \hline \end{tabular}
\end{table}
Table 6: The Gurobi solving time for constraints of each neuron in the dense-2 layer of the VGGNet model. There are 512 neurons in total.
image files, to test VGGNet and Conv5 after the repair. As a comparison, we also randomly picked 100 neurons to repair and tested the resulting accuracy. The results for the top-100 important neurons after selection and repair are shown in Table 7.
We pick Tarantula, which gives the best result when using ten images to repair the last layer of MobileNetV2, and plot scatter plots of the importance distribution across neurons. We also rank the neurons by importance and draw the corresponding line plots, as illustrated in Figure 2.
The figures give scatter plots of neuron importance and ranked line plots for the last dense layer of MobileNetV2; the horizontal coordinates are the indices of the neurons. For the last layer of MobileNetV2, only a few neurons have the highest importance: more than 300 neurons have an importance of 0, and another large proportion have an importance of 0.5 or less. Based on the importance ranking, all the evaluation metrics except Tarantula and Euclid consider neurons 108, 984, 612 and 972 to be the four most important in this layer, and agree on neurons 550, 974, 816, 795 and 702 among the 5th-10th most important, just in a different order. This is reflected in the importance distribution graphs as spikes at the two ends. Hence Ochiai, DStar, Jaccard, Ample, and Wong3 perform similarly in the accuracy evaluation, while Euclid and Tarantula achieve better accuracy on the ImageNet validation set. Table 7 shows that the Euclid importance assessment method is highly effective: it achieves relatively good results from repair with 500 images down to repair with ten images, and is weaker than the Tarantula method only in the 1000-image repair scenario. A random selection of neurons can also achieve good repair results, especially when we select 100 images as the repair set, where only the random selection method reaches a validation accuracy above 70%.
For the VGGNet model, for the same reason as MobileNetV2, Tarantula, Ochiai, DStar, Jaccard, Euclid, and Wong3 give the same results when selecting the top-100 important neurons to repair. As a comparison, the accuracy of random selection in the dense-2 and dense-3 layers drops slightly, to 78.38% and 78.22%. For the Conv3 model, the seven importance metrics give the same results, and randomly selecting 30 neurons suffers a large accuracy loss, at 32.42%. But compared
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline Model+Repair Layer & \#Images & Tarantula & Ochiai & DStar & Jaccard & Ample & Euclid & Wong3 & Random \\ \hline MobileNetV2\_dense-1 & 1000 & 70.61\% & 69.76\% & 69.73\% & 69.73\% & 69.72\% & **70.70\%** & 69.73\% & 69.56\% \\ & 500 & 68.99\% & 69.01\% & 69.05\% & 69.05\% & 68.99\% & **69.46\%** & 69.06\% & 69.00\% \\ & 100 & 69.50\% & 69.42\% & 69.46\% & 69.46\% & 69.53\% & **69.98\%** & 69.46\% & 70.12\% \\ & 10 & **70.62**\% & 70.15\% & 70.12\% & 70.12\% & 70.17\% & 70.73\% & 70.12\% & 70.18\% \\ \hline VGGNet\_dense-3 & 1000 & 78.64\% & 78.64\% & 78.64\% & 78.64\% & 78.65\% & **78.66\%** & **78.66**\% & 78.22\% \\ VGGNet\_dense-2 & 1000 & 78.83\% & 78.83\% & 78.83\% & 78.83\% & 78.83\% & 78.83\% & 78.83\% & **78.38**\% \\ Conv3\_dense & 1000 & **59.50**\% & **59.50**\% & **59.50**\% & **59.50**\% & 59.27\% & 59.27\% & 59.27\% & 32.42\% \\ \hline \hline \end{tabular}
\end{table}
Table 7: The results regarding importance metrics, including 7 fault localization metrics and 1 random baseline. The number of images indicates how many inputs are in the repair set.
to the results in Table 3, repairing the top-30 neurons also suffers accuracy drops. For the dense layer of Conv3, the best repair remains selecting one neuron based on the Tarantula ranking, at 66.10%; if random selection is taken into account, then selecting five neurons for repair gives the best result, at 64.74%.
We also conducted a side-by-side comparison of the number of images required for the repair of MobileNetV2. The best results are obtained using either 1000 images or 10 images for repair; however, given the time required to generate constraints from 1000 images and to solve them with Gurobi, we recommend using a smaller set of repair images for the model.
Euclid attains the highest accuracy most of the time, and repairing with importance evaluation is more accurate than repairing randomly selected neurons.
### Limitations
For larger models, solving the linear programming problem takes longer. Moreover, selecting more repair images for correction increases the likelihood that Gurobi cannot solve the resulting linear program, which limits the achievable accuracy improvement.
## 5 Related Work
### Neural Network Verification
The first practical methods supporting non-linear activation functions for neural network verification date back to 2017, when R. Ehlers et al. [13] proposed the first practicable neural network verification method based on a SAT
Figure 2: Importance distribution regarding certain importance metrics on MobileNetV2.
solver (solving the Boolean satisfiability problem) [12]; they present an approach to verify neural networks with piece-wise linear activation functions. Guy Katz et al. [28] presented Marabou, an SMT (satisfiability modulo theories) [10]-based tool that can answer queries about a network's properties by transforming these queries into constraint satisfaction problems. However, SMT-based neural network verification tools are limited by the search space and the scale of large neural network models, which usually contain millions of parameters [16]; SMT-based neural network verification has also been proved to be NP-complete [27]. Shiqi Wang et al. developed \(\beta\)-CROWN [42], a new bound-propagation-based method that can fully encode neuron splits via optimizable parameters \(\beta\) constructed from either the primal or the dual space. Their algorithm powers the \(\alpha\),\(\beta\)-CROWN (alpha-beta-CROWN) verifier, the winning tool in VNN-COMP 2021 [5]. There are also verification methods for quantized neural networks: T. A. Henzinger et al. [23] proposed a scalable quantized neural network verification method based on abstract interpretation, but it suffers from search-space explosion, and SMT-based quantized neural network verification has been proved to be PSPACE-hard [23].
### Neural Network Repair
Many researchers have proposed their full-precision neural network repairing techniques. These can be divided into three categories: Retraining, direct weight modification, and attaching repairing units.
In the first category of repair methods, the idea is to retrain or fine-tune the model to correct the output on identified misclassified inputs; DeepRepair [45], for example, implements transfer-based data augmentation to enlarge the training dataset before fine-tuning the models. The second category uses solvers to obtain corrected weights and modifies the weights of the trained model directly. Methods of this type, including [18] and [40], used SMT solvers to determine the weight modification needed at the output layer for the neural network to meet specific requirements without any retraining. The third category repairs models by introducing additional weight parameters or repair units to facilitate more efficient repair: PRDNN [39] introduces a new DNN architecture that enables efficient and effective repair, while DeepCorrect [6] corrects the worst distortion-affected filter activations by appending correction units. AIRepair [38] aims to integrate multiple existing repair techniques into a single platform. However, these methods only support full-precision models and cannot be applied to quantized models.
### Quantization-Aware Training
Some researchers use quantization-aware training to improve the performance of quantized models. Yuhang Li et al. proposed BRECQ (Block Reconstruction Quantization) [30], a post-training quantization framework based on analyzing the second-order error. Ruihao Gong et al. proposed Differentiable Soft Quantization (DSQ) [19] to bridge the gap between full-precision and low-bit networks; it can automatically evolve during training to gradually approximate standard quantization. J. Choi et al. [7] proposed PACT (PArameterized Clipping acTivation), a novel quantization scheme that clips activations during training and enables neural networks to work well with ultra-low-precision weights and activations without any significant accuracy degradation. However, these methods require retraining and the whole dataset, which consumes substantial computing power and time for small accuracy gains in practice. In addition, there is a method called data-free quantization, which quantizes a neural network model without any dataset: Cong Guo et al. proposed SQuant [20], which can quantize networks on inference-only devices with low computation and memory requirements.
## 6 Conclusion
In this paper, we presented QNNRepair, a novel method for repairing quantized neural networks, inspired by statistical fault localization in traditional software. We evaluated the importance of neurons in the neural network models and used Gurobi to compute corrections for these neurons. According to the experimental results, the accuracy after correction increases compared with the quantized model. We also compared our method with state-of-the-art techniques; the experimental results show that our method can achieve much higher accuracy when repairing models trained on large datasets.
## Acknowledgements
This work is funded by the EPSRC grant EP/T026995/1 _"EnnCore: End-to-End Conceptual Guarding of Neural Architectures"_. The Dame Kathleen Ollerenshaw Fellowship of The University of Manchester supports M. A. Mustafa.
|
2302.11467 | Power Constrained Autotuning using Graph Neural Networks | Recent advances in multi and many-core processors have led to significant
improvements in the performance of scientific computing applications. However,
the addition of a large number of complex cores have also increased the overall
power consumption, and power has become a first-order design constraint in
modern processors. While we can limit power consumption by simply applying
software-based power constraints, applying them blindly will lead to
non-trivial performance degradation. To address the challenge of improving the
performance, power, and energy efficiency of scientific applications on modern
multi-core processors, we propose a novel Graph Neural Network based
auto-tuning approach that (i) optimizes runtime performance at pre-defined
power constraints, and (ii) simultaneously optimizes for runtime performance
and energy efficiency by minimizing the energy-delay product. The key idea
behind this approach lies in modeling parallel code regions as flow-aware code
graphs to capture both semantic and structural code features. We demonstrate
the efficacy of our approach by conducting an extensive evaluation on $30$
benchmarks and proxy-/mini-applications with $68$ OpenMP code regions. Our
approach identifies OpenMP configurations at different power constraints that
yield a geometric mean performance improvement of more than $25\%$ and $13\%$
over the default OpenMP configuration on a 32-core Skylake and a $16$-core
Haswell processor respectively. In addition, when we optimize for the
energy-delay product, the OpenMP configurations selected by our auto-tuner
demonstrate both performance improvement of $21\%$ and $11\%$ and energy
reduction of $29\%$ and $18\%$ over the default OpenMP configuration at Thermal
Design Power for the same Skylake and Haswell processors, respectively. | Akash Dutta, Jee Choi, Ali Jannesari | 2023-02-22T16:06:00Z | http://arxiv.org/abs/2302.11467v1 | # Power Constrained Autotuning using Graph Neural Networks
###### Abstract
Recent advances in multi and many-core processors have led to significant improvements in the performance of scientific computing applications. However, the addition of a large number of complex cores has also increased the overall power consumption, and power has become a first-order design constraint in modern processors. While we can limit power consumption by simply applying power constraints, applying them blindly will lead to non-trivial performance degradation. To address the challenge of improving the performance, power, and energy efficiency of scientific applications on modern multi-core processors, we propose a novel Graph Neural Network based auto-tuning approach that (i) optimizes runtime performance at pre-defined power constraints, and (ii) simultaneously optimizes for runtime performance and energy efficiency by minimizing the energy-delay product. The key idea behind this approach lies in modeling parallel code regions as flow-aware code graphs to capture both semantic and structural code features. We demonstrate the efficacy of our approach by conducting an extensive evaluation on \(30\) benchmarks and proxy-/mini-applications with \(68\) OpenMP code regions. Our approach identifies OpenMP configurations at different power constraints that yield a geometric mean performance improvement of more than \(25\%\) and \(13\%\) over the default OpenMP configuration on a 32-core Skylake and a \(16\)-core Haswell processor, respectively. In addition, when we optimize for the energy-delay product, our auto-tuner-selected OpenMP configurations demonstrate both performance improvement of 21% and 11% and energy reduction of \(29\%\) and \(18\%\) over the default OpenMP configuration at Thermal Design Power for the same Skylake and Haswell processors, respectively.
Auto-tuning, OpenMP, GNN, Power constraint
## I Introduction
High-performance computing (HPC) systems have exploded in both capacity and complexity over the past decade, and this has led to substantial improvement in performance of various scientific applications. However, more complex larger systems consume more power, and in the absence of expensive cooling solutions, increased power consumption leads to higher operational temperature and inefficient resource utilization (via higher static power, shorter device lifespan, and more). As a result, power consumption has become a first-order hardware design constraint for modern multi- and many-core systems. Unfortunately, focusing on hardware advancements for reducing power consumption is insufficient, as inefficient usage of the underlying hardware due to poor parallel coding practices may negate any hardware improvements.
Many software solutions currently exist for controlling power. At the processor level, vendor-provided tools can be used to artificially lower power consumption. For example, power consumption can be controlled in recent Intel processors using the Running Average Power Limit (RAPL) interface [1], which ensures that an application does not exceed a predefined power budget. However, a common drawback of a fixed power budget is that it _slows down_ execution by lowering the processor clock, and this can have adverse effects on real-time or time-bound applications. At the data-center level, a common approach to reducing power consumption is through over-provisioning (i.e., have more hardware available than can be powered simultaneously at any time) and constraining the power limit for each node [2]. In such a setting, a static algorithm for distributing power across nodes may lead to _degraded throughput_, and a more sophisticated approach that adjusts the execution dynamically is required to harness the full potential of the underlying system.
One strategy to address both scenarios is to adjust the execution of the application directly, such that they meet some user-specified (e.g., individuals or data-centers) performance and/or power constraints. This will allow users to tailor their application to domain-specific environments (e.g., edge or mobile computing) or design scheduling policies for data-center power management. OpenMP, as the de-facto parallel programming model for intra-node parallelism, provides a number of tunable parameters that highly influence code execution, which makes it highly suitable for this purpose. While there is already a large body of work targeting performance tuning, there are only a few studies that target power. In addition, due to the large configuration search space for OpenMP on modern multi- and many-core processors, most of these studies require multiple executions to determine the optimal configuration [3, 4, 5, 6, 7, 8], which is both time consuming and resource intensive.
As a motivating example, we consider the _ApplyAccelerationBoundaryConditionsForNodes_ kernel from the LULESH [9] proxy application. On a \(16\)-core dual-socket Haswell processor with a Thermal Design Power (TDP) of \(85\)W, an exhaustive search of the OpenMP configuration space yields the highest speedups of \(7.54\times\), \(2.11\times\), \(1.80\times\) and \(1.67\times\) over the typical (or default) OpenMP configuration at power constraints of \(40\)W, \(60\)W, \(70\)W and \(85\)W, respectively. However, _none
of these OpenMP configurations lead to the highest energy efficiency_. The most energy-efficient execution occurs at a power constraint of 60W using an OpenMP configuration that leads to a greenup (i.e., greenup = \(\frac{Energy_{\text{old}}}{Energy_{\text{new}}}\)[10]) of \(3.89\times\), but a speedup of \(0.95\times\) (i.e., a _slowdown_) over the typical OpenMP configuration at TDP (\(85\)W). This contradicts the commonly held belief of _race-to-halt_[11] (i.e., the idea that the lowest energy consumption occurs at the highest speedup), and shows that optimizing for time and optimizing for energy may not yield the same OpenMP configuration. In addition, for applications where a slowdown is unacceptable, we can simultaneously optimize for time and energy by targeting the energy-delay product (EDP) metric [12]. Through an exhaustive search of the OpenMP configuration space, we observe that minimizing EDP yields a speedup of \(1.64\times\) and a greenup of \(2.7\times\), at yet another OpenMP configuration and power constraint.
In summary, optimizing for performance, power, and energy consumption all require different strategies for identifying the optimal OpenMP configuration, and optimizing for one metric (e.g., performance) does not necessarily optimize for another (e.g., energy). To this end, we propose a graph neural network (GNN)-based technique that can be used to (i) identify OpenMP configurations at prescribed power constraints that maximize performance and (ii) optimize for the _energy-delay product_ to identify configurations for both energy-efficient and performant execution.
In this study, OpenMP code regions are first transformed to a flow-aware graphical representation. These code graphs are then modeled by a GNN, and used for predicting the best configurations for the appropriate target. In contrast to prior studies, we use only these code graphs (i.e., static features) as inputs to our model, which does not require _expensive code execution_. The benefit of using a deep learning (DL)-based approach is that it automatically helps reduce the search space exploration by aggressively pruning non-beneficial points in the search space.
The works in [7, 8] studied the impacts of power constraints and OpenMP configurations on time and energy and are, to the best of our knowledge, most similar to the problem considered in this paper. To demonstrate the effectiveness of our static approach, we compare our results against a Bayesian Optimization based tuner BLISS[6], and a search-based tuner OpenTuner[4]. Through this study, we propose two separate approaches for tuning performance and energy/power. The first approach aims to identify the tuning configuration that can produce the fastest execution at a predefined power constraint. The second approach looks at both time and energy as target metrics and aims to optimize for both at the same time by identifying configurations that lead to the lowest _energy-delay product_. The key contributions of our work are as follows:
* We build an RGCN network to model flow-aware OpenMP code region graphs that captures both semantic and structural features of code regions, and is portable across different architectures.
* We build an auto-tuning framework that identifies OpenMP configurations yielding near optimal execution times at different power constraints. We achieve a geometric mean speedup of \(1.33\times\) and \(1.15\times\) over default OpenMP configurations at four power constraints across \(30\) applications on Skylake and Haswell systems.
* Our DL-based framework also optimizes for both time and energy simultaneously by minimizing the EDP. We achieve geometric mean speedup of \(1.27\times\) and \(1.12\times\), and greenup of \(1.40\times\) and \(1.22\times\) respectively on Skylake and Haswell, over default OpenMP configurations running at TDP (i.e., no power constraint).
* We compare our framework against the state-of-the-art BLISS[6] tuner and OpenTuner[4] and demonstrate better performance without the need for executing code.
## II Background
This section outlines concepts relevant to this work.
### _Static Code Representations for DL_
DL is increasingly used for code analysis and optimization tasks [13]. However, the use of DL necessitates a strong code representation capable of capturing the inherent features of source code. Many prior studies have represented programs as sequences of lexical tokens [14], but these fail to capture the structured nature of programs. To overcome this, representations capturing syntactic as well as semantic features have been proposed [13, 15].
These methods often do not take into account control, data, or call flows in the program. PROGRAML [14] is a tool that represents the semantic and structural features of code in a flow-aware multi-graph. These DL-friendly multi-graphs have a vertex for each instruction and control-flow edges between them. Data flow is represented by separate vertices for variables and constants and associated data-flow edges to instructions. Call flow is represented by edges between callee functions and caller instruction vertices. We use this tool to transform code region IRs to their corresponding graphs.
### _Power Constraining and Energy Profiling_
Starting with the SandyBridge \(\mu\)architecture, Intel introduced the RAPL software tool that enables power/energy monitoring and power capping through a simple interface. The power to several subsystems of the processor, such as memory, DRAM, CPU, etc can be controlled via RAPL. We use the _Variorum_[16] tool, which in turn uses RAPL and device MSRs, to control the power constraint on the CPU. We also use PAPI (with the RAPL component enabled) [17] to measure performance counters and energy profiling data.
### _Graph Neural Networks_
Recent advances in deep learning have now enabled the application of DL on data generated from non-Euclidean space [18]. The relations and dependencies between objects in such data can more readily be represented as a graph. GNNs were proposed as a means of modeling such data. Graph
Convolutional Networks (GCNs) are a form of GNNs aimed at generalizing the common _sliding window_ convolution operation on grid data in regular Convolutional Neural Networks to graphs [18]. A GCN network updates its node representation by aggregating the features from the node's neighbors along with the node. Similar to CNNs, GCNs stack multiple convolutional layers to extract high-level node representation. We use Relational Graph Convolutional Network (RGCN), a variation of GCN, to model our program graphs. RGCNs were proposed to enable networks to better model large-scale relational data [19]. RGCNs differ from GCNs in that they work with relation specific transformations annotated by the type and direction of edges. RGCNs accumulate transformed feature vectors through a normalized sum.
## III The PnP Auto-Tuner: A GNN based Power and Performance Tuner
In this section, we outline our two-pronged approach to tuning performance and power. We consider two scenarios with real-world implications: i) Because of cost and energy considerations, clusters and data-centers must usually work under strict power budgets. However, constraining power directly impacts performance by limiting the power delivered to hardware components. Therefore, assuming no code changes or compiler optimizations, tuning available runtime parameters becomes essential for improving application performance. ii) It is of utmost importance in most HPC systems to reduce energy consumption. This has a direct monetary and environmental impact. However, as shown in Section I, simply optimizing for energy, can potentially lead to slower executions. Therefore, we must optimize for a metric that considers both energy and performance. To this end, we target the multi-objective metric _energy-delay product (EDP)_. We use GNNs to build a model that will be used for the aforementioned tasks. The inputs to the GNNs are code flow graphs of OpenMP regions. Using such graphs allows us to model the semantics and structure of source code. These convey relevant information to the model about the code region being tuned. We refer to these input code graphs as static features, as these are obtained statically without any code executions. An overview of this pipeline is shown in Figure 1, and outlined in the following paragraphs.
### _Representing the Code_
In this study, we aim to optimize OpenMP code regions, which are usually the primary computational bottlenecks in such applications. Instead of focusing on individual loops inside these parallel regions, we optimize each parallel region as a whole for larger performance improvements. Tuning sub-regions within an OpenMP code region adds additional overhead: switching between configurations can improve the performance of each sub-region (loops, for example) but can degrade the performance of the OpenMP region and the application as a whole. The benchmark applications are initially compiled to their intermediate representations (IR). Compiling OpenMP code to its corresponding IR automatically encloses each parallel region in an outlined function, which we extract using the llvm-extract tool. As shown in Figure 1, to represent the code regions in a form usable by DL models, we use PROGRAML [14] to obtain the corresponding graph representations. These code graphs encapsulate the semantic and structural characteristics of code, as well as the data flow, control flow, and call flow in source code, as described in Section II-A.
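A sketch of this step using the ProGraML Python package (assuming the extracted OpenMP region is available as LLVM-IR text; the file name is a placeholder) might look like:

```python
import programl

# ir_text holds the LLVM-IR of an outlined OpenMP region (e.g., read from a .ll file)
with open("outlined_omp_region.ll") as f:
    ir_text = f.read()

graph = programl.from_llvm_ir(ir_text)   # flow-aware program graph (protobuf)
print(len(graph.node), len(graph.edge))  # instruction/variable nodes, typed flow edges
```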
### _Configuring the Search Space_
One of the primary motivations behind using a DL technique for this work was to develop a method that can work with large search spaces easily. Unlike most existing auto-tuners, which have to extensively execute programs to identify the best configurations, our DL-based framework will not need to execute programs. For the proposed DL approach to scale well to unseen code and inputs, it is necessary to feed the model with code graphs with enough variability. Along with variability in considered parallel code regions, it is essential to model the effect of various tuning parameters on these code regions. Different configurations impact code execution by affecting the load balancing and cache behavior, which in turn impacts performance.
As our goal is to target performance optimization and energy efficiency, we must simultaneously consider the impact of power constraints and OpenMP parameters on code executions. To this end, we have defined a search space (shown in Table I) with \(504\) valid configurations. In addition, the default OpenMP configurations for each of the four power limits have also been considered as valid configurations leading to a total of \(508\) configurations. The search space used in this study has been selected based on ideas presented by Bari et al. in [8].
### _Power Constraining and Dataset Creation_
In this work, we used the _Variorum_[16] tool for constraining power levels on each of the experimental systems. We used Variorum APIs to interface with RAPL and device MSRs to constrain power to the values described in Table I.
To validate our hypothesis, we chose to work with multiple OpenMP applications of varied complexity. These OpenMP regions consist of parallel regions ranging from simple do-all loops to regions with multiple loops with varying levels of nesting and diverse programmatic constructs. We work with \(25\) applications from the PolyBench suite [20], and the mini- and proxy-applications XSBench [21], RSBench [22], miniFE [23], miniAMR [24], Quicksilver [25], and LULESH [9], with a combined total of 68 OpenMP regions.
\begin{table}
\begin{tabular}{l l} \hline
**Search Space** & **Parameter Values** \\ \hline Power Limits & 75W, 100W, 120W, 150W (Skylake) \\ & 40W, 60W, 70W, 85W (Haswell) \\ Number of threads & 1, 4, 8, 16, 32, 64 (Skylake) \\ & 1, 2, 4, 8, 16, 32 (Haswell) \\ Scheduling Policy & STATIC, DYNAMIC, GUIDED \\ Chunk Sizes & 1, 8, 32, 64, 128, 256, 512 \\ \hline \end{tabular}
\end{table} TABLE I: Search space for performance and power tuning on Skylake and Haswell nodes.
At each power level, parallel OpenMP regions in all considered applications were executed for each runtime configuration in Table I and default OpenMP configurations (all threads, static scheduling, and compiler defined chunk sizes) on each system. The execution times obtained as such are then analyzed to identify the best configuration for each code region. The best configurations are used as labels during training.
### _Performance and Power Modeling_
This section outlines our GNN-based approach towards performance and power optimizations. We propose two tuning scenarios with different objectives:
* In the first scenario, we aim to identify the OpenMP configuration that lead to the fastest executions at a given power constraint.
* In the second scenario, we aim to identify both the OpenMP configuration and the power level that minimizes the EDP. By minimizing the EDP, we hope to improve the execution time _and_ energy efficiency in comparison to default OpenMP configuration at TDP.
#### III-D1 Code Graph Modeling using GNNs
For both scenarios, the code modeling technique is similar. Modeling code graphs allows us to capture code semantics and structure; analyzing code structure lets us better capture the interdependence between code blocks, which simply treating code as a sequence of text does not afford. The code graphs generated in Section III-A are initially passed through a GNN network. Specifically, Relational Graph Convolutional Networks (RGCNs) are used as these allow modeling relation-specific features. Each code graph contains three types of edges denoting the type of flow (Section III-A), and the edge types are used as edge features during modeling. For each node in a graph, the node features are the type of node and the associated IR code block. Before modeling, the code region IRs are used to generate an embedding that maps IR text to tensors; these tensors are then passed to the model as node features along with the node type. Based on these features, the GNN layers model the graph by passing "messages" between neighboring nodes, aggregating them, and updating the weights [26]. The output tensors from the GNN layers are then fed into fully connected neural network layers with the aim of identifying the best configurations.
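A minimal sketch of such a model with PyTorch Geometric's RGCNConv (the layer sizes and the three edge types — control, data, call — are our assumptions, not the paper's exact architecture) could be:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv, global_mean_pool

class CodeGraphClassifier(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, num_configs, num_relations=3):
        super().__init__()
        self.conv1 = RGCNConv(in_dim, hid_dim, num_relations)   # relation-aware convolutions
        self.conv2 = RGCNConv(hid_dim, hid_dim, num_relations)
        self.head = torch.nn.Linear(hid_dim, num_configs)       # dense classifier over configs

    def forward(self, x, edge_index, edge_type, batch):
        x = F.relu(self.conv1(x, edge_index, edge_type))
        x = F.relu(self.conv2(x, edge_index, edge_type))
        x = global_mean_pool(x, batch)      # one embedding per code-region graph
        return self.head(x)                 # logits over candidate configurations
```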
#### III-D2 Power Constraint Specific Auto-tuning
As noted in Section I, one way of meeting power consumption goals is to enforce a specific power constraint. Such power constraints can help limit the power drawn by a node or its subsystems. However, simply using default OpenMP runtime configurations at different power constraints for code execution may lead to performance degradation, as well as increased energy usage from static power. Therefore, we aim to identify those configurations that lead to speedups at predefined power constraints. We propose a DL based technique for power-constrained auto-tuning. As outlined in Section III-D1, we use the flow-aware code graphs obtained from the parallel code region IRs as inputs to the RGCN layers of our network. As shown in Figure 1, the RGCN layers model each such graph and feeds the output into a fully connected (dense) network. The dense layers acts as a classifier and are trained as such with the target of predicting the best configuration for a given OpenMP code region.
#### III-D3 Optimizing Energy and Time
For nodes and systems without any predefined power constraint, time and energy optimization are still of primary importance. However, simply optimizing for performance _or_ energy neglects the other criterion. Thus, in this section, we propose using power constraints as a tuning parameter, along with the available OpenMP runtime configurations, for joint optimization of performance and power. Execution time or energy savings alone are not enough to identify such configurations, so we use the _energy-delay product (EDP)_ metric [12] as a more accurate measure of the impact of different configurations on code performance. In this work, we assign equal importance to time and energy and use the metric \(E\ast T\), where \(E\) represents the energy consumption and \(T\) represents the execution time of a parallel code region.
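For reference, selecting the EDP-minimizing configuration from measured (time, energy) pairs is straightforward; the sketch below uses hypothetical measurement data for illustration:

```python
def best_edp_config(measurements):
    """measurements: dict mapping (power_limit, threads, schedule, chunk) -> (time_s, energy_j)."""
    return min(measurements, key=lambda cfg: measurements[cfg][0] * measurements[cfg][1])

# example: two hypothetical configurations
measurements = {(85, 32, "static", 64): (1.20, 95.0), (60, 16, "guided", 32): (1.35, 70.0)}
print(best_edp_config(measurements))  # the second config wins: EDP 94.5 < 114.0
```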
We again use the modeled code graphs from Section III-D1 as the static feature inputs to our model for this experiment and train our model with the target of optimizing the EDP. As in the previous subsection, a fully connected neural network serves as a classifier to identify the best configurations for tuning EDP. Using a DL-based approach for identifying the best one out of \(508\) possible configurations is especially beneficial, as such models are efficient at automatically pruning the underperforming configurations. This is in stark contrast to brute-force approaches, where the tuning cost would explode with increasing search space complexity.
## IV Experiments
To identify near optimum values of tuning parameters for both our experimental scenarios, we first explore every permutation of inputs and configurations considered in this study. We use this exhaustive exploration as an oracle to compare the results from our work. We also compare our work against BLISS[6] and OpenTuner[4]. All results presented in the following paragraphs represent speedups/greenups of each
Fig. 1: PnP Tuner Pipeline: An overview of tasks in our GNN based power and performance tuner
code region. For applications with multiple OpenMP regions, the geometric mean of speedups/greenups of all regions in an application are reported. We have also verified that there are sequences of serial code in between successive OpenMP regions. This allows us to look at each region as a self-contained unit, and makes them good candidates for tuning. We assume that the performance of these intervening serial sequences will not change and improving the performance of each OpenMP region would translate to improvement in application performance.
### _Experimental Setup_
For our experiments, we use two systems; one with Intel(R) Xeon(R) Gold \(6142\) CPU with \(32\) cores, two hyper-threads per core, and two sockets (Skylake) with a minimum and TDP package power of \(75W\) and \(150W\), and an Intel(R) Xeon(R) E5-2630 v3 CPU (Haswell), with 16 cores, two hyper-threads per core, and two sockets, and minimum and TDP package power of \(40W\) and \(85W\). We use Clang tools for code compilation and transformation to IR, and PyTorch DL libraries for building our GNN models.
### _Power Constrained Auto-tuning_
In this section, we evaluate the performance of our tuner in determining the optimal configuration for minimizing execution time under a specific power constraint (described in Section III-D2). To validate the effectiveness of our approach, we use _leave-one-out cross-validation (LOOCV)_. For each fold, the code regions from one benchmark application are assigned to the validation set, and the code regions from all other applications are assigned to the training set. We repeat this process for all applications. Such a process is essential to evaluate the performance of our model on previously unobserved code regions.
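A minimal sketch of this split is shown below; `regions_by_app`, an assumed mapping from application names to their code regions, is a placeholder for the actual dataset structure.

```python
# Leave-one-out cross-validation over applications: all code regions of the
# held-out application form the validation fold.
def loocv_folds(regions_by_app):
    for held_out, val_regions in regions_by_app.items():
        train_regions = [r for app, regions in regions_by_app.items()
                         if app != held_out for r in regions]
        yield held_out, train_regions, val_regions
```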
The results for the Haswell system are shown in Figure 2. For each application, we calculate the geometric mean speedup achieved by each tuner over all of the application's OpenMP regions across four power constraints (i.e., \(40\)W, \(60\)W, \(70\)W, \(85\)W).
While training the model on the data from the Skylake system, we borrow ideas from transfer/inductive learning and perform an optimization step to speed up the training process. Because the code graphs are statically generated, the code
Fig. 2: Power Constrained Tuning (Haswell): Each chart shows results for a specific power constraint. Each bar-group shows the geometric mean speedup for all OpenMP regions in an application over default OpenMP settings w.r.t. the corresponding tuning approach. Speedups are normalized by oracle (brute-force) speedups; normalized oracle speedups are always \(1.0\times\). The PnP tuner outperforms BLISS in \(82.5\%\) and OpenTuner in \(78\%\) of cases across all power constraints (see Section IV-B for details).
graphs obtained on different systems using the same compiler are identical. For this reason, we save the weights and model state of the GNN model obtained while training on the Haswell system. When training on the Skylake data, we load the saved weights and model and only re-train the dense layers. This leads to \(4.18\times\) faster training (a \(76\%\) reduction in training time).
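Sketched in PyTorch (reusing the hypothetical `PnPTunerNet` from the earlier sketch), the weight-reuse step amounts to loading the Haswell checkpoint, freezing the RGCN stack, and optimizing only the dense layers; the checkpoint file name is a placeholder.

```python
# Reuse Haswell-trained GNN weights on Skylake; re-train only dense layers.
model.load_state_dict(torch.load("pnp_haswell_checkpoint.pt"))
for p in model.convs.parameters():
    p.requires_grad = False  # code graphs are identical across systems
optimizer = torch.optim.AdamW(model.classifier.parameters(),
                              lr=1e-3, amsgrad=True)  # cf. Table II
```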
Results for each power constraint (\(75\)W, \(100\)W, \(120\)W, \(150\)W) are shown in Figure 3 for the Skylake system. Each speedup is normalized by the speedup achieved by the oracle (i.e., exhaustive exploration). In \(74\%\) of cases (across both systems and power constraints), our PnP tuner identifies configurations that reach \(\geq 0.95\times\) of the oracle speedups (taking the oracle as \(1.0\times\)). These results are obtained without executing the code. In contrast, BLISS and OpenTuner need to execute code multiple times and reach \(\geq 0.95\times\) of the oracle speedups in only \(51\%\) and \(34\%\) of cases, respectively. The PnP tuner produces better results than BLISS and OpenTuner in \(83\%\) and \(78\%\) of cases. Overall, the configurations predicted by our model lead to geometric mean speedups of \(1.19\times\), \(1.12\times\), \(1.13\times\), and \(1.14\times\) for power limits \(40W\), \(60W\), \(70W\), and \(85W\) on the Haswell system. In contrast, BLISS leads to speedups of \(1.11\times\), \(1.09\times\), \(1.09\times\), and \(1.11\times\) across these power constraints, and OpenTuner produces corresponding speedups of \(1.06\times\), \(1.0\times\), \(1.04\times\), and \(1.02\times\). On Skylake, our approach achieves geometric mean speedups of \(1.5\times\), \(1.25\times\), \(1.26\times\), and \(1.34\times\) across power constraints \(75W\), \(100W\), \(120W\), and \(150W\), compared to \(1.29\times\), \(1.2\times\), \(1.18\times\), and \(1.17\times\) for BLISS, and \(1.27\times\), \(1.13\times\), \(1.07\times\), and \(1.1\times\) for OpenTuner.
_Can performance counters further improve results?_ Although our approach reaches \(\geq 0.95\times\) of the oracle speedups in most cases, in approximately \(8\%\) of cases it produces results that are \(<0.8\times\) of the oracle speedups. Previous works such as [27, 28] have used performance counters for tuning tasks. We borrow these ideas to see whether our results can be improved by using such counters as dynamic features. For this experiment, we update our model definition: we make no changes to the GNN layers, but repurpose the fully connected layers to accept five performance counters as inputs along with the outputs from the GNN layers. We use PAPI [17] to collect counters related to L1, L2, and L3 cache misses, the number of instructions, and the number of
Fig. 3: Power Constrained Tuning (Skylake): Each chart shows results for a specific power constraint. Each bar-group shows the geometric mean speedup for all OpenMP regions in an application over default OpenMP settings w.r.t. the corresponding tuning approach. Speedups are normalized by oracle speedups; normalized oracle speedups are always \(1.0\times\). The PnP tuner outperforms BLISS in \(85\%\) and OpenTuner in \(83\%\) of cases across all power constraints (see Section IV-B for more details).
mispredicted branches for each OpenMP region. These counters were selected because they have a direct impact on code execution and performance.
We perform the same experiments as outlined in the previous paragraphs, but validate only on those applications whose speedups are \(<0.95\times\) of the oracle speedups. We find that, by including performance counters, this approach identifies configurations that reach \(\geq 0.95\times\) of the oracle speedups in \(87.5\%\) of cases (up from \(74\%\)). We show these results and comparisons in Figures 2 and 3. A case can therefore be made for including performance counters in DL-based performance tuning. This does come at the additional cost of profiling, which is needed to generate the training dataset; during inference, however, this approach (using both static and dynamic features) only needs to execute each application twice to collect the counters that serve as model inputs, which is still fewer executions than other execution-based tuners require.
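A sketch of the hybrid head is shown below: the five PAPI counters are concatenated with the pooled graph embedding before the dense classifier. Dimensions and any counter normalisation are assumptions for illustration.

```python
# Dense head for the static+dynamic variant: graph embedding plus counters.
import torch
import torch.nn as nn

class HybridHead(nn.Module):
    def __init__(self, hidden, num_configs, num_counters=5):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(hidden + num_counters, hidden), nn.ReLU(),
            nn.Linear(hidden, num_configs))

    def forward(self, graph_embedding, counters):
        # counters: tensor of shape (batch, 5) with the PAPI measurements
        return self.fc(torch.cat([graph_embedding, counters], dim=-1))
```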
_Can we extend this approach to unknown power constraints?_ There might be scenarios where adding/removing nodes to/from clusters, or other factors, necessitate changing the power constraints on nodes. Thus, our approach should also generalize to power constraints that our model has not been trained on, since data center policy changes may result in different power constraints being applied. To evaluate this scenario, we conduct four tests (two per system), one each for the lowest and highest power constraints considered in this paper. For each test, we first _exclude_ all measurements for the target power constraint (e.g., for the \(150\)W test on Skylake, we train using measurements from \(75\)W, \(100\)W, and \(120\)W only). We then train and validate our model using _leave-one-out cross-validation_ as before. This allows us to generalize to both unseen applications _and_ unseen power constraints. However, unlike the initial experiments, which use a _static-only_ approach, we include performance counters in the feature set for this experiment. This accounts for the variation in the runtime behavior of parallel regions under varying power constraints, a divergence that static features cannot capture. The input features and model are similar to those described in Section IV-B. In addition, we include the normalized power constraint as a feature for each feature set, which helps associate runtime behavior (performance counters) with power limits.
Figures 4 and 5 show that our model performs well in such scenarios for both the Skylake and Haswell systems, predicting configurations that are within \(5\%\) (i.e., \(\geq 0.95\) normalized speedup) of the best possible speedup in \(64\%\) of cases, and within \(20\%\) of the best possible speedups in \(85\%\) of cases, across both systems and four power constraints. On the Skylake system, these tuning efforts lead to geometric mean speedups of \(1.29\times\) and \(1.36\times\) versus oracle speedups of \(1.44\times\) and \(1.59\times\) for power constraints of \(150\)W and \(75\)W, respectively. On the Haswell system, these experiments produce speedups of \(1.13\times\) and \(1.17\times\) compared to oracle speedups of \(1.16\times\) and \(1.27\times\) for power constraints of \(85\)W and \(40\)W, respectively.
The hyperparameters of the models used in these experiments are shown in Table II. Other parameter values may differ slightly between experiments.
### _Power and Performance Tuning_
With increasing financial and environmental impacts of high energy usage, energy efficiency is now as important as performance in the current HPC landscape. However, simply optimizing for energy consumption, as shown in Section I, may lead to lower performance.
Thus, in this section, we address the second scenario outlined at the beginning of this section. To this end, we build a GNN-based tuner that uses only static features, with the aim of identifying a combination of power constraints and OpenMP runtime configurations that improves performance while reducing energy consumption. As in the previous experiments, we model our flow-aware code graphs using an RGCN network. The outputs from the GNN layers are fed into the dense layers, which are trained with
\begin{table}
\begin{tabular}{l l}
\hline \hline
**Hyperparameter** & **Value** \\
\hline
Layers & RGCN (4), FCNN (3) \\
Activation function & Leaky ReLU, ReLU \\
Optimizer & AdamW (amsgrad) (Sec. IV-B), Adam (Sec. IV-C) \\
Learning rate & 0.001 \\
Batch size & 16 \\
Loss function & Cross-entropy loss \\
\hline \hline
\end{tabular}
\end{table} TABLE II: Deep Learning Model Hyperparameters.
Fig. 4: Power Constrained Tuning on unseen power constraints (Skylake): Geometric mean speedup over default OpenMP settings. Results normalized by the oracle speedup.
Fig. 5: Power Constrained Tuning on unseen power constraints (Haswell): Geometric mean speedup over default OpenMP settings. Results normalized by the oracle speedup.
the target of finding configurations that produce the best energy-delay product (EDP). Again, we use _leave-one-out cross-validation_ to validate our model, and the process of assigning benchmark applications to the training and validation sets is similar to that described in Section IV-B.
The configurations predicted in these experiments lead to within \(5\%\) of the oracle EDP improvements in \(45\%\) of cases, and within \(20\%\) of the oracle improvements in \(69\%\) of cases across the two systems. In comparison, BLISS reaches these numbers in \(35\%\) and \(45\%\) of cases (Figure 6), and OpenTuner in \(22\%\) and \(40\%\) of cases. Overall, the configurations predicted by our _static-only_ approach lead to geometric mean improvements of \(1.37\times\) and \(1.85\times\) on the Haswell and Skylake systems, compared to \(1.31\times\) and \(1.69\times\) achieved by BLISS and \(1.21\times\) and \(1.49\times\) achieved by OpenTuner.
We have also analyzed the impact on execution time and energy consumption individually. Figure 7 shows the impact of tuning for EDP on execution time for both the Skylake and Haswell systems. Tuning for EDP leads to performance (time) improvement in \(84\%\) of cases and to slower execution than the default settings in around \(16\%\) of cases across both systems. On Skylake, all slowdowns are within \(20\%\) of the corresponding execution with all threads, while the geometric mean of all slowdowns is within \(14\%\) of the default executions. On the Haswell system, there are fewer slowdowns, but they are more significant: the largest slowdown is within \(30\%\) of the default all-threaded execution, with the geometric mean within \(23\%\) of the default settings. Overall, excluding the cases that lead to slowdowns, tuning for EDP yields \(1.16\times\) and \(1.3\times\) speedups on Haswell and Skylake. In comparison, BLISS and OpenTuner lead to slowdowns in \(28\%\) and \(46\%\) of cases, respectively, with the largest slowdowns within \(17\%\) and \(15\%\) for BLISS and within \(30\%\) and \(22\%\) for OpenTuner on Haswell and Skylake.
We also show in Figure 7 the impact of tuning for EDP on energy. Across both systems, our approach predicts configurations that reduce energy consumption in \(94\%\) of cases. In the remaining \(6\%\) of cases, it predicts configurations that consume more energy than the default setting, but the increase is minimal: a \(3\%\) geometric mean increase on Haswell and \(1\%\) on Skylake. For the predictions that do reduce energy usage, there is a geometric mean greenup of \(1.25\times\) and \(1.42\times\) on Haswell and Skylake, respectively. In comparison, \(2\%\) of the predictions made by BLISS increase energy consumption, and its overall greenups are slightly worse than the PnP tuner's (\(1.24\times\) on Haswell and \(1.39\times\) on Skylake). The predictions made by OpenTuner increase energy consumption in \(20\%\) of cases, with overall greenups of \(1.25\times\) and \(1.29\times\) on Haswell and Skylake, respectively.
Similar to the experiment in Section IV-B, we also evaluate the effect of performance counters on EDP. As shown in Figure 6, adding performance counters to the feature set leads to improved results: predictions where the EDP is within \(5\%\) of the oracle rise to \(57\%\) from \(45\%\) across both systems. Using performance counters leads to \(77\%\) of cases with improvement in execution speed (down from \(84\%\)). This dichotomous behavior results from using a fused metric: because EDP is a product of both time and energy, the PnP tuner targets the best EDP, which can lead to scenarios where the reduction in energy compensates for an increase in time. In this experiment, using performance counters leads to \(95\%\) of cases with improvements in energy consumption. Overall, with performance counters, the EDP improvements rise from \(1.37\times\) to \(1.52\times\) on the Haswell system and from \(1.85\times\) to \(2.31\times\) on the Skylake, leading to overall speedups of \(1.13\times\) and \(1.39\times\) and greenups of \(1.35\times\) and \(1.60\times\) on the Haswell and Skylake systems, respectively.
## V Related Work
This paper proposes a GNN-based technique for performance, power, and energy optimization. Modifying runtime and environmental parameters has a large impact on parallel code execution. Several search-based tuners, such as [3, 4], have been proposed for these tasks; they use search techniques such as Nelder-Mead, Torczon hillclimbers, and AUC Bandit for pruning and optimizing the search space. An alternative to search-based auto-tuning is to use machine learning (ML) based approaches. Several works, such as [5, 27, 28, 29, 30, 31, 32, 33], have proposed machine/deep learning based auto-tuners or tuning approaches for various parameter
Fig. 6: Improvement in EDP over default OpenMP configurations for each application for the Haswell system. EDP improvements normalized in terms of best achievable EDP improvement.
tuning tasks. Recently, Bayesian optimization (BO) has been used in several works (such as [6, 29]) for faster sampling of search spaces. BO has gained popularity as it can be used as a black-box optimizer for an unknown objective function. A drawback of most of these techniques is their need for multiple sampling executions; although faster than brute-force tuning, this is still a significant overhead. Our proposed _static-only_ approach aims to eliminate this overhead by not executing code at all when tuning a fixed set of parameters.
Most of the works mentioned above do not consider power constraints. A number of papers have focused on dynamic voltage frequency scaling (DVFS) and dynamic concurrency throttling (DCT) techniques for improving energy efficiency [34, 35, 36]. Wang et al. [37] proposed using CPU clock modulation and concurrency throttling to improve the energy efficiency of OpenMP loops. In [38], Nandamuri et al. analyzed the performance and energy consumption of OpenMP programs under various conditions using the OpenMP Runtime API. The work in [39] presented the performance and energy impact of CPU parameters and runtime systems on dense linear algebra OpenMP kernels. In contrast to these works, our approach focuses on reducing energy consumption and improving performance through power constraints.
Rountree et al. [40] provided a first insight into the impacts of power capping, or constraints, on power and performance. Patki et al. [41] outlined how overprovisioning hardware under hardware-enforced power bounds leads to improved performance. To the best of our knowledge, the works in [7, 8] are the closest to ours. Bari et al. [7] proposed ARCS, which automatically selects the best runtime configurations for OpenMP parallel regions at specified power constraints, and in [8] analyzed the impact of power constraints on performance and energy consumption for five NAS benchmarks. In contrast to [7, 8], our approach uses an AI-assisted technique based on GNNs to identify OpenMP runtime configurations and power constraints.
## VI Discussion
Through this study, we have outlined a unique approach to two important problems in the HPC community. We have proposed a mechanism of tuning OpenMP configurations on power constrained systems. This is beneficial to data centers and systems working under strict power budgets. As shown in previous sections, it is possible to considerably improve performance in such scenarios using our approach. Additionally, we also describe a method of identifying OpenMP configurations and power constraints that can lead to reduction in energy consumption with limited to no impact on execution time. To the best of our knowledge, this is the first work that aims to use GNN based techniques for these purposes.
As with all DL techniques, training is an overhead. Retraining a model for several target systems might be burdensome. However, by using transfer learning techniques, we have reduced the training time on other systems by around \(76\%\) on a dataset of similar size (explained in Section IV-B). These optimizations can enable faster and easier deployment of such approaches on multiple systems.
Additionally, being a static approach, our tuner requires no sampling executions. This is in contrast to other tuners that need several sampling runs; limiting the number of sampling runs, or setting a small time bound on the sampling phase, leads to suboptimal results. Moreover, our approach was able to successfully identify most edge cases.
Fig. 7: Speedups/Greenups over default OpenMP configurations at TDP. Configurations are predicted to optimize for EDP.
For example, the OpenMP region in trisolv executes fastest with a single thread in all cases, which is an outlier. Our approach could identify near-optimal configurations in these cases as well, with no executions.
Due to installation issues, we were not able to directly use the APEX framework described in [7]. To overcome this, we used OpenTuner, another search-based tuner, as a replacement. To contrast our work with other tuners, we present the following example. To tune an OpenMP region, BLISS needs 20 sampling runs per code region. In the case of OpenTuner, the "stop-after" flag must be manipulated to allow the tuner to sample code executions; the time bound must be increased for more complex applications, and in most cases in this paper it was set to \(180\) seconds or more. A trained PnP tuner, on the other hand, needs no code executions.
## VII Conclusion
In this work, we have outlined a twofold approach towards tuning OpenMP configurations on power-constrained systems, as well as tuning both OpenMP configurations and power constraints for gains in execution time and energy consumption. We have used GNNs on flow-aware code graphs to model the semantic and structural features of code regions. Our experiments show that the PnP tuner can identify configurations that lead to improvements in execution time and energy consumption. In the future, we aim to analyze the scalability of our approach to heterogeneous platforms and handheld devices.
## VIII Acknowledgements
This research was supported by the National Science Foundation under Grant number 2211982. We would also like to thank the ResearchIT team ([https://researchit.las.iastate.edu](https://researchit.las.iastate.edu)) at Iowa State University for their constant support.
|
2305.06802 | Physics-Informed Neural Networks for Discovering Localised Eigenstates
in Disordered Media | The Schr\"{o}dinger equation with random potentials is a fundamental model
for understanding the behaviour of particles in disordered systems. Disordered
media are characterised by complex potentials that lead to the localisation of
wavefunctions, also called Anderson localisation. These wavefunctions may have
similar scales of eigenenergies which poses difficulty in their discovery. It
has been a longstanding challenge due to the high computational cost and
complexity of solving the Schr\"{o}dinger equation. Recently, machine-learning
tools have been adopted to tackle these challenges. In this paper, based upon
recent advances in machine learning, we present a novel approach for
discovering localised eigenstates in disordered media using physics-informed
neural networks (PINNs). We focus on the spectral approximation of Hamiltonians
in one dimension with potentials that are randomly generated according to the
Bernoulli, normal, and uniform distributions. We introduce a novel feature to
the loss function that exploits known physical phenomena occurring in these
regions to scan across the domain and successfully discover these eigenstates,
regardless of the similarity of their eigenenergies. We present various
examples to demonstrate the performance of the proposed approach and compare it
with isogeometric analysis. | Liam Harcombe, Quanling Deng | 2023-05-11T13:51:21Z | http://arxiv.org/abs/2305.06802v2 | # Physics-Informed Neural Networks for Discovering Localised Eigenstates in Disordered Media
###### Abstract
The Schrodinger equation with random potentials is a fundamental model for understanding the behavior of particles in disordered systems. Disordered media are characterised by complex potentials that lead to the localisation of wavefunctions, also called Anderson localisation. These wavefunctions may have similar scales of eigenenergies which poses difficulty in their discovery. It has been a longstanding challenge due to the high computational cost and complexity of solving the Schrodinger equation. Recently, machine-learning tools have been adopted to tackle these challenges. In this paper, based upon recent advances in machine learning, we present a novel approach for discovering localised eigenstates in disordered media using physics-informed neural networks (PINNs). We focus on the spectral approximation of Hamiltonians in one dimension with potentials that are randomly generated according to the Bernoulli, normal, and uniform distributions. We introduce a novel feature to the loss function that exploits known physical phenomena occurring in these regions to scan across the domain and successfully discover these eigenstates, regardless of the similarity of their eigenenergies. We present various examples to demonstrate the performance of the proposed approach and compare it with isogeometric analysis.
keywords: Schrodinger equation, Hamiltonian, Anderson localisation, eigenvalues and eigenstates, neural networks, isogeometric analysis
## 1 Introduction and Background
Partial differential equations (PDEs) are powerful tools for studying dynamic systems and have applications across various fields. Despite their usefulness, these equations are notoriously difficult to solve explicitly, leading to an increased interest in numerical approximations. In addition to classical numerical
methods, recent research has shown the potential of using neural networks to estimate solutions to differential equations [26; 27; 31]. There are several advantages of using neural networks to solve PDEs over traditional numerical methods. Firstly, numerical errors are usually not accumulated [33]. Secondly, they are more robust against the "curse of dimensionality," which refers to the difficulty of solving PDEs in high-dimensional spaces using traditional numerical methods [19]. Furthermore, they can be trained to solve a wide range of PDEs, including those with complex boundary conditions or nonlinear operators [16]. Once trained, neural networks can quickly produce solutions for new PDEs that are similar to those encountered during training, potentially without requiring further adjustments or recalibration of parameters [13]. Overall, neural networks can provide a powerful and flexible tool for solving PDEs, especially in cases where traditional numerical methods may be insufficient or computationally expensive.
Data-driven supervised networks [15] and data-free unsupervised networks [23] have been shown to produce efficient approximations of differential eigenvalue problems. Supervised networks aim to discover patterns in labelled datasets to infer a general model that is able to predict examples outside of the given dataset. The training process involves calculating approximations for given inputs and computing how much these estimations deviate from the true outputs using a loss function. Then, the networks work to minimise this loss function by adjusting its estimation process. Artificial Neural Networks (ANNs) are comprised of layers of interconnected nodes with weights and biases that calculate the estimations. Convolutional Neural Networks (CNNs) are a newer form of neural networks that are especially known for their ability to identify patterns faster and more reliably than ANNs in solving eigenvalue problems using labelled datasets [15]. They introduce a convolutional layer built up of windows that slide around the input data, searching for patterns in whole regions at a time. They are commonly used to pass information on to ANNs and are optimised similarly. In situations where training data is unavailable or difficult to obtain, unsupervised networks are employed as a solution, operating without a labelled dataset. In these situations, more attention is drawn to the design of the network's loss function, as there is no labelled data to compare predictions with [7].
In the case of finding the eigenstates of Hamiltonians, we leverage known physical phenomena to design our loss function. This kind of model is an example of a Physics Informed Neural Network (PINN, c.f., [4; 35]), one that embeds the knowledge of physical laws governing the system into the learning process to drive the network to an admissible approximation.
Differential eigenvalue problems occur in a wide range of problems in physics and applied mathematics, such as quantum energy problems. In early works, Lagaris et al. [27] proposed an ANN to discover the solutions to differential eigenvalue problems, tested on various problems in quantum mechanics. More recently, Finol et al. [15] presented a supervised ANN to solve eigenvalue problems in mechanics, showing that CNNs were outperforming traditional ANNs in contexts with labelled datasets. Sirignano and Spiliopoulos [36] developed a deep learning network to accurately solve partial differential equations in dimensions as high as 200. They also proposed a mesh-free algorithm, which was desirable as meshes become infeasible in such high dimensions. Chen et al. [6] adopted the emerging PINNs to solve representative inverse scattering problems in photonic metamaterials and nano-optic technologies. Their method was also mesh-free and was tested against numerical simulations based on the finite element method. These ANN models are excellent at learning time series data. The work [32] demonstrated its application in wave packet displacements in localised and delocalised quantum states. Another work is [25], where they adopted CNNs to solve a classification problem. The authors constructed a supervised model learning from experimental data to distinguish between the dynamics of an Anderson insulator and a many-body localised phase. Yang et al. [38] proposed a combination of PINNs with Bayesian Neural Networks, which solved both forward and inverse nonlinear problems, obtaining more accurate predictions than PINNs in systems with large noises.
Jin et al. [23] have shown that PINNs can solve single and multiple square well problems, calculating the eigenfunction and eigenvalue simultaneously. Their network employs a scanning mechanism that pushes the network to search for higher eigenvalues as the training process evolves, while the network updates the eigenfunction prediction accordingly. They store found eigenfunctions and exploit the orthogonality of wave functions to search for higher eigenstates. Grubisic et al. [18] applied PINNs to identify localised eigenstates of operators with random potential. They studied the effective potential of the operator, whose local minima correspond to the different eigenstates. They first built a PINN to solve these eigenvalue problems in one dimension, then generalised to a deeper model that solves higher dimensional problems. Effective potentials provide a neat way of locating the localised eigenstates, however, are limited in the number of locations they can identify, and have difficulty differentiating between eigenstates of identical eigenvalue. These limitations occur in Bernoulli distributed potentials, which we aim to solve with our model.
We build on the design of [23] to approximate these eigenstates for Hamiltonians whose potential is distributed randomly, leading to localisation of the solution to specific regions of the domain. In particular, the core of our work is in finding these eigenstates where the eigenenergies are nearly identical, a task that current models are unable to accomplish. This is most prevalent with potentials distributed according to the Bernoulli distribution but can occur in other distributions.
The rest of the paper is organised as follows. In Section 2, we state the Schrodinger differential eigenvalue problem and present the isogeometric analysis, followed by a discussion on the challenges of its spectral approximation. In Section 3, we propose our novel loss function design for the PINN for solving the Schrodinger equation. Section 4 collects and discusses various numerical tests to demonstrate the performance of the proposed method. Concluding remarks are presented in Section 5.
## 2 Problem Statement and Isogeometric Analysis
In this section, we first introduce the modeling problem, followed by a presentation of a classic numerical method, namely isogeometric analysis, used to solve the problem. We then provide a general discussion on the challenges associated with the numerical computation of the model.
### Problem Statement
We study the time-independent Schrodinger equation: Find the eigenpair \((E,u)\) such that
\[\begin{split}\left[-\frac{\hbar^{2}}{2m}\Delta+V\right]u& =Eu\quad\text{in}\quad\Omega,\\ u&=0\quad\text{on}\quad\partial\Omega,\end{split} \tag{2.1}\]
where \(\Delta=\nabla^{2}\) is the Laplacian, \(V=V(x)\in L^{2}(\Omega)\) specifies the potential and is a non-negative function, and \(\Omega\subset\mathbb{R}^{d},d=1,2,3,\) is a bounded open domain with Lipschitz boundary \(\partial\Omega\). Throughout the paper, we will focus on the case with \(d=1\), followed by an example of an extension to \(d=2\) in Section 4.3. The differential operator is referred to as the Hamiltonian, i.e., \(\mathcal{H}=-\frac{\hbar^{2}}{2m}\Delta+V\). Here, \(\hbar\) is the reduced Planck constant and \(m\) is the mass of the particle. \(\frac{\hbar^{2}}{2m}\) is a constant that we can divide throughout equation 2.1 and absorb into \(V\) and \(E\). Thus, finding the eigenpairs in equation 2.1 is equivalent to finding those for the following equation, up to a scalar:
\[\begin{split}-\Delta u+Vu&=Eu\quad\text{in}\quad \Omega,\\ u&=0\quad\text{on}\quad\partial\Omega.\end{split} \tag{2.2}\]
This problem is a Sturm-Liouville eigenvalue problem (\(-\Delta+V\) is a diffusion-reaction operator, see, for example, [14; 37]) which has a countable infinite set of positive eigenvalues \(E_{j}\in\mathbb{R}^{+}\)
\[0<E_{1}<E_{2}\leq\cdots\leq E_{j}\leq\cdots \tag{2.3}\]
with an associated set of orthonormal eigenfunctions \(u_{j}\)
\[(u_{j},u_{k})=\int_{\Omega}u_{j}(x)u_{k}(x)\;\text{d}\mathbf{x}=\delta_{jk}, \tag{2.4}\]
where \(\delta_{jk}\) is the Kronecker delta which is equal to 1 when \(j=k\) and 0 otherwise. The set of all the eigenvalues is the spectrum of the Hamiltonian. We normalise the eigenfunctions in the \(L^{2}\) space; hence, the eigenfunctions are orthonormal under the scalar inner product. Let us define two bilinear forms
\[a(w,v)=\int_{\Omega}\nabla w\cdot\nabla v+Vwv\;\text{d}\mathbf{x}\quad\text{and} \quad b(w,v)=(w,v)=\int_{\Omega}wv\;\text{d}\mathbf{x},\quad\forall w,v\in H^{1}_{ 0}(\Omega), \tag{2.5}\]
where \(H^{1}_{0}(\Omega)\) is a Sobolev space with functions vanishing at the boundary \(\partial\Omega.\) Using this notation, the eigenfunctions are also orthogonal with each other with respect to the energy's inner product, i.e.,
\[a(u_{j},u_{k})=E_{j}(u_{j},u_{k})=E_{j}\delta_{jk}. \tag{2.6}\]
We remark that these orthogonalities are critical in the development of the proposed neural networks in Section 3.
### Isogeometric Analysis
At the continuous level, the weak formulation for the eigenvalue problem (2.2) is: Find all eigenvalues \(E\in\mathbb{R}^{+}\) and eigenfunctions \(u\in H^{1}_{0}(\Omega)\) such that,
\[a(w,u)=Eb(w,u),\quad\forall\;w\in H^{1}_{0}(\Omega), \tag{2.7}\]
while at the discrete level, the isogeometric analysis (IGA, c.f., [8; 10; 12; 22]) for the eigenvalue problem (2.2) is: Find all eigenvalues \(E_{h}\in\mathbb{R}^{+}\) and eigenfunctions \(u_{h}\in W_{h}\) such that,
\[a(w_{h},u_{h})=E_{h}b(w_{h},u_{h}),\quad\forall\;w_{h}\in W_{h}(\Omega), \tag{2.8}\]
where \(W_{h}\subset H^{1}_{0}(\Omega)\) is the trial and test space spanned by the B-spline or non-uniform rational basis spline (NURBS) basis functions [9].
In this paper, for the purpose of comparison with the state of the art, we adopt the recently-developed soft isogeometric analysis (softIGA, c.f., [11; 29]). SoftIGA has a similar variational formulation as (2.8), which leads to the matrix eigenvalue problem
\[\mathbf{K}\mathbf{U}=E_{h}\mathbf{M}\mathbf{U}, \tag{2.9}\]
where \(\mathbf{K}_{jk}=a(\phi_{j},\phi_{k}),\mathbf{M}_{jk}=b(\phi_{j},\phi_{k}),\) and \(\mathbf{U}\) contains the coefficients of the eigenfunction \(u_{h}\), for its representation in the linear combination of the B-spline basis functions. For simplicity, the matrix \(\mathbf{K}\) (although it contains a scaled mass) is referred to as the stiffness matrix while the matrix \(\mathbf{M}\) is referred as the mass matrix, and \((E_{h},\mathbf{u}_{h})\) is the unknown eigenpair. We refer to [11; 29] for details.
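Once \(\mathbf{K}\) and \(\mathbf{M}\) are assembled, the discrete eigenpairs in (2.9) can be computed with a standard generalized symmetric eigensolver. The sketch below assumes dense, already-assembled NumPy arrays `K` and `M` and uses SciPy.

```python
# Solve K U = E_h M U for the assembled (symmetric) stiffness/mass matrices.
from scipy.linalg import eigh

E_h, U = eigh(K, M)   # ascending eigenvalues; columns of U are eigenvectors
# For large sparse problems, scipy.sparse.linalg.eigsh(K, k=10, M=M, sigma=0)
# returns only the smallest few eigenpairs.
```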
### A Few Challenges on Numerical Computation
There are a few challenges in the numerical computation, such as using IGA or softIGA, of the Schrodinger equation with random potentials. Firstly, there is a non-uniqueness of solutions. The Schrodinger equation with random potentials can have multiple solutions that correspond to the same (or very similar) energy level. This non-uniqueness can make it difficult to accurately identify the correct eigenstates, particularly when dealing with complex potential landscapes. Secondly, it can be a multi-scale problem with large sampling/discretisation errors. Random potentials can vary over multiple length scales, which leads to inaccurate numerical simulations and can make it challenging to choose an appropriate discretisation size and mesh. Thirdly, the Schrodinger equation for many-body problems in multiple dimensions suffers from the curse of dimensionality. The number of parameters needed to describe the random potential can be very large. As the number of dimensions increases, the computational cost of solving the Schrodinger equation can increase exponentially.
Last but not least, the Schrodinger equation with random potentials is a complex mathematical problem, and solving it numerically can be computationally expensive. This is particularly true for large system sizes and high disorder strengths, which require a large number of numerical simulations. Moreover, solving the Schrodinger equation with a different random potential requires constructing a new matrix eigenvalue problem (2.9), which can be a time-consuming process. This can become particularly challenging when solving the equation for a large number of random potentials. In such cases, one potential solution is to train a neural network that takes the random potential as input and produces the corresponding eigenstates as outputs. For instance, to solve the Schrodinger equation with \(N\) different random potentials, one could use a classic method like softIGA and apply it \(N\) times, or alternatively, train a neural network using data from a subset of cases and then use the well-trained neural network to solve the remaining cases. This approach can save a significant amount of computational time, particularly when \(N\) is large. With these challenges in mind, we introduce the following neural-network-based method as an alternative method to solve the time-independent Schrodinger equations.
## 3 The Physics-Informed Neural Network
Deep neural networks are a type of machine learning algorithm inspired by the structure and function of the human brain. They are composed of multiple layers of artificial neurons, each performing a nonlinear transformation of the input data. The output of each layer is fed as input to the next layer, allowing the network to learn increasingly complex features and relationships in the data.
Some common types of deep neural networks include convolutional neural networks (CNNs) for image processing and computer vision tasks [28], long short-term memory models (LSTMs) for sequential data processing [21], deep belief networks (DBNs) for unsupervised learning [20], and generative adversarial networks (GANs) for image and video synthesis [17]. Deep neural networks have demonstrated impressive performance in many applications. In this paper, we adopt the physics-informed neural networks (PINN) with a novel loss function design to solve the Schrodinger equations.
### PINN
The Physics-Informed Neural Networks (PINNs) are a type of feed-forward neural networks (FNNs) that incorporate physics laws, typically in the form of partial differential equations (PDEs), into their loss functions [30; 35]. This combination of neural networks and PDEs makes PINNs a powerful tool for solving complex PDEs in various scientific and engineering applications. The overall process of using PINNs to solve a PDE can be summarized as follows. First, a neural network architecture is constructed. This involves defining the structure of the network, including the number and activation functions of its layers, as well as the number of neurons in each layer. Next, the loss function is defined. The loss function quantifies the discrepancy between the neural network's predictions and the actual solution to the PDE. In the case of PINNs, the loss function incorporates terms that enforce the PDE itself, as well as any associated initial or boundary conditions. Once the neural network and loss function are established, the network is trained using an optimization algorithm, such as the Adam optimiser [24]. During the training process, the neural network adjusts its weights and biases to minimise the loss function, gradually improving its ability to approximate the PDE solution. After training, the accuracy of the neural network's predictions can be evaluated by comparing them against calculations made by current numerical methods, such as isogeometric analysis [8]. If the predictions are not satisfactory, further refinement can be performed. This may involve modifying the network architecture, adjusting the loss function, or selecting a different optimization algorithm. The refined network is then retrained to improve its performance. Our PINN implementation is summarised in Algorithm 1.
Typically, the loss function in PINN is defined as
\[L=L_{\text{de}}+L_{\text{reg}}, \tag{3.10}\]
where \(L_{\text{de}}\) specifies the loss associated with the PDE and \(L_{\text{reg}}\) specifies the loss associated with the regularisation such as the boundary conditions. The loss function term \(L_{\text{de}}\) involves derivatives which are usually evaluated by the automatic differentiation by Tensorflow [1] or by Pytorch [34]. In the following subsection,
we present a goal-oriented loss function for the best performance in solving the Schrodinger equations with random potentials. A scale factor is assigned to each regularisation term in \(L_{\text{reg}}\). These scale factors are considered hyperparameters and are tuned to emphasise certain constraints that the network should adhere to in its prediction.
### Loss Function Design
To demonstrate the main idea, we simplify our focus to the one-dimensional case. We assume that the potential \(V(x)\) is randomly distributed over \(m\) regions of the domain \([0,1]\), and each region has a random value. Each set of random values then characterises the individual differential eigenvalue problem to be solved for Anderson localised states. We partition the interval \([0,1]\) into \(n+1\) equally spaced points \(x_{j}=jh,j=0,1,\cdots,n\), where \(h=1/n\) is the grid/mesh size. We denote by \(\mathbf{u}_{h}=(u_{h}(x_{0}),u_{h}(x_{1}),\cdots,u_{h}(x_{n}))^{T}\) the vector of approximate values of the true solution \(u(x)\) at these coordinates \(x_{j}\). Similarly, we denote by \(\mathbf{V}=(V(x_{0}),V(x_{1}),\cdots,V(x_{n}))^{T}\) the vector of values of the potential \(V(x)\). In our PINN model, we choose \(n\) such that each region has exactly 5 nodes, for accuracy as well as for training efficiency. To approximate the second derivative vector \(u^{\prime\prime}(x)\), we use the central finite difference method with an accuracy order of six, i.e., the error is of \(\mathcal{O}(h^{6})\).
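The stencil is standard; a minimal sketch of the interior computation reads as follows (boundary nodes would need one-sided stencils, which we omit here).

```python
# Sixth-order central finite difference for u''(x_j) on a uniform mesh.
import numpy as np

def second_derivative(u, h):
    c = np.array([1/90, -3/20, 3/2, -49/18, 3/2, -3/20, 1/90]) / h**2
    d2 = np.zeros_like(u)
    for j in range(3, len(u) - 3):       # interior points only
        d2[j] = c @ u[j - 3:j + 4]
    return d2
```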
We construct a PINN that splits into two separate networks: one that calculates the eigenvalue \(E\), and one that produces the eigenfunction \(u(x)\) at the mesh nodes. These two networks are trained and optimised simultaneously, with their outputs used in the same loss function described below. We consider the combination of these two networks as one network and apply it to discover the eigenstates. Figure 3.1 depicts this model structure, highlighting the split into separate networks. The branch computing the eigenvalue has two hidden layers with \(n\) nodes, while the branch computing the eigenfunction has three hidden layers with \(n\) nodes. This allows the network to scale with the size of the mesh. Each layer in both networks uses the ReLU activation function \(f(x)=\max(x,0)\), and the loss minimisation is done using the Adam optimiser with a learning rate of \(5\times 10^{-4}\).
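A minimal PyTorch sketch of this two-branch architecture is given below; the layer widths and activations follow the text, while the remaining details (e.g., batching of the input vector of ones) are our assumptions.

```python
# Two-branch PINN: one branch predicts the scalar E, the other the nodal
# values of u at the n+1 mesh points (Figure 3.1).
import torch.nn as nn

class EigenPINN(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.e_net = nn.Sequential(            # two hidden layers of n nodes
            nn.Linear(n, n), nn.ReLU(),
            nn.Linear(n, n), nn.ReLU(),
            nn.Linear(n, 1))
        self.u_net = nn.Sequential(            # three hidden layers of n nodes
            nn.Linear(n, n), nn.ReLU(),
            nn.Linear(n, n), nn.ReLU(),
            nn.Linear(n, n), nn.ReLU(),
            nn.Linear(n, n + 1))

    def forward(self, ones):                   # input: vector of ones, length n
        return self.e_net(ones), self.u_net(ones)
```

Both branches are optimised jointly with Adam at learning rate \(5\times 10^{-4}\), as stated above.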
The driving force of this network towards a correct solution is the design of its loss function, which is built of multiple terms relating to different aspects of a correct solution. The neural network's optimisation function works to minimise the loss function, so we choose terms that are positive errors for incorrect predictions of \(E\) and \(u(x)\), and are zero for correct predictions. Thus, the minimisation of these terms leads the network to produce correct predictions for \(E\) and \(u(x)\). During training, we choose to input an initial vector of ones (of length \(n\)), which the network uses as a basis to build its predictions of \(E\) and \(u(x)\) from.
The central term of our loss function is the term:
\[L_{\text{de}}=\sqrt{\frac{1}{n+1}\sum_{j=0}^{n}\left(E_{h}u_{h}(x_{j})+u_{h}^{\prime\prime}(x_{j})-V(x_{j})u_{h}(x_{j})\right)^{2}} \tag{3.11}\]
which is always non-negative and is zero for solutions \(E\) and \(u(x)\) that satisfy equation 2.2. To encourage the network to satisfy the boundary conditions \(u(0)=u(1)=0\), we create the term:
\[L_{\text{bound}}=u_{h}^{2}(x_{0})+u_{h}^{2}(x_{n}) \tag{3.12}\]
whose minimisation leads the network to satisfy \(u_{h}(x_{0})=u_{h}(x_{n})=0\). The major issue with using only these terms is that the trivial solution \(u(x)=0\) also satisfies them (reducing them to zero). Thus, we need the following term to rule out the trivial solution:
\[L_{\text{norm}}=\left(\int_{0}^{1}u_{h}^{2}(x)\text{d}\mathbf{x}-1\right)^{2}. \tag{3.13}\]
The minimisation of the loss function thus leads to approximate solutions of unit \(L^{2}\) norm. Integration is calculated using the midpoint rule, and this computation is largely local. In the PINN model developed in [23], this
Figure 3.1: Neural network structure for the prediction of eigenstates of Hamiltonians.
condition was imposed using terms like \(1/u^{2}(x)\) and \(1/E^{2}\). We point out that since the eigenstates are localised, with \(u(x)=0\) over most of the domain, the term \(1/u^{2}(x)\) takes very large and even infinite values. Consequently, this leads to difficulties in minimising the loss function and in the overall neural network training.
These terms are sufficient to push the network to generate an admissible eigenstate. However, the model often converges to the same eigenstate repeatedly, preventing the discovery of higher states. To overcome this issue, once an eigenstate \((E_{h},u_{h}(x))\) is discovered (using a patience condition described below), we add the eigenfunction \(u_{h}(x)\) to a list of found eigenstates \(S\). Then, we stop the training, recompile the network and begin training again, where we take advantage of the known physical fact that the eigenfunctions of Hamiltonians are orthogonal and incorporate the following term into the loss function:
\[L_{\text{orth}}=\sum_{\hat{u}_{h}(x)\in S}\left(\int_{0}^{1}\hat{u}_{h}(x)u_{h }(x)dx\right)^{2}, \tag{3.14}\]
where \(u_{h}(x)\) is the eigenfunction the network is currently producing. We iteratively repeat this process of adding eigenstates to \(S\) and recompiling the network. This iteration is automated and can be repeated an arbitrary number of times. Sometimes the network skips an eigenstate and converges to a higher one; however, with enough iterations the network tends to find the skipped states eventually. To decide whether a prediction is admissible, we check whether \(L_{\text{de}}\) is below a certain threshold, together with a patience condition as in [23] that checks whether the absolute changes in \(L_{\text{de}}\) and in the eigenvalue are below another threshold. This allows the network to converge to solutions with \(L_{\text{de}}\) much lower than the chosen threshold, provided it is converging fast enough.
The loss function we have built so far performs adequately on problems with enough spread in the eigenvalues between different states. However, in situations where the eigenstates have nearly identical eigenvalues, the network is unable to distinguish between the states, converging to an undesirable linear combination of the eigenfunctions. This occurs because eigenspaces form vector spaces, so a linear combination of eigenfunctions with the same eigenvalue \(E\) is another eigenfunction with eigenvalue \(E\), giving another eigenstate that satisfies \(L_{\text{de}}\) and all the regularisation terms defined above. When studying Hamiltonians with randomly generated potentials, there is enough disorder for Anderson localisation to occur. We leverage this fact in our loss function with the following procedure.
We let the model run with the above loss function for a certain number of epochs \(q\), which is a hyperparameter. The goal is to set this parameter so that at epoch \(q\), the network has converged to a linear
combination of the eigenfunctions, for example in Figure 3.2.
Since we know the eigenstates of the Hamiltonian are localised, each spike in Figure 3.2 represents a separate localised state. We thus scan through the eigenfunction and generate a list \(K\) of the regions of the domain where the norm of the eigenfunction exceeds a set hyperparameter, splitting apart the regions where the function returns to zero to separate the localised spikes. Then, in our iterative process of discovering eigenvalues, at each step we choose one of the intervals in \(K\) (say, \([x_{a},x_{b}]\)) and add the following term to the loss function:
\[L_{\text{loc}}=\left(\int_{0}^{1}u_{h}^{2}(x)\text{d}\mathbf{x}-\int_{x_{a}}^{x_{b }}u_{h}^{2}(x)\text{d}\mathbf{x}\right) \tag{3.15}\]
which encourages the network to set the eigenfunction to zero outside of the localisation interval \([x_{a},x_{b}]\). Equivalently, we could have added terms like \(L_{\text{bound}}\) for each nodal point outside of \([x_{a},x_{b}]\). We remark that this term is non-negative, since the integral over the whole domain bounds the integral over \([x_{a},x_{b}]\) from above. With all these loss terms in mind, we have
\[L_{\text{reg}}=\alpha_{\text{bound}}L_{\text{bound}}+\alpha_{\text{norm}}L_{ \text{norm}}+\alpha_{\text{orth}}L_{\text{orth}}+\alpha_{\text{loc}}L_{\text {loc}}, \tag{3.16}\]
which is added to \(L_{\text{de}}\) to give the overall loss function \(L\) in (3.10). The \(\alpha\) terms in \(L_{\text{reg}}\) scale each term in the loss function, providing emphasis to certain physical constraints. Through hyperparameter tuning, we found that the following values provided adequate predictions for the numbers of nodes we used (orders of magnitude \(10^{2}\) to \(10^{3}\)):
\[\alpha_{\text{bound}}=n^{2},\quad\alpha_{\text{norm}}=n^{3},\quad\alpha_{ \text{orth}}=n^{3},\quad\alpha_{\text{loc}}=n^{4}. \tag{3.17}\]
Figure 3.2: Left: An example of a linear combination of eigenstates where their eigenenergies are approximately equal. Right: The corresponding potential \(V(x)\) with \(m=80\) uniform elements, where \(V(x)\) was randomly generated in each element according to the normal distribution with mean 1 and standard deviation 0.3, scaled by \(10^{6}\).
The idea of placing large weights on the regularisation terms of the loss function is to ensure the network prioritises minimising these terms before working towards a solution satisfying the differential equation (i.e., before minimising \(L_{\text{de}}\)). This also penalises the network heavily for straying from the regularisations in \(L_{\text{reg}}\), effectively projecting the parameter search space onto a subspace adhering to the regularisations. The method described above is presented in Algorithm 1, whose purpose is to discover the eigenstates for a given potential. Throughout the paper, we use this loss function in our proposed PINN with the structure shown in Figure 3.1.
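To summarise the construction, the sketch below assembles the composite loss (3.10)-(3.17) for one training step, including the interval scan used for \(L_{\text{loc}}\). Tensor shapes, the uniform-weight quadrature standing in for the midpoint rule, and the scan threshold are assumptions for illustration.

```python
# Composite PINN loss; u has n+1 nodal values, h = 1/n, found is the list S.
import torch

def find_localisation_intervals(u, threshold):
    """Scan |u| and return index intervals where it exceeds the threshold."""
    mask = u.abs() > threshold
    intervals, start = [], None
    for j, on in enumerate(mask.tolist()):
        if on and start is None:
            start = j
        elif not on and start is not None:
            intervals.append((start, j))
            start = None
    if start is not None:
        intervals.append((start, len(mask)))
    return intervals

def composite_loss(E, u, V, d2u, h, n, found, loc=None):
    residual = E * u + d2u - V * u                        # Eq. (3.11)
    L_de = torch.sqrt(torch.mean(residual ** 2))
    L_bound = u[0] ** 2 + u[-1] ** 2                      # Eq. (3.12)
    norm2 = torch.sum(u ** 2) * h                         # quadrature of ||u||^2
    L_norm = (norm2 - 1.0) ** 2                           # Eq. (3.13)
    L_orth = sum((torch.sum(v * u) * h) ** 2 for v in found)  # Eq. (3.14)
    L_loc = 0.0
    if loc is not None:                                   # Eq. (3.15)
        a, b = loc
        L_loc = norm2 - torch.sum(u[a:b] ** 2) * h
    return (L_de + n**2 * L_bound + n**3 * L_norm
            + n**3 * L_orth + n**4 * L_loc)               # weights in (3.17)
```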
## 4 Numerical Experiments
We test our PINN model on various Hamiltonians with randomly distributed potentials and demonstrate the network's robustness against potentials with different distributions. In particular, we consider potentials with distributions that cause the Hamiltonian to admit eigenstates with almost identical eigenenergies. This is most prevalent in potentials distributed according to the Bernoulli distribution. We test the network for different numbers of randomly generated regions (\(m\)), and choose the number of nodes in the mesh (\(n\)) such that each region contains 5 nodes (that is, we choose \(n=5m\)).
The localised states usually occur in the regions where the potential takes its smallest value. When there are multiple disjoint regions at (or very close to) this minimum value, each of these regions can admit a localised state, and these states can all have similar eigenenergies. Figure 4.3 shows the first four PINN-approximated eigenstates for the potential shown in the right plot of Figure 3.2. The network can distinguish between the first three eigenstates, regardless of how similar their eigenvalues are.
We observe that if \(u(x)\) satisfies equation 2.2, then \(-u(x)\) also does. \(-u(x)\) also has the same \(L^{2}\) norm as \(u(x)\), so it satisfies all other terms in our loss function. This is relevant as the network will choose to converge to either \(u(x)\) or \(-u(x)\). For consistency, we plot all eigenfunctions in their positive form.
Figure 4.4 shows a potential with a Bernoulli distribution, while Figure 4.5 shows the first four eigenstates, the first three of which have almost identical eigenvalues. We expect these eigenstates to localise in the "troughs" of the potential, which is what we observe in the eigenfunction plots. The states that localise in troughs of similar length tend to have almost the same eigenvalue, which our model captures.
Figure 4.6 shows the model's prediction of the eigenvalue as the training process evolves. After 2000 epochs, the model adds the \(L_{\text{loc}}\) term. The model produces converging eigenvalues until around epoch 9000, at which point the prediction has met the convergence criteria, so the model saves the eigenstate for use in \(L_{\text{orth}}\) and resets its weights to search for the next eigenstate. In our implementation, we include a function that
```
1: Let \(k\) be the epoch where \(L_{\text{loc}}\) is added to the loss function
2: Let \(j=1\)
3:while\(j\leq\) num_eigenstates do
4: Initiate the model
5:while training do
6: Calculate prediction of \(E\) and \(u(x)\)
7: Compute \(L_{\text{de}}\), \(L_{\text{bound}}\), \(L_{\text{norm}}\)
8: Compute \(L_{\text{orth}}\) using all stored eigenfunctions
9:if current epoch \(=k\)then
10: Scan predicted eigenfunction for localisation sections
11: Choose the localisation section \(S\) with greatest norm
12: Reset model weights
13:elseif current epoch \(>k\)then
14: Compute \(L_{\text{loc}}\) across \(S\)
15:endif
16: Backpropagate and step
17:if patience condition and \(L_{\text{de}}<\) threshold then
18: Store prediction of \(E\) and \(u(x)\) for future \(L_{\text{orth}}\) calculation
19:\(j=j+1\)
20:break while
21:endif
22:endwhile
23:endwhile
```
**Algorithm 1** Eigenstate Discovery Using PINN
generates a video to display the network's eigenfunction prediction as the training process evolves, which is available at our GitHub1 page.
Footnote 1: [https://github.com/liamharcombe4/eigen-network](https://github.com/liamharcombe4/eigen-network)
Table 1 displays the PINN eigenvalue errors, where we used reference eigenvalues generated by the softIGA method with a very fine mesh (2000 nodes). We also test the model's robustness with respect to the number of nodes. We note that the model achieves slightly better results for potentials distributed according to the normal distribution compared to the uniform distribution. However, the results for the Bernoulli-distributed potentials are much better, all with relative errors below 1%. This demonstrates the success of our approach on the problem of eigenstates with similar eigenvalues. Table 2 displays the eigenvalue prediction results for a single potential generated by the Bernoulli distribution with different numbers of mesh elements. As the number of elements increases, there is a clear improvement in the accuracy of the PINN-predicted
Figure 4.3: The first four PINN-approximated eigenstates with the potential defined in the right plot of Figure 3.2. Here, ev refers to the eigenvalue associated with the plotted eigenfunction.
eigenvalues. The convergence rate is approximately of order 1, but the theoretical analysis of this convergence is left for future study.
### Loss Terms Analysis
Another potential based on the Bernoulli distribution is plotted in Figure 4.7, with the network's eigenstate predictions in Figure 4.8. During the training process of the network for these approximations, the value of each term of the loss function is saved every 100 epochs. Figure 4.9 displays the evolution of these terms throughout the training process.
\begin{table}
\begin{tabular}{||c c c|c||}
\hline
Potential Distribution & Nodes (\(n+1\)) & Regions (\(m\)) & Eigenvalue Error (\(\%\)) \\
\hline \hline
Normal & 400 & 80 & 0.863 \\
Normal & 200 & 40 & 1.16 \\
Normal & 100 & 20 & 0.123 \\
\hline
Bernoulli & 200 & 40 & 0.351 \\
Uniform & 200 & 40 & 1.92 \\
\hline
\end{tabular}
\end{table}
Table 1: The relative eigenvalue errors of the PINN-approximated first eigenenergy of Hamiltonians with potentials generated according to various distributions.
Figure 4.4: The potential \(V(x)\) with \(m=40\) uniform elements, where \(V(x)\) was randomly generated in each element according to the Bernoulli distribution with probability 0.5, scaled by \(10^{6}\) and shifted up by \(10^{5}\).
The spikes in Figure 4.9 represent places in the training process where either the \(L_{\text{loc}}\) term is added to the loss function or the network's parameters are reset after successfully finding an eigenstate, in preparation for searching for another. We highlight that the regularisation terms are minimised almost instantly, shown by how sharp the spikes in their plots are over training time. This demonstrates that our scaling of the regularisation terms successfully restricts the network to a search space of solutions that satisfy these terms before it works to minimise \(L_{\text{de}}\).
\begin{table}
\begin{tabular}{||c|c||} \hline Nodes (\(n+1\)) & Eigenvalue Error (\(\%\)) \\ \hline \hline
100 & 0.783 \\
200 & 0.343 \\
300 & 0.0512 \\ \hline \end{tabular}
\end{table}
Table 2: The relative eigenvalue errors of the PINN-approximated first eigenenergy of Hamiltonian with a fixed Bernoulli distributed potential over \(m=20\) regions.
Figure 4.5: The first four PINN approximated eigenstates with the potential defined in Figure 4.4. Here, ev refers to the eigenvalue associated with the plotted eigenfunction.
### Comparison with SoftIGA
Figure 4.11 plots the PINN's eigenstate predictions compared to the SoftIGA calculations for the potential plotted in Figure 4.10. The PINN's eigenfunctions match those calculated by SoftIGA well, and each eigenvalue has a percentage error of less than 0.5% from the SoftIGA eigenvalue (which is the same for all four of these eigenstates). However, Figure 4.12 shows a significant difference between the PINN and SoftIGA calculations of the fifth and sixth eigenstates for this potential. The PINN's eigenvalue predictions are within 2% of the SoftIGA calculations; however, the SoftIGA eigenfunctions are not localised to single regions. Instead, these eigenfunctions appear to be linear combinations of localised states, while the PINN's calculations are individual localised states. We believe that the individual localised states making up the linear combinations found by SoftIGA and the localised states found by the PINN all lie within the same eigenspace (have the same eigenvalue), and thus their linear combinations are also eigenstates with this eigenvalue. We believe this eigenspace is the span of all the localised states in the troughs of the potential that are one segment wide (for example, the intervals \([0,0.02]\), \([0.08,0.1]\), \([0.26,0.28]\), \([0.38,0.4]\), etc., in Figure 4.10). We also believe that the eigenstates in Figure 4.11 are all part of the eigenspace spanned by the localised states in the troughs of the potential that are two segments wide (\([0.16,0.2]\), \([0.48,0.52]\), \([0.78,0.82]\), \([0.94,0.98]\)).
Figure 4.6: The eigenvalue prediction with respect to the epoch (training process) using the potential in Figure 4.4. The dotted lines represent the true eigenvalues.
Since our PINN works by scanning through each segment to specifically detect a localised state, the network does not pick up linear combinations of separate localised states, such as the SoftIGA calculations in Figure 4.12. Thus, our network is capable of separating these eigenspaces into their localised basis elements while SoftIGA is not, displaying an area where PINNs can outperform traditional computational methods.
### An Example in Two Dimensions
In 2D, we consider a special case where the potential can be decomposed as \(V(x,y)=V_{1}(x)+V_{2}(y)\) in a rectangular domain \(\Omega=[0,1]^{2}\). The Schrodinger operator can be rewritten as
\[-\Delta+V(x,y)=(-\partial_{xx}+V_{1}(x))+(-\partial_{yy}+V_{2}(y)),\]
which allows the decomposition in the weak solution, leading to an equivalence of solving two 1D eigenvalue problems. In particular, in the finite and isogeometric element setting with tensor-product meshes, the 2D matrix eigenvalue problem can also be decomposed into tensor-product structures, leading to 1D solvers. In this setting, a basis function in 2D can be written as a product, i.e., \(\phi(x,y)=\phi_{1}(x)\phi_{2}(y)\). Following the derivations in [3; 5], for the eigenvalue problem (2.8) in 2D, let \(W_{h}=\text{span}\{\phi_{1,j}(x)\phi_{2,k}(y)\}\). Then, the bilinear forms can be decomposed as
Figure 4.7: The potential \(V(x)\) with \(m=40\) uniform elements, where \(V(x)\) was randomly generated in each element according to the Bernoulli distribution with probability 0.5, scaled by \(10^{6}\) and shifted up by \(10^{5}\).
\[a(\phi_{1,k}(x)\phi_{2,k}(y),\phi_{1,j}(x)\phi_{2,j}(y)) =a_{x}(\phi_{1,k}(x),\phi_{1,j}(x))\cdot b_{y}(\phi_{2,k}(y),\phi_{ 2,j}(y)) \tag{4.18}\] \[\quad+b_{x}(\phi_{1,k}(x),\phi_{1,j}(x))\cdot a_{y}(\phi_{2,k}(y),\phi_{2,j}(y)),\] \[b(\phi_{1,k}(x)\phi_{2,k}(y),\phi_{1,j}(x)\phi_{2,j}(y)) =b_{x}(\phi_{1,k}(x),\phi_{1,j}(x))\cdot b_{y}(\phi_{2,k}(y),\phi_ {2,j}(y)),\]
where the 1D bilinear forms are defined as
\[a_{x}(\phi_{1,k}(x),\phi_{1,j}(x)) =\int_{0}^{1}\left(\frac{\mathsf{d}}{\mathsf{d}x}\phi_{1,k}(x) \frac{\mathsf{d}}{\mathsf{d}x}\phi_{1,j}(x)+V_{1}(x)\phi_{1,k}(x)\phi_{1,j}(x) \right)\mathsf{d}x, \tag{4.19}\] \[a_{y}(\phi_{2,k}(y),\phi_{2,j}(y)) =\int_{0}^{1}\left(\frac{\mathsf{d}}{\mathsf{d}y}\phi_{2,k}(y) \frac{\mathsf{d}}{\mathsf{d}y}\phi_{2,j}(y)+V_{2}(y)\phi_{2,k}(y)\phi_{2,j}(y) \right)\mathsf{d}y,\] \[b_{x}(\phi_{1,k}(x),\phi_{1,j}(x)) =\int_{0}^{1}\phi_{1,k}(x)\phi_{1,j}(x)\mathsf{d}x,\] \[b_{y}(\phi_{2,k}(y),\phi_{2,j}(y)) =\int_{0}^{1}\phi_{2,k}(y)\phi_{2,j}(y)\mathsf{d}y.\]
This is also referred to as variational separability. With this in mind, we can rewrite the matrix eigenvalue problem (2.9) in 2D as a problem with 1D tensor-products:
Figure 4.8: The first four PINN approximated eigenstates with the potential defined in Figure 4.7.
\[(K_{x}\otimes M_{y}+M_{x}\otimes K_{y})U=E_{h}(M_{x}\otimes M_{y})U, \tag{4.20}\]
which allows
\[\big{(}(K_{x}-E_{x}M_{x})\otimes M_{y}+M_{x}\otimes(K_{y}-E_{y}M_{y})\big{)}U=0, \tag{4.21}\]
where \(E_{h}=E_{x}+E_{y}\). As the dimensions are separated, this leads to two 1D eigenvalue problems (see [2; 5] for more details):
\[\begin{array}{l}K_{x}U_{x}=E_{x}M_{x}U_{x},\\ K_{y}U_{y}=E_{y}M_{y}U_{y}.\end{array} \tag{4.22}\]
With this insight of dimension decomposition in mind, we apply the PINN in the \(x\) and \(y\) dimensions to find eigenstates for the 2D problem. The eigenvalues in 2D are the sums of the 1D eigenvalues, while the eigenstates in 2D are the tensor products of the 1D eigenstates.
Figure 4.9: The values of the individual terms in the loss function as the training process evolves.
This significantly reduces the computational cost. As an example, Figure 4.13 shows the surface plot of one of these two-dimensional potentials, with the PINN eigenstates displayed in Figure 4.14. When the two 1D potentials admit localised eigenstates, the eigenstates of the constructed 2D potential are also localised, as \(u(x,y)=u_{1}(x)u_{2}(y)\) is only nonzero on the small rectangular region where both \(u_{1}(x)\) and \(u_{2}(y)\) are nonzero. This is demonstrated by Figure 4.14, showing the eigenstates localised to the rectangular regions.
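The decomposition above also yields a compact numerical recipe. The sketch below, assuming the 1D stiffness and mass matrices \(K_x, M_x, K_y, M_y\) come from any 1D assembly routine (here toy finite-difference/lumped-mass matrices with zero potential), solves the two 1D generalized eigenproblems in (4.22) and combines them into 2D eigenpairs:

```python
import numpy as np
from scipy.linalg import eigh

def solve_2d_separable(Kx, Mx, Ky, My, n_states=4):
    Ex, Ux = eigh(Kx, Mx)   # 1D generalized eigenproblem in x
    Ey, Uy = eigh(Ky, My)   # ... and in y
    # 2D eigenvalues are sums of 1D eigenvalues; eigenstates are tensor products.
    pairs = sorted(((Ex[i] + Ey[j], i, j)
                    for i in range(len(Ex)) for j in range(len(Ey))),
                   key=lambda p: p[0])
    return [(E, np.outer(Ux[:, i], Uy[:, j])) for E, i, j in pairs[:n_states]]

n, h = 50, 1.0 / 51
K1 = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h        # stiffness of -u'' (zero potential)
M1 = h * np.eye(n)                               # lumped mass matrix, for simplicity
for E, _ in solve_2d_separable(K1, M1, K1, M1, n_states=3):
    print(E)    # approximates (m^2 + l^2) * pi^2 for small integers m, l
```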
## 5 Concluding Remarks
Recently, there has been a growing interest in using machine learning and neural networks to solve differential equations, particularly in the study of Hamiltonians. In this paper, we extend current models by constructing a neural network capable of distinguishing eigenstates of Hamiltonians with randomly distributed potentials that lead to localisation in the states. Our approach involves incorporating a normalisation loss term to avoid trivial solutions and an orthogonality loss term to search for higher eigenstates. Our method eliminates the need for driving terms, which can hinder a network's ability to converge to an admissible solution. The novelty of our approach lies in a new localisation loss term, which encourages the network to converge to a solution that is localised to a specific region. This idea is based on the physical fact that localised eigenstates occur in randomly distributed potentials. We utilise the network's initial attempt at convergence to identify regions of localised states before adding in this term to achieve the desired approximation.
Figure 4.10: The potential \(V(x)\) with \(m=40\) uniform elements using \(n=200\) nodes, where \(V(x)\) was randomly generated in each element according to the Bernoulli distribution with probability 0.5, scaled by \(10^{6}\) and shifted up by \(10^{5}\).
We demonstrate the effectiveness of our approach in discovering eigenstates for potentials generated randomly according to different distributions, with the highlight being the Bernoulli distribution, which leads to eigenstates with almost identical eigenvalues. Our model successfully predicts these eigenstates with good accuracy, overcoming a challenge faced by current models.
There are several potential avenues for future work. Some potential directions in this research area include: (1) exploring the use of more complex loss functions that can capture additional physical constraints, such as symmetries or conservation laws; (2) investigating the use of different network architectures, such as convolutional neural networks or attention-based models, to improve accuracy and scalability; (3) generalising the approach to higher-dimensional systems or non-linear differential equations; and (4) investigating the performance of the approach on larger and more complex systems, including systems with a larger number of particles or more complex interactions.
Figure 4.11: The first four PINN predicted eigenstates compared to the SoftIGA calculated states with the potential defined in Figure 4.10. Here, ev refers to the eigenvalue calculated by the PINN for each state, with the SoftIGA calculated eigenvalue of 103679.96 for each of these four states.
Overall, the use of ANNs to solve differential equations and approximate eigenstates of Hamiltonians has proven to be a promising approach, and there are many potential directions for future research in this area.
|
2309.02769 | Unifying over-smoothing and over-squashing in graph neural networks: A
physics informed approach and beyond | Graph Neural Networks (GNNs) have emerged as one of the leading approaches
for machine learning on graph-structured data. Despite their great success,
critical computational challenges such as over-smoothing, over-squashing, and
limited expressive power continue to impact the performance of GNNs. In this
study, inspired from the time-reversal principle commonly utilized in classical
and quantum physics, we reverse the time direction of the graph heat equation.
The resulting reversing process yields a class of high pass filtering functions
that enhance the sharpness of graph node features. Leveraging this concept, we
introduce the Multi-Scaled Heat Kernel based GNN (MHKG) by amalgamating diverse
filtering functions' effects on node features. To explore more flexible
filtering conditions, we further generalize MHKG into a model termed G-MHKG and
thoroughly show the roles of each element in controlling over-smoothing,
over-squashing and expressive power. Notably, we illustrate that all
aforementioned issues can be characterized and analyzed via the properties of
the filtering functions, and uncover a trade-off between over-smoothing and
over-squashing: enhancing node feature sharpness will make the model suffer more
from over-squashing, and vice versa. Furthermore, we manipulate the time again
to show how G-MHKG can handle both two issues under mild conditions. Our
conclusive experiments highlight the effectiveness of proposed models. It
surpasses several GNN baseline models in performance across graph datasets
characterized by both homophily and heterophily. | Zhiqi Shao, Dai Shi, Andi Han, Yi Guo, Qibin Zhao, Junbin Gao | 2023-09-06T06:22:18Z | http://arxiv.org/abs/2309.02769v2 | Unifying over-smoothing and over-squashing in graph neural networks: A physics informed approach and beyond
###### Abstract
Graph Neural Networks (GNNs) have emerged as one of the leading approaches for machine learning on graph-structured data. Despite their great success, critical computational challenges such as over-smoothing, over-squashing, and limited expressive power continue to impact the performance of GNNs. In this study, inspired by the time-reversal principle commonly utilized in classical and quantum physics, we reverse the time direction of the graph heat equation. The resulting reversing process yields a class of high pass filtering functions that enhance the sharpness of graph node features. Leveraging this concept, we introduce the Multi-Scaled Heat Kernel based GNN (MHKG) by amalgamating diverse filtering functions' effects on node features. To explore more flexible filtering conditions, we further generalize MHKG into a model termed G-MHKG and thoroughly show the roles of each element in controlling over-smoothing, over-squashing and expressive power. Notably, we illustrate that all aforementioned issues can be characterized and analyzed via the properties of the filtering functions, and uncover a trade-off between over-smoothing and over-squashing: enhancing node feature sharpness will make the model suffer more from over-squashing, and vice versa. Furthermore, we manipulate the time again to show how G-MHKG can handle both issues under mild conditions. Our conclusive experiments highlight the effectiveness of the proposed models, which surpass several GNN baseline models in performance across graph datasets characterized by both homophily and heterophily.
## 1 Introduction
Graph Neural Networks (GNNs) have demonstrated exceptional performance in learning representations of graph-structured data [19, 38]. Within the diverse research avenues of GNNs, one prominent line of work interprets GNNs from different perspectives, such as gradient flow [11, 17], neural diffusion [31, 6], and graph isomorphism testing [40], all of which contribute to a deeper comprehension of GNNs' underlying working mechanism. Concurrently, another trajectory in GNN development is oriented toward addressing specific challenges. Generally, three inherent limitations of GNNs have been identified: over-smoothing [4], over-squashing [33], and limited expressive power [40], each openly acknowledged and arising from different aspects of GNNs and their graph inputs. As such, research efforts primarily focus on tackling these aspects to effectively address the corresponding problems. For instance, to counteract the over-smoothing phenomenon, researchers have advocated augmenting feature variation by incorporating
a source term [31] or modulating the energy regularization to prevent rapid diminishment [26, 14]. To mitigate over-squashing, strategies may encompass exploring graph topology through methods such as _graph re-weighting_[27] and _graph rewiring_[33]. Lastly, enhancing the capacity to discern non-isomorphic graphs could be achieved by assigning more intricate embeddings to graph nodes [40, 36].
While many efforts have been made on these three issues, little research has considered them in a unified manner to reveal their underlying relationships, let alone sought to mitigate them via one specific GNN in a unified way. The main difficulties may be the following:
* Some GNNs lack the necessary flexibility to serve as the subjects for investigating the aforementioned triad of issues in a cohesive manner.
* The pursuit of greater flexibility in GNNs generally leads to increased model complexity. Consequently, effectively exploring the roles of GNN components in these issues becomes a formidable task.
* There's a gap in research regarding the transition between addressing over-smoothing, typically focusing on node features, and tackling over-squashing and enhancing expressive power, which often concentrate on graph topology.
In the present study, we derive inspiration from the physical reality of the graph heat equation, leading to a novel perspective that examines the propagation of node features in the reverse direction of time. We demonstrate that many prevailing GNNs can invert the feature propagation process, transitioning from smoothing to sharpening effects, and vice versa. Accordingly, we propose a Multi-Scale Heat Kernel GNN (MHKG) that propagates node features via the balance between smoothing and sharpening effects induced by the low and high pass spectral filtering functions generated from the heat and reverse heat kernels. Furthermore, we generalize MHKG into a more flexible model, referred to as G-MHKG, and provide a comprehensive examination of the components within G-MHKG that regulate over-smoothing, over-squashing and expressive power. Notably, by utilizing G-MHKG as an analytical tool and examining the properties of the associated filtering functions, we uncover a trade-off between over-squashing and over-smoothing in the graph spectral domain. This relationship is revealed under mild conditions on the filtering functions, offering fundamental insights into these issues. Lastly, we manipulate time again in G-MHKG to show its capability of sufficiently handling both issues for heterophily graphs, and that it is impossible to achieve the same goal for homophily graphs.
**Contribution and Outline.** Our main goal is to illustrate and verify the underlying relationship between the aforementioned issues via our proposed model, inspired by the time-reversal principle from physics. In addition, we aim to show how our proposed model can handle these issues in a unified manner. In Section 3, we explore the link between graph filtering functions and solutions of a class of ordinary differential equations (ODEs) on graph node features. This connection propels the introduction of two multi-scale heat kernel based models: MHKG and the generalized MHKG (G-MHKG). In Section 4 we show how the filtering matrices in our model control the energy dynamics of node features, thereby effectively addressing the over-smoothing challenge. In Section 5, we demonstrate that the weight matrix in G-MHKG determines the model's expressive power and over-squashing. More importantly, in Sections 6 and 7 we show that there is a trade-off between over-smoothing
and over-squashing. We also prove that it is impossible to sufficiently handle both issues for homophily graphs, whereas for heterophily graphs G-MHKG can handle them via a simple manipulation of time. We verify our theoretical claims via comprehensive empirical studies in Section 8.
## 2 Preliminaries
**Graph basics and GNNs.** To begin, we let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be a graph with node set \(\mathcal{V}=\{v_{1},v_{2},\cdots,v_{N}\}\) of \(N\) nodes in total and edge set \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\). We denote the graph adjacency matrix by \(\mathbf{A}\in\mathbb{R}^{N\times N}\). In addition, we consider the symmetric normalized adjacency matrix \(\widehat{\mathbf{A}}=\mathbf{D}^{-1/2}(\mathbf{A}+\mathbf{I})\mathbf{D}^{-1/2}\), where \(\mathbf{D}\) is the diagonal degree matrix with \(i\)-th diagonal entry \(d_{i}=\sum_{j}a_{ij}\), the degree of node \(i\). The normalized graph Laplacian is given by \(\widehat{\mathbf{L}}=\mathbf{I}-\widehat{\mathbf{A}}\). We further let \(\rho_{\widehat{\mathbf{L}}}\) denote the largest eigenvalue (also called the highest frequency) of \(\widehat{\mathbf{L}}\). In terms of GNNs, there are in general two types. Spatial-based GNNs such as the graph convolutional network (GCN) [20] define the layer-wise propagation rule via the normalized adjacency matrix as
\[\mathbf{H}^{(t)}=\sigma\big{(}\widehat{\mathbf{A}}\mathbf{H}^{(t-1)}\mathbf{W }^{(t)}\big{)}, \tag{1}\]
where \(\mathbf{H}^{(t)}\) is the feature matrix at layer \(t\), with \(\mathbf{H}^{(0)}=\mathbf{X}\in\mathbb{R}^{N\times c}\) the input feature matrix, and \(\mathbf{W}^{(t)}\) is the learnable weight matrix performing channel mixing. On the other hand, spectral GNNs such as ChebyNet [10] perform filtering in the spectral domain of the graph as
\[\mathbf{H}^{(t)}=\sigma\left(\mathbf{U}g_{\theta}(\mathbf{\Lambda})\mathbf{U}^ {\top}\mathbf{H}^{(0)}\right), \tag{2}\]
where \(g_{\theta}(\mathbf{\Lambda})\) serves as the filtering function on the normalized Laplacian, which utilizes the eigendecomposition \(\widehat{\mathbf{L}}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\top}\), and \(\mathbf{U}^{\top}\mathbf{h}\) is known as the Fourier transform of a graph signal \(\mathbf{h}\in\mathbb{R}^{N}\). In this paper, we let \(\{(\lambda_{i},\mathbf{u}_{i})\}_{i=1}^{N}\) be the set of eigenvalue-eigenvector pairs of \(\widehat{\mathbf{L}}\), where the \(\mathbf{u}_{i}\) are the column vectors of \(\mathbf{U}\).
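Both propagation rules above translate almost directly into code. The sketch below renders the spatial rule of Eq. (1); the dense tensors, ReLU as the nonlinearity \(\sigma\), and the absence of isolated nodes (so that the degrees are invertible) are our own simplifying assumptions rather than part of any reference implementation:

```python
import torch

def normalized_adjacency(A: torch.Tensor) -> torch.Tensor:
    # A_hat = D^{-1/2} (A + I) D^{-1/2}, with D built from the degrees of A as defined above
    d_inv_sqrt = A.sum(dim=1).pow(-0.5)
    D_inv_sqrt = torch.diag(d_inv_sqrt)
    return D_inv_sqrt @ (A + torch.eye(A.size(0))) @ D_inv_sqrt

class GCNLayer(torch.nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = torch.nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, A_hat: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.W(A_hat @ H))    # sigma(A_hat H W)
```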
**Graph heat kernel.** Generalizing from the so-called heat equation defined on a manifold, one can define the graph heat equation as \(\frac{\partial\mathbf{H}^{(t)}}{\partial t}=-\widehat{\mathbf{L}}\mathbf{H}^{(t)}\), in which \(\mathbf{H}^{(t)}\) is the feature representation at a specific time \(t\). The solution of the graph heat equation is given by \(\mathbf{H}^{(t)}=\mathrm{e}^{-t\widehat{\mathbf{L}}}\mathbf{H}^{(0)}\) with \(\mathbf{H}^{(0)}=\mathbf{X}\) as the initial condition. It is well known that applying Euler discretization leads to the propagation of linear GCN models [20, 37], a process known as Laplacian smoothing [8]. The characteristic of the solution is governed by the so-called heat kernel, denoted by \(\mathbf{K}_{t}=\mathrm{e}^{-t\widehat{\mathbf{L}}}\). The heat kernel defines a continuous-time random walk and forms a semi-group, i.e., \(\mathbf{K}_{t+s}=\mathbf{K}_{t}\mathbf{K}_{s}\) for any \(t,s\geq 0\)[8]. As \(t\rightarrow\infty\), \(\mathbf{K}_{t}\) converges to the projection onto the null space of \(\widehat{\mathbf{L}}\), which leaves nodes with the same degree indistinguishable, a phenomenon known as over-smoothing. We include a more detailed discussion of both the heat operator and the heat kernel in Appendix A.1
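A small numerical illustration of the heat kernel may help. In the toy below we compute the degrees from \(\mathbf{A}+\mathbf{I}\) (an assumption made here so that the spectrum of \(\widehat{\mathbf{L}}\) stays in \([0,2)\)); an initial signal orthogonal to the null space is smoothed toward zero difference as \(t\) grows, and sharpened when \(t\) is negative (anticipating Section 3):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                     # 3-node path graph
d_inv_sqrt = np.diag((A + np.eye(3)).sum(1) ** -0.5)
L_hat = np.eye(3) - d_inv_sqrt @ (A + np.eye(3)) @ d_inv_sqrt
H0 = np.array([[1.0], [0.0], [-1.0]])
for t in [0.0, 1.0, 5.0, -1.0]:
    print(t, (expm(-t * L_hat) @ H0).ravel())    # K_t H0: shrinks for t>0, grows for t<0
```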
## 3 Turning Smoothing to Sharpening
We now delve deeper into the graph heat kernel \(\mathbf{K}_{t}\). If one lets \(t\) advance in integer steps, then \(\mathbf{H}^{(t)}=\mathrm{e}^{-t\widehat{\mathbf{L}}}\mathbf{H}^{(0)}\) can be interpreted as a linear diffusion (i.e., \(\mathrm{e}^{-\widehat{\mathbf{L}}}\)) applied to \(\mathbf{H}^{(0)}\) repeatedly \(t\) times. Consequently, at each step, one can treat \(\mathrm{e}^{-\widehat{\mathbf{L}}}\) as a low pass filtering function (_monotonically decreasing_) on the graph spectra. More generally, one can assign a function \(f\) to \(\widehat{\mathbf{L}}\) to build a class of ODEs of the form \(\frac{\partial\mathbf{H}^{(t)}}{\partial t}=-f(\widehat{\mathbf{L}})\mathbf{H}^{(t)}\) and, similar to the basic heat equation, consider the generalized heat kernel \(\widehat{\mathbf{K}}_{t}=\mathrm{e}^{-tf(\widehat{\mathbf{L}})}\). We note that in the sequel, \(f(\widehat{\mathbf{L}})\) serves as a filtering function (i.e., a polynomial or analytic function) acting element-wise on the eigenvalues of \(\widehat{\mathbf{L}}\), that is, \(f(\widehat{\mathbf{L}})=\mathbf{U}f(\mathbf{\Lambda})\mathbf{U}^{\top}\).
A physical interpretation of the aforementioned class of ODEs suggests that heat flows in a constant direction and at a constant speed. This aligns with the _second law of thermodynamics_, indicating an inevitable evolution of the heat distribution in an isolated system toward what is known as _thermal equilibrium_. In analogy to GNNs, such a phenomenon indicates that all node features eventually become equal to each other, typically known as the over-smoothing issue. Together with the recent challenge of fitting GNNs to so-called heterophily graphs, GNNs are preferred to produce a mixed dynamic, involving not only smoothing but also sharpening the features of connected nodes with different labels. Therefore, **we wish to turn the diffusion into the reverse direction**, which is equivalent to providing an external (anti-)force to the system, and we expect to achieve a fusion of heat in certain areas of the system while allowing diffusion in others during each evolution step. For any given \(f\) on \(\widehat{\mathbf{L}}\), one can induce another system that produces the reverse process by assigning a negative sign to \(f\); accordingly, one obtains a reverse filtering effect once the diffusion direction is turned. Additionally, if we further require the filtering process \(\mathcal{F}:\mathbb{R}^{N\times c}\rightarrow\mathbb{R}^{N\times c}\) with \(\mathcal{F}(\mathbf{H})=\mathrm{e}^{-tf(\widehat{\mathbf{L}})}\mathbf{H}^{(0)}\) to be bijective 1, then by replacing \(t\) with \(-t\), one can recover \(\mathbf{H}^{(0)}\) from \(\mathbf{H}^{(t)}\). We note that, in this case, the propagation and recovery of \(\mathbf{H}^{(0)}\) is aligned with those physical variables (i.e., electric potential, energy density of the electromagnetic field) that are unchanged under time-inversion operators in both classical and quantum mechanics. Figure 1 shows how graph diffusion smooths the node features and how reverse diffusion makes features more distinct. With this understanding, we next show how the novel heat kernel GCN is constructed.
Footnote 1: Other requirements such as homeomorphism or isomorphism can also be applied.
### Multi-scale Heat Kernel GCN (Mhkg)
Following the idea that at each discrete time step, i.e., from \(\ell-1\) to \(\ell\), the heat kernel based GCN model shall possess mixed dynamics (smoothing & sharpening) on the node features, we propose the multi-scale heat kernel GCN (MHKG) as:
\[\mathbf{H}^{(\ell)}=\mathbf{U}\mathrm{diag}(\theta_{1})\mathbf{\Lambda}_{1} \mathbf{U}^{\top}\mathbf{H}^{(\ell-1)}\mathbf{W}^{(\ell-1)}+\mathbf{U} \mathrm{diag}(\theta_{2})\mathbf{\Lambda}_{2}\mathbf{U}^{\top}\mathbf{H}^{( \ell-1)}\mathbf{W}^{(\ell-1)}, \tag{3}\]
in which \(\mathrm{diag}(\theta_{1})\) and \(\mathrm{diag}(\theta_{2})\) are learnable filtering matrices, with \(\mathbf{\Lambda}_{1}=\mathrm{e}^{-f(\mathbf{\Lambda})}=\mathrm{diag}(\{\mathrm{e}^{-f(\lambda_{i})}\}_{i=1}^{N})\) and \(\mathbf{\Lambda}_{2}=\mathrm{e}^{f(\mathbf{\Lambda})}=\mathrm{diag}(\{\mathrm{e}^{f(\lambda_{i})}\}_{i=1}^{N})\). Referring to the previous note on \(f\), one can denote \(f_{\theta}(\widehat{\mathbf{L}})=\mathbf{U}\mathrm{diag}(\theta)f(\mathbf{\Lambda})\mathbf{U}^{\top}\).
Figure 1: Top: the evolution of the node features under the diffusion (smoothing) process (i.e., from distinct features to over-smoothing). Bottom: the reverse diffusion (sharpening) process (i.e., from nearly identical to distinct node features).
**Beyond time reversal: an even more general case.** In light of the construction of MHKG, one can consider a more generalized model with a more flexible choice of dynamics:
\[\frac{\partial\mathbf{H}^{(t)}}{\partial t}=f(\widehat{\mathbf{L}})\mathbf{H}^{(t)},\quad\frac{\partial\mathbf{H}^{(t)}}{\partial t}=g(\widehat{\mathbf{L}})\mathbf{H}^{(t)}. \tag{4}\]
We note that for simplicity, in this paper we only consider two fixed dynamics on \(\mathbf{H}\); the conclusions we provide can be easily generalized to the case of multiple dynamics. With a simple discretization, the corresponding GNN induced from Eq. (4) is:
\[\mathbf{H}^{(\ell)} =f_{\theta_{1}}(\widehat{\mathbf{L}})\mathbf{H}^{(\ell-1)} \mathbf{W}^{(\ell-1)}+g_{\theta_{2}}(\widehat{\mathbf{L}})\mathbf{H}^{(\ell- 1)}\mathbf{W}^{(\ell-1)}\] \[=\mathbf{U}\mathrm{diag}(\theta_{1})\mathrm{e}^{f(\mathbf{ \Lambda})}\mathbf{U}^{\top}\mathbf{H}^{(\ell-1)}\mathbf{W}^{(\ell-1)}+ \mathbf{U}\mathrm{diag}(\theta_{2})\mathrm{e}^{g(\mathbf{\Lambda})}\mathbf{U} ^{\top}\mathbf{H}^{(\ell-1)}\mathbf{W}^{(\ell-1)}. \tag{5}\]
We name this generalized model G-MHKG. It is not difficult to verify that G-MHKG is a general form of various GNN models. The simplest case is obtained by setting \(\theta_{2}=\mathbf{0}_{N}\) and \(\mathrm{e}^{f(\mathbf{\Lambda})}=\mathrm{e}^{-\xi\mathbf{\Lambda}}\), where \(\xi\in\mathbb{R}\) is a constant or learnable coefficient: we recover the GraphHeat model proposed in [39]. If we let \(g(\mathbf{\Lambda})=\mathbf{0}_{N\times N}\) and \(\theta_{2}=\mathbf{c}_{N}\), where \(\mathbf{c}_{N}\) is an \(N\)-dimensional vector with constant entries \(c\), then the model is equivalent to GRAND++ [32] with a layer-dependent source term; see Appendix B for more detailed illustrations. In the following sections, we show the roles of \(\theta\) and \(\mathbf{W}\) in controlling the three aforementioned issues of GNNs.
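For concreteness, the following is a minimal sketch of one G-MHKG layer as written in Eq. (5); the eigendecomposition is assumed precomputed once, the two branches share the weight matrix as in Eq. (5), and the choices \(f(\lambda)=-\lambda\), \(g(\lambda)=\lambda\) in the usage line are merely illustrative:

```python
import torch

class GMHKGLayer(torch.nn.Module):
    def __init__(self, n_nodes: int, in_dim: int, out_dim: int, f, g):
        super().__init__()
        self.theta1 = torch.nn.Parameter(torch.ones(n_nodes))
        self.theta2 = torch.nn.Parameter(torch.ones(n_nodes))
        self.W = torch.nn.Linear(in_dim, out_dim, bias=False)
        self.f, self.g = f, g

    def forward(self, U, lam, H):
        # U diag(theta1) e^{f(Lam)} U^T H W + U diag(theta2) e^{g(Lam)} U^T H W
        low = U @ torch.diag(self.theta1 * torch.exp(self.f(lam))) @ U.T
        high = U @ torch.diag(self.theta2 * torch.exp(self.g(lam))) @ U.T
        return self.W(low @ H) + self.W(high @ H)

# usage: lam, U = torch.linalg.eigh(L_hat)
#        layer = GMHKGLayer(N, c, c, lambda l: -l, lambda l: l)
```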
## 4 Filtering matrices control model dynamics & over-smoothing
In this section, we illustrate how the filtering matrices, i.e., \(\mathrm{diag}(\theta_{1})\) and \(\mathrm{diag}(\theta_{2})\), in G-MHKG control the model's energy dynamics, and how a controllable energy dynamic affects the model's adaptation to homophily and heterophily graphs. For simplicity of the analysis, we further assume that \(f(\cdot)\) and \(g(\cdot)\) are _monotonically decreasing/increasing_ on the spectral domain of the graph, so that they can be considered as low/high pass filtering functions. We note that this assumption is aligned with the settings in many recent works such as [44, 23, 22], and the conclusions we present can be easily applied to MHKG, which is a special case of G-MHKG. To quantify the model's energy dynamics, we consider the Dirichlet energy of the node features \(\mathbf{H}\), defined by \(\mathbf{E}(\mathbf{H})=\mathrm{Tr}(\mathbf{H}^{\top}\widehat{\mathbf{L}}\mathbf{H})\). It is well known that the Dirichlet energy becomes \(0\) when the model encounters the over-smoothing issue. To represent such asymptotic energy behavior, [11, 17, 29] consider a general dynamic \(\dot{\mathbf{H}}^{(t)}=\mathrm{GNN}_{\theta}(\mathbf{H}^{(t)},t)\), with \(\mathrm{GNN}_{\theta}(\cdot)\) an arbitrary GNN function, characterising its behavior by low/high-frequency-dominance (L/HFD).
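Both the Dirichlet energy and the normalized form used in Definition 1 below are a few lines of code given \(\widehat{\mathbf{L}}\):

```python
import torch

def dirichlet_energy(H: torch.Tensor, L_hat: torch.Tensor) -> torch.Tensor:
    # E(H) = Tr(H^T L_hat H); it vanishes exactly in the over-smoothed limit
    return torch.trace(H.T @ L_hat @ H)

def normalized_energy(H: torch.Tensor, L_hat: torch.Tensor) -> torch.Tensor:
    # E(H / ||H||) = E(H) / ||H||^2, the quantity tracked in Definition 1
    return dirichlet_energy(H, L_hat) / H.norm() ** 2
```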
**Definition 1** ([11]).: \(\mathbf{\dot{H}}^{(t)}=\mathrm{G}NN_{\theta}(\mathbf{H}^{(t)},t)\) is Low-Frequency-Dominant (LFD) if \(\mathbf{E}\big{(}\mathbf{H}^{(t)}/\|\mathbf{H}^{(t)}\|\big{)}\to 0\) as \(t\to\infty\), and is High-Frequency-Dominant (HFD) if \(\mathbf{E}\big{(}\mathbf{H}^{(t)}/\|\mathbf{H}^{(t)}\|\big{)}\to\rho_{\widehat{ \mathbf{L}}}/2\) as \(t\to\infty\), where \(\rho_{\widehat{\mathbf{L}}}\) stands for the largest eigenvalue of \(\widehat{\mathbf{L}}\).
**Lemma 1** ([11]).: _A GNN model is LFD (resp. HFD) if and only if for each \(t_{j}\to\infty\), there exists a sub-sequence indexed by \(t_{j_{\kappa}}\to\infty\) and \(\mathbf{H}_{\infty}\) such that \(\mathbf{H}_{j_{\kappa}}^{(t)}/\|\mathbf{H}_{j_{\kappa}}^{(t)}\|\to\mathbf{H}_{\infty}\) and \(\widehat{\mathbf{L}}\mathbf{H}_{\infty}=0\) (resp. \(\widehat{\mathbf{L}}\mathbf{H}_{\infty}=\rho_{\widehat{\mathbf{L}}}\mathbf{H}_{ \infty}\))._
**Remark 1** (Dirichlet energy, graph homophily and heterophily).: It has been shown [3] that if the graph is heterophilic, where connected nodes are unlikely to share the same labels, one may prefer a GNN with a sharpening effect, corresponding to an increase of energy; whereas, when the graph is highly homophilic, a smoothing effect is preferred. Based on the settings above, we present our conclusion on the energy dynamics of G-MHKG in the following.
**Theorem 1**.: _G-MHKG can induce both LFD and HFD dynamics. Specifically, let \(\theta_{1}=\mathbf{1}_{N}\) and \(\theta_{2}=\zeta\mathbf{1}_{N}\) with a positive constant \(\zeta\). Then, with sufficiently large \(\zeta\) (\(\zeta>1\)) such that \(\mathrm{e}^{f(\mathbf{\Lambda})}+\zeta\mathrm{e}^{g(\mathbf{\Lambda})}\) is monotonically increasing on the spectral domain, G-MHKG is HFD. Similarly, if \(0<\zeta<1\) is sufficiently small such that \(\mathrm{e}^{f(\mathbf{\Lambda})}+\zeta\mathrm{e}^{g(\mathbf{\Lambda})}\) is monotonically decreasing, the model is LFD._
We leave the proof in Appendix C.1. Theorem 1 directly shows the benefit of constructing multi-scale GNNs, because such a model provides flexible control of the dominant dynamic that always amplifies/shrinks the node feature differences at every step of its propagation. Accordingly, once the model is HFD, there will be no over-smoothing issue. Furthermore, it is straightforward to verify that single-scale GNNs [20, 5] may satisfy the conditions of frequency dominance described in Theorem 1, but some of them can never be HFD to fit heterophily graphs; see [11] for more details. Finally, we note that there are many other settings of \(\theta_{1}\) and \(\theta_{2}\) that make the model HFD; the example in Theorem 1 is just one of them, illustrating their existence.
**Remark 2**.: Recent research on GNN dynamics [11] suggests that to induce HFD, the weight matrix \(\mathbf{W}\) must be symmetric and contain at least one negative eigenvalue. By Theorem 1, however, G-MHKG can be L/HFD without considering the impact of \(\mathbf{W}\). This further supports the flexibility and generalizability of G-MHKG. Nevertheless, as we will illustrate in the next section, \(\mathbf{W}\) has a direct impact on the model's expressive power and over-squashing.
## 5 Weights Control Expressive Power & Over-squashing
We have demonstrated in Section 4 that the L/HFD dynamics induced by G-MHKG do not impose any specific conditions on the weight matrix \(\mathbf{W}\). However, it is intriguing to investigate the functionality of \(\mathbf{W}\) in G-MHKG. Recent developments [12] shed light on the fact that \(\mathbf{W}\) significantly influences the model's expressive power, which refers to the class of functions that the GNN can learn, taking node features into consideration. This influence becomes evident through the pairwise (over-)squashing phenomenon, which interacts with the number of layers \(\ell\), i.e., the depth of the model. Specifically, let \(\mathcal{Q}:\mathbb{R}^{N\times c}\rightarrow\mathbb{R}^{N\times d}\) be the function that the GNN aims to learn, where \(d\) is the dimension of the node features after a certain number of propagation layers. One can characterize the **expressive power** of a GNN model by estimating the amount of mixing of \(\mathcal{Q}(\mathbf{X})\) among any pair of nodes via the following definition.
**Definition 2** (Maximal Mixing).: For a smooth function \(\mathcal{Q}\) of \(N\times c\) variables, the maximal mixing induced from \(\mathcal{Q}\) associated with nodes \(u,v\) can be measured as:
\[\mathrm{mix}_{\mathcal{Q}}(v,u)=\max_{\mathbf{X}}\left\|\frac{\partial( \mathcal{Q}(\mathbf{X}))_{v}}{\partial\mathbf{x}_{u}}\right\|, \tag{6}\]
where \(\|\cdot\|\) is the spectral norm of the matrix. Here the \(\mathrm{mix}_{\mathcal{Q}}(u,v)\in\mathbb{R}\) depends on the maximal component of Jacobian matrix between the output of GNN and the initial node features.
Furthermore, if one replaces \(\mathcal{Q}(\mathbf{X})\) in Eq. (6) by the feature representation from an \(\ell\)-layer GNN, the quantity presented in Eq. (6) aligns with the so-called _sensitivity_, which was proposed to measure the over-squashing issue [33]. Therefore, it is natural to see that if **one GNN has strong mixing (expressive) power, the over-squashing issue in this GNN tends to be smaller**. We note that similar definitions and discussions can be found in [12]. Accordingly, we define **over-squashing** (OSQ) as
\[\text{OSQ}_{v,u}=\big{(}\text{mix}_{\mathcal{Q}}(v,u)\big{)}^{-1}. \tag{7}\]
Now we present the upper bound of \(\text{mix}_{\mathcal{Q}}(v,u)\) for G-MHKG, which also stands for the upper bound of its expressive power. Specifically, the upper bound of \(\text{mix}_{\mathcal{Q}}(v,u)\) is determined by \(\mathbf{W}\), the depth \(\ell\), and \(\mathbf{S}=\widehat{\mathbf{A}}^{l}+\widehat{\mathbf{A}}^{h}\), where \(\widehat{\mathbf{A}}^{l}=\mathbf{I}-\mathbf{U}\mathbf{\Lambda}_{1}\mathbf{U}^{\top}\) and \(\widehat{\mathbf{A}}^{h}=\mathbf{I}-\mathbf{U}\mathbf{\Lambda}_{2}\mathbf{U}^{\top}\) stand for the (weighted) adjacency matrices generated from the low pass and high pass filtering functions, respectively. We note that, for the sake of simplicity, we set \(\theta_{1}=\theta_{2}=\mathbf{1}_{N}\) in Eq. (5).
**Lemma 2**.: _Let \(\mathcal{Q}(\mathbf{X})=\mathbf{H}^{(\ell)}\), \(\|\mathbf{W}^{(\ell-1)}\|\leq\mathrm{w}\) and \(\mathbf{S}=\widehat{\mathbf{A}}^{l}+\widehat{\mathbf{A}}^{h}\). Given \(u,v\in\mathcal{V}\), with \(\ell\) layers, then the following holds:_
\[\left\|\frac{\partial\mathbf{h}_{v}^{(\ell)}}{\partial\mathbf{x}_{u}}\right\| \leq\mathrm{w}^{\ell}\left(\mathbf{S}\right)_{v,u}^{\ell}, \tag{8}\]
_where \(\|\cdot\|\) is the spectral norm of the matrix._
We present the proof in Appendix C.2. The conclusion in Lemma 2 indicates that the incorporation of W serves a dual purpose: it not only plays a role in defining the model's expressive capabilities but also helps us understand the connection between the model's expressiveness and the concern of over-squashing.
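The right-hand side of Eq. (8) is directly computable, which makes the bound easy to probe numerically; a sketch (with NumPy, on dense matrices) is:

```python
import numpy as np

def sensitivity_bound(A_low: np.ndarray, A_high: np.ndarray,
                      w: float, ell: int, v: int, u: int) -> float:
    # A_low = I - U e^{f(Lam)} U^T, A_high = I - U e^{g(Lam)} U^T as defined above;
    # returns w^ell * (S^ell)_{v,u}, the upper bound on the (v, u) sensitivity.
    S = A_low + A_high
    return (w ** ell) * np.linalg.matrix_power(S, ell)[v, u]
```

A vanishing bound for a distant pair \((v,u)\) signals severe over-squashing between them.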
**Remark 3** (HFD, over-smoothing and over-squashing).: Based on the discussion in Section 4, to adapt G-MHKG to an HFD dynamic, one shall require at least one of \(f(\cdot)\) and \(g(\cdot)\) to be _monotonically increasing_ and ensure that G-MHKG always amplifies the output in the domain determined by such a high pass filter (i.e., Theorem 1). Similarly, to decrease over-squashing, it is sufficient to increase all entries of \(\mathbf{S}\), so that the upper bound on the mixing power increases, resulting in potentially smaller over-squashing. We note that this observation supports that any _graph re-weighting_ that increases the quantities in \(\mathbf{S}\) and any _graph rewiring_ that makes \(\mathbf{S}\) denser can mitigate the over-squashing issue. Based on the form of \(\mathbf{S}\) in Lemma 2, to increase \(\mathbf{S}\) it is sufficient to require \(\big{(}\mathbf{U}(\mathrm{diag}(\theta_{1})\mathrm{e}^{f(\mathbf{\Lambda})}+\mathrm{diag}(\theta_{2})\mathrm{e}^{g(\mathbf{\Lambda})})\mathbf{U}^{\top}\big{)}_{i,j}<\widehat{\mathbf{L}}_{i,j}\,\forall i,j\). This suggests that, similar to the model dynamics, the over-squashing issue can also be investigated through the filtering functions in the spectral domain.
## 6 Trade-off Between two issues in spectral domain
Building upon the findings of Sections 4 and 5, this section explores a discernible trade-off between _over-smoothing_ and _over-squashing_. Assume we have two models, G-MHKG(1) and G-MHKG(2), under the same assumptions as in Lemma 2; additionally, let the two models share the same weight matrix \(\mathbf{W}\). Then we have the following conclusion.
**Lemma 3** (Trade-off).: _For two models G-MHKG(1) and G-MHKG(2), suppose G-MHKG(1) has lower over-squashing than G-MHKG(2), i.e., \(\mathrm{e}^{f_{1}(\mathbf{\Lambda})}+\mathrm{e}^{g_{1}(\mathbf{\Lambda})}<\mathrm{e}^{f_{2}(\mathbf{\Lambda})}+\mathrm{e}^{g_{2}(\mathbf{\Lambda})}\), where \(f_{1},g_{1}\) and \(f_{2},g_{2}\) are the filtering functions of the two models, respectively. Then on any iteration, i.e., from \(\mathbf{H}^{(\ell-1)}\) to \(\mathbf{H}^{(\ell)}\), with the same initial node features \(\mathbf{H}^{(\ell-1)}\), we have the following inequality in terms of the Dirichlet energy of the two models:_
\[\mathbf{E}(\mathbf{H}^{(\ell)})_{1}<\mathbf{E}(\mathbf{H}^{(\ell)})_{2}.\]
_In words, the model with lower over-squashing, G-MHKG(1), induces a stronger feature smoothing effect._
We include the proof in Appendix C.3. Our conclusion can also be applied to single-scale GNNs. To gain a clear understanding of the trade-off described in Lemma 3, note that since the sum of the filtering functions of G-MHKG(2) dominates that of G-MHKG(1), G-MHKG(2) yields a higher Dirichlet energy of the node features and thus produces less smoothing than G-MHKG(1). Meanwhile, the higher filtered eigenvalues of G-MHKG(2) also increase the model's over-squashing, based on Lemma 2 and Remark 3. These observations directly suggest that imposing more sharpening effect in a model to prevent over-smoothing will lead to more over-squashing.
## 7 Time manipulation: How to handle two issues and beyond
While we have illustrated the fundamental relationship (trade-off) between over-smoothing and over-squashing, whether GNN models can handle both issues naturally becomes the next question. Specifically, one shall require a GNN to be HFD to avoid over-smoothing, while increasing the quantities in \(\mathbf{S}\) to decrease over-squashing. In terms of G-MHKG, due to the property of the exponential function, the diagonal entries of \(\mathrm{diag}(\theta_{1})\mathrm{e}^{f(\mathbf{\Lambda})}+\mathrm{diag}(\theta_{2})\mathrm{e}^{g(\mathbf{\Lambda})}\) cannot equal \(0\) unless we set \(\theta_{1}=\theta_{2}=0\) in the case \(\mathrm{e}^{f(\mathbf{\Lambda})}\neq\mathrm{e}^{g(\mathbf{\Lambda})}\), or \(\theta_{1}=-\theta_{2}\) when \(\mathrm{e}^{f(\mathbf{\Lambda})}=\mathrm{e}^{g(\mathbf{\Lambda})}\). Nonetheless, to satisfy the condition, we delay the occurrence of HFD by setting \(\theta_{1}=\theta_{2}=0\) when \(\lambda=0\); our result is summarized in the following theorem.
**Theorem 2** (Delayed HFD (D-HFD)).: _G-MHKG is capable of handling both issues with an HFD dynamic and non-increasing over-squashing. Specifically, let \(k\) be the number of connected components of \(\mathcal{G}\) and \(\theta_{1},\theta_{2}\geq 0\); then both issues can be sufficiently handled by setting \(\mathrm{diag}(\theta_{1})_{i,i}=\mathrm{diag}(\theta_{2})_{i,i}=0\) for \(i\in[1,k]\), and \(\mathrm{diag}(\theta_{1})_{i,i}\mathrm{e}^{f(\mathbf{\Lambda}_{i,i})}+\mathrm{diag}(\theta_{2})_{i,i}\mathrm{e}^{g(\mathbf{\Lambda}_{i,i})}<\mathbf{\Lambda}_{i,i}\) for \(i\in[k+1,N]\)._
We include the detailed proof in Appendix C.4 with additional discussions and clarifications. Importantly, the reason we treat the zero eigenvalues of the graph separately in Theorem 2 is that, to sufficiently decrease over-squashing, one shall require the filtered eigenvalues to be no larger than the graph spectrum; therefore, when an eigenvalue is \(0\), the sufficient condition is to require the filtering result to equal \(0\) so as to maintain the over-squashing level. Figure 2 shows the comparison between different kinds of filtering functions with regard to their effect on over-smoothing and over-squashing. In addition, although Theorem 2 shows how a D-HFD model handles both issues, this conclusion is more applicable to heterophily graphs according to Remark 1. However, as we show in the next theorem, it is impossible for the model to be LFD and decrease over-squashing at the same time.
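One simple way to realize the conditions of Theorem 2 in the spectral domain is sketched below; the linear shape and the factor \(\epsilon<1\) are our own illustrative choices, and any filter that is zero on the first \(k\) eigenvalues, monotonically increasing afterwards, and pointwise below \(\mathbf{\Lambda}\) would do:

```python
import torch

def dhfd_response(lam: torch.Tensor, k: int, eps: float = 0.9) -> torch.Tensor:
    # lam: eigenvalues of L_hat sorted in ascending order (the first k entries are 0)
    out = eps * lam.clone()   # monotonically increasing and strictly below lam for lam > 0
    out[:k] = 0.0             # delayed HFD: no response on the zero eigenvalues
    return out
```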
**Theorem 3**.: _Suppose \(\theta_{1},\theta_{2}\geq 0\); then it is impossible for the model to be LFD and decrease OSQ. In other words, there must exist at least one \(\mathbf{\Lambda}^{*}\subseteq\mathbf{\Lambda}\) such that \(\mathrm{diag}(\theta_{1})\mathrm{e}^{f(\mathbf{\Lambda}^{*})}+\mathrm{diag}(\theta_{2})\mathrm{e}^{g(\mathbf{\Lambda}^{*})}\) is (1) monotonically increasing (HFD); (2) of constant quantity (thus not LFD); or (3) monotonically decreasing with image greater than \(\mathbf{\Lambda}\), thus increasing OSQ._
The proof of the theorem is in Appendix C.5. Figure 2 (right) illustrates the situation described in Theorem 3. The model with the dynamic shown in Figure 2 (right) (i.e., the first \(k\) components being 0 + dashed HFD + LFD) is not LFD and is in fact asymptotically dominated by the HFD part; thereby, according to Definition 1 and Lemma 1, the model is not L/HFD.
**Relation to existing works.** Compared to the existing work [15] that claimed both issues cannot be alleviated simultaneously, our conclusion shows that once the model is HFD, it is possible to handle both issues. This suggests the effectiveness of incorporating multi-scale GNNs when investigating the over-smoothing issue via model dynamics. Furthermore, although a similar motivation for inducing a sharpening effect on the node features was explored in [7], its effect was only verified empirically. Our analysis, under the scope of dominant dynamics and the maximal mixing of the function learned by GNNs, paves the path toward evaluating well-known GNN issues on a unified platform. Moreover, recent studies also attempted to mitigate both issues via graph surgery, by dropping highly positively curved edges, which often lead to the over-smoothing issue [15], and by rewiring the communities connected by very negatively curved edges, which are responsible for the over-squashing issue [33]. Although the relationship between graph edge curvature and graph spectra has been explored [2], a detailed comparison between spatial (curvature-based surgery) and spectral methods is still lacking, especially on how spatial surgery affects the eigen-distribution of the graph spectra [27].
## 8 Experiment
In this section, we present a variety of numerical tests for MHKG and G-MHKG. Specifically, Section 8.1 verifies how a controlled model dynamic enhances the model's adaptation power on homophily and heterophily graphs. In Section 8.2 we compare model performance under four different dynamics to illustrate how the D-HFD dynamic assists the model in handling both issues. Furthermore, Section 8.3 shows the node classification performance of MHKG and G-MHKG on real-world citation networks as well as a large-scale graph dataset. We include more discussions (i.e., computational complexity) and ablation studies in Appendix D. All experiments were conducted using PyTorch on a Tesla V100 GPU with 5,120 CUDA cores and 16GB HBM2, mounted on an HPC cluster. The source code can be found at [https://anonymous.4open.science/r/G_MHKG_accept](https://anonymous.4open.science/r/G_MHKG_accept).
**Experiment Setup.** For the settings of MHKG, we followed the form defined in Eq. (3) by setting \(f(\widehat{\mathbf{L}})=\mathbf{U}\mathrm{e}^{\mathbf{\Lambda}}\mathbf{U}^{\top}\). We note that in general \(f\) can be any _monotonic positive_ function on \(\mathbf{\Lambda}\); here we choose the one in Eq. (3) for consistency. Regarding G-MHKG, we assign an initial warm-up coefficient \(\gamma>1\) that is multiplied with \(f(\widehat{\mathbf{L}})\) if the graph is homophilic and with \(g(\widehat{\mathbf{L}})\) if the graph is heterophilic. This operation aims to ensure that G-MHKG induces more smoothing/sharpening effect when fitting different types of graphs, according to Remark 1.
Figure 2: The figure on the left represents different types of HFD filtering outcomes and the trade-off between two issues. One can check that to induce more sharpening (filtering function from bottom to top), the model will suffer more from OSQ. The figure on the right illustrates the situation described in Theorem 3.
We then re-scale the results of the filtering functions back to \([0,2]\) so that the assumption of Lemma 2 holds. We include **Cora, Citeseer, Pubmed** as homophily datasets, applying the public split [25], and **Wisconsin, Texas, Cornell** as heterophily datasets, with 60% of nodes for training and 20% each for validation and testing. In addition, we also include one large-scale graph dataset, **ogbn-arxiv**, to illustrate model scalability on large-scale datasets. The summary statistics of all included benchmarks as well as the model hyper-parameter search space are included in Appendix D.1. We set the maximum number of epochs to 200 for the citation networks and 500 for **ogbn-arxiv**. The average test accuracy and its standard deviation come from 10 runs.
### Controlled Model Dynamic
In this section, we verify Theorem 1 by assigning different quantities of \(\zeta\) so that G-MHKG can be L/HFD. Specifically, we fixed \(\theta_{1}=\mathbf{1}_{N}\), \(\mathbf{2}_{N}\), and \(\mathbf{3}_{N}\), and set the value of \(\zeta\) from 0.5 to 3 in steps of 0.5, so that the model dynamics changed from LFD (i.e., \(\zeta=0.5\)) to HFD. All other hyperparameters were fixed across the models. We conducted the experiment on **Cora** and **Texas**, and Fig. 3 shows the changes in learning accuracy on both datasets. It is clear that, as the ratio \(\zeta\) between the filtering matrices increases, the model's dynamics change from LFD to HFD, resulting in more adaptation power from homophily to heterophily graphs.
### Handling both issues with D-HFD and ablation
In this section, we show how G-MHKG with the D-HFD dynamic handles both the over-smoothing and over-squashing issues. Specifically, we compare G-MHKG's performance under the following four dynamics: LFD (\(\theta_{1}>\theta_{2}>0\)), HFD + increase in OSQ (sufficiently large \(\theta_{2}\)), D-HFD + increase in OSQ (\(\theta_{2}\) sufficiently large and the first \(k\) components of both \(\theta_{1}\) and \(\theta_{2}\) set to 0), and D-HFD + decrease in OSQ (Theorem 2).
Figure 4: Results on model with different dynamics. Number of layers from 8, 16, 32, 64, and 96.
Figure 3: Model Accuracy(%) via different ratios between filtering matrices. Left: Accuracy on **Cora**, Right: Accuracy on **Texas**.
We include more details on the model setup for inducing the aforementioned dynamics in Appendix D.2. We select two homophily graphs (**Cora** with \(k=25\) and **Citeseer** with \(k=115\)) and two heterophily graphs (**Cornell** with \(k=6\) and **Wisconsin** with \(k=16\)). We tested G-MHKG with \(8,16,32,64\) and \(96\) layers to illustrate the asymptotic behavior of the model under different dynamics. We note that this experiment also serves as an ablation study, proving the advantage of D-HFD for heterophily graphs. Fig. 4 shows the accuracy comparison between the four dynamics. One can see from Fig. 4 that for homophily graphs, LFD (blue) models always present the top performance compared to the other three dynamics, followed by the dynamic of D-HFD + decreasing OSQ (violet), as this dynamic assists the model in handling the over-squashing issue. The last two ranks are taken by D-HFD + increasing OSQ (orange) and HFD + increasing OSQ (green), with the former not increasing OSQ on the first \(k\) eigenvalues and the latter suffering from both issues. For heterophily graphs, the best accuracy is achieved via D-HFD + decreased OSQ across all layers, and the worst outcome comes from LFD, which suits homophily graphs instead.
### Results of Node classification on real-world data
We introduce the baseline models and the reasons for choosing them in Appendix D.1. We design MHKG and G-MHKG with two convolution layers followed by a softmax activation function. The results of node classification are included in Table 1, and all baseline results are listed according to the existing publications. One can see that G-MHKG shows remarkable performance on both homophily and heterophily graphs.
### Discussion on Limitation and ways forward
**Limitation on L/HFD: why is there an accuracy drop?** We found a considerable accuracy drop between the results in Table 1 and the models run with the four different dynamics and larger numbers of layers. This observation suggests that although L/HFD is proved to be more suitable for homophily/heterophily graphs, under a large number of layers, i.e., when GNNs propagate feature information over many hops, the measure of homophily level [45] becomes powerless, since this level varies through hops, thus requiring GNNs to induce layer-wise rather than overall dynamics when the layer number is high. This observation aligns with the motivation of the work in [21].
**The measure of OSQ.** Lemma 2 establishes an upper bound for the node sensitivity measure on the issue of over-squashing. While Lemma 2 offers a necessary condition, it does not guarantee that G-MHKG possesses the desired mixing power. Hence, a complementary lower bound that provides a sufficient condition becomes essential.
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline
Methods & Cora & Citeseer & Pubmed & Cornell & Texas & Wisconsin & Arxiv \\
\hline
MLP & 55.1 & 59.1 & 71.4 & **91.3\(\pm\)0.7** & **92.3\(\pm\)0.7** & **91.8\(\pm\)3.1** & 55.0\(\pm\)0.3 \\
GCN & 81.5\(\pm\)0.5 & 70.9\(\pm\)0.5 & 79.0\(\pm\)0.3 & 66.5\(\pm\)13.8 & 75.7\(\pm\)1.0 & 66.7\(\pm\)1.4 & **72.7\(\pm\)0.3** \\
GAT & 83.0\(\pm\)0.7 & **72.0\(\pm\)0.7** & 78.5\(\pm\)0.3 & 76.0\(\pm\)1.0 & 78.8\(\pm\)0.9 & 71.0\(\pm\)4.6 & 72.0\(\pm\)0.5 \\
GIN & 78.6\(\pm\)1.2 & 71.4\(\pm\)1.1 & 76.9\(\pm\)0.6 & 78.0\(\pm\)1.9 & 74.6\(\pm\)0.8 & 72.9\(\pm\)2.5 & 64.5\(\pm\)2.5 \\
HKGCN & 81.9\(\pm\)0.9 & 72.4\(\pm\)0.4 & 79.9\(\pm\)0.3 & 74.2\(\pm\)2.1 & 82.4\(\pm\)0.7 & 85.5\(\pm\)2.7 & 69.6\(\pm\)1.7 \\
GRAND & 82.9\(\pm\)1.4 & 70.8\(\pm\)1.1 & 79.2\(\pm\)1.5 & 72.2\(\pm\)3.1 & 80.2\(\pm\)1.5 & 86.4\(\pm\)2.7 & 71.2\(\pm\)0.2 \\
UFG & **83.3\(\pm\)0.5** & 71.0\(\pm\)0.6 & 79.4\(\pm\)0.4 & 83.2\(\pm\)0.3 & 82.3\(\pm\)0.9 & **91.9\(\pm\)2.1** & **72.6\(\pm\)0.1** \\
SJLR & 81.3\(\pm\)0.5 & 70.6\(\pm\)0.4 & 78.0\(\pm\)0.3 & 71.9\(\pm\)1.9 & 80.1\(\pm\)0.9 & 66.9\(\pm\)2.1 & 72.0\(\pm\)0.4 \\
\hline
MHKG & 82.8\(\pm\)0.2 & 71.6\(\pm\)0.1 & 78.9\(\pm\)0.3 & 86.2\(\pm\)0.6 & 84.5\(\pm\)0.3 & 88.9\(\pm\)0.3 & 72.1\(\pm\)0.6 \\
G-MHKG & **83.5\(\pm\)0.2** & **72.8\(\pm\)0.2** & **80.1\(\pm\)0.4** & **90.2\(\pm\)0.9** & **89.6\(\pm\)0.6** & 91.2\(\pm\)1.5 & 72.4\(\pm\)0.3 \\
\hline
\end{tabular}
\end{table}
Table 1: Performance on node classification using public split. Top two in **bold**.
Thereby, in scenarios where both over-smoothing and over-squashing need to be considered, an adjusted conclusion is sought after.
## 9 Conclusion
In this paper, we explored the underlying relationship between three fundamental issues of GNNs: over-smoothing, over-squashing, and expressive power, via the proposed G-MHKG, induced by reversing the time direction of the so-called graph heat equation. We revealed the roles of the filtering and weight matrices from gradient flow and maximal mixing perspectives, illustrating their capability of controlling the aforementioned issues. Furthermore, we showed that, under mild conditions on the filtering functions in G-MHKG, there is a fundamental trade-off between over-smoothing and over-squashing in the graph spectral domain. We further showed that our proposed model is capable of handling both issues and has an advantage in mixing smoothing and sharpening effects compared to single-scale GNNs. While we have shown the superior performance of G-MHKG empirically, many open questions still inspire us to explore further. In future work, we will attempt to discover the necessary conditions (such as a lower bound on \(\mathbf{S}\) and more complex model dynamics beyond L/HFD) for a GNN to be capable of handling all of the mentioned issues.
|
2305.08190 | TSGN: Temporal Scene Graph Neural Networks with Projected Vectorized
Representation for Multi-Agent Motion Prediction | Predicting future motions of nearby agents is essential for an autonomous
vehicle to take safe and effective actions. In this paper, we propose TSGN, a
framework using Temporal Scene Graph Neural Networks with projected vectorized
representations for multi-agent trajectory prediction. Projected vectorized
representation models the traffic scene as a graph which is constructed by a
set of vectors. These vectors represent agents, road network, and their spatial
relative relationships. All relative features under this representation are
both translation- and rotation-invariant. Based on this representation, TSGN
captures the spatial-temporal features across agents, road network,
interactions among them, and temporal dependencies of temporal traffic scenes.
TSGN can predict multimodal future trajectories for all agents simultaneously,
plausibly, and accurately. Meanwhile, we propose a Hierarchical Lane
Transformer for capturing interactions between agents and road network, which
filters the surrounding road network and only keeps the most probable lane
segments which could have an impact on the future behavior of the target agent.
Without sacrificing the prediction performance, this greatly reduces the
computational burden. Experiments show TSGN achieves state-of-the-art
performance on the Argoverse motion forecasting benchmark. | Yunong Wu, Thomas Gilles, Bogdan Stanciulescu, Fabien Moutarde | 2023-05-14T15:58:55Z | http://arxiv.org/abs/2305.08190v1 | TSGN: Temporal Scene Graph Neural Networks with Projected Vectorized Representation for Multi-Agent Motion Prediction
###### Abstract
Predicting future motions of nearby agents is essential for an autonomous vehicle to take safe and effective actions. In this paper, we propose TSGN, a framework using Temporal Scene Graph Neural Networks with projected vectorized representations for multi-agent trajectory prediction. Projected vectorized representation models the traffic scene as a graph which is constructed by a set of vectors. These vectors represent agents, road network, and their spatial relative relationships. All relative features under this representation are both translation- and rotation-invariant. Based on this representation, TSGN captures the spatial-temporal features across agents, road network, interactions among them, and temporal dependencies of temporal traffic scenes. TSGN can predict multimodal future trajectories for all agents simultaneously, plausibly, and accurately. Meanwhile, we propose a Hierarchical Lane Transformer for capturing interactions between agents and road network, which filters the surrounding road network and only keeps the most probable lane segments which could have an impact on the future behavior of the target agent. Without sacrificing the prediction performance, this greatly reduces the computational burden. Experiments show TSGN achieves state-of-the-art performance on the Argoverse motion forecasting benchmark.
## I Introduction
Predicting the motion of multiple agents is an essential step in autonomous driving. It helps autonomous driving vehicles to plan their actions and prevent accidents. However, the traffic scene is highly complex. It contains agents, road network, and interactions among them. The prediction model needs to take these entities as inputs and outputs reasonable multimodal trajectories that the target agent could take in the future.
The traffic scene is viewed differently from the perspective of different agents. However, most existing methods select one target agent at a time and take it as the central reference to perform a coordinate transformation on the entire scene, which is not symmetric for the other agents and yields only one prediction at a time. This is inefficient and not robust to the coordinate transformation. To alleviate these problems, HiVT [1] takes a symmetric way to encode the traffic scene. It divides the traffic scene into a set of local regions, each corresponding to one agent and centered on it, and uses a local-global mechanism to encode agents' features.
We adopt the symmetrical representation of HiVT and its local-global mechanisms. Based on them, we propose an extended scene representation method: Projected vectorized representation and an optimized prediction framework: TSGN. Our representation method introduces more features about agents and lane segments. It projects all relative spatial features between entities into the direction of the target entity for every timestamp, such that, all these relative features under this representation are also both translation- and rotation-invariant. TSGN consists of diverse interaction modules and temporal dependencies modules. These modules are constructed based on the transformer structure. TSGN treats the traffic scene in its entirety and encodes the agents' dynamics, surrounding lane segments, and interactions among them across time steps. In addition, we present a Hierarchical Lane Transformer module inside TSGN to capture the influence of the road network on the future motion of target agents. Unlike most approaches that use the full structural information of the surrounding road network, the Hierarchical Lane Transformer only selects the most probable lane segments which could have an impact on the future behavior of the target agent. It greatly reduces the computation burden and enables the model to do faster predictions without sacrificing the prediction performance.
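As an illustration of the projected vectorized idea (a minimal sketch of ours, not the authors' implementation), the snippet below forms pairwise relative positions and rotates them into each target agent's heading frame, which makes the resulting features both translation- and rotation-invariant. The array shapes and the function name are our own assumptions.

```python
import numpy as np

def relative_features(pos, heading):
    """Project pairwise relative positions into each target agent's heading
    frame, yielding translation- and rotation-invariant features.

    pos     : (N, 2) agent positions at one timestamp
    heading : (N,)   agent heading angles in radians
    returns : (N, N, 2); entry [i, j] is agent j's offset expressed in
              agent i's local frame
    """
    rel = pos[None, :, :] - pos[:, None, :]               # translation-invariant offsets
    cos, sin = np.cos(heading), np.sin(heading)
    # R(-heading): maps world-frame vectors into each agent's local frame
    rot = np.stack([np.stack([cos, sin], -1),
                    np.stack([-sin, cos], -1)], axis=-2)  # (N, 2, 2)
    return np.einsum('iab,ijb->ija', rot, rel)            # rotation-invariant projection

# toy usage with three agents
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
heading = np.array([0.0, np.pi / 2, np.pi])
feats = relative_features(pos, heading)
```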
Our contributions are summarized as follows:
* We extend the scene representation method of HiVT by adding new features about agents, lane segments, and their spatial relative relationships. We adopt a projected vectorized representation for all relative features of the traffic scene at every timestamp, which describes the relative spatial relationships in a more detailed way.
* We propose a Hierarchical Lane Transformer which only selects lane segments that affect the future behaviors of target agents the most for modeling the interactions, enabling much faster and equally accurate predictions.
* Our designed TSGN can make plausible and accurate predictions. We validate the performance of TSGN on Argoverse forecasting benchmark. It ranked \(1^{st}\) in terms of minADE on the leaderboard on September 13, 2022.
## II Related Work
Recently, deep learning-based models have brought great progress to motion prediction. According to their representation methods, these learning-based models can be divided into rasterized-representation-based models and vectorized-representation-based models. |
2308.00242 | Circumvent spherical Bessel function nulls for open sphere microphone
arrays with physics informed neural network | Open sphere microphone arrays (OSMAs) are simple to design and do not
introduce scattering fields, and thus can be more advantageous than other arrays for
implementing spatial acoustic algorithms under spherical model decomposition.
However, an OSMA suffers from spherical Bessel function nulls which make it
hard to obtain some sound field coefficients at certain frequencies. This paper
proposes to assist an OSMA for sound field analysis with physics informed
neural network (PINN). A PINN models the measurement of an OSMA and predicts
the sound field on another sphere whose radius is different from that of the
OSMA. Thanks to the fact that spherical Bessel function nulls vary with radius,
the sound field coefficients which are hard to obtain based on the OSMA
measurement directly can be obtained based on the prediction. Simulations
confirm the effectiveness of this approach and compare it with the rigid sphere
approach. | Fei Ma, Thushara D. Abhayapala, Prasanga N. Samarasinghe | 2023-08-01T02:50:32Z | http://arxiv.org/abs/2308.00242v1 | Circumvent Spherical Bessel Function Nulls for Open Sphere Microphone Arrays with Physics Informed Neural Network
###### Abstract
Open sphere microphone arrays (OSMAs) are simple to design and do not introduce scattering fields, and thus can be more advantageous than other arrays for implementing spatial acoustic algorithms under spherical model decomposition. However, an OSMA suffers from spherical Bessel function nulls which make it hard to obtain some sound field coefficients at certain frequencies. This paper proposes to assist an OSMA for sound field analysis with a physics informed neural network (PINN). A PINN models the measurement of an OSMA and predicts the sound field on another sphere whose radius is different from that of the OSMA. Thanks to the fact that spherical Bessel function nulls vary with radius, the sound field coefficients which are hard to obtain based on the OSMA measurement directly can be obtained based on the prediction. Simulations confirm the effectiveness of this approach and compare it with the rigid sphere approach.
**Keywords:**_Microphone array signal processing, physics informed neural network, spherical harmonics._
Fei Ma & Thushara D. Abhayapala & Prasanga N. Samarasinghe College of Engineering, Computing \(\&\) Cybernetics, Australian National University, Australia
Footnote 1: _Copyright_: ©2023 _This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 Unported License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited._
## 1 Introduction
The products of the spherical harmonics (SHs) and the spherical Bessel functions (or the spherical Hankel functions) form the spherical modes [1], a complete and orthogonal function set for the Helmholtz equation, the governing partial differential equation (PDE) of acoustic wave propagation. The SH decomposition of a sound field (the angular-dependent SHs, the radial-dependent spherical Bessel functions, and the frequency-dependent sound field coefficients) greatly facilitates its analysis and manipulation [1, 2, 3]. Thus, spherical modal decomposition has become popular in many diverse spatial acoustic applications, such as spatial active noise control [4, 5, 6], beamforming [3, 7, 8], and direction of arrival estimation [9, 10, 11].
Due to their simplicity, open sphere microphone arrays (OSMAs) are intuitively chosen for implementing the spherical modal decomposition [7]. However, the spherical Bessel function nulls make it hard to obtain some orders of the sound field coefficients at certain frequencies with an OSMA. We can mitigate this problem by arranging microphones on a rigid sphere [12], arranging them inside a spherical shell, or using vector sensors on an open sphere [3, 13, 14]. However, those approaches unavoidably introduce scattering fields, require more microphones, and significantly increase the cost, respectively.
In this paper, we propose to circumvent the problem of spherical Bessel function nulls for an OSMA with the help of a physics informed neural network (PINN) [15, 16, 17], a neural network which incorporates physical knowledge into its architecture and training. We model the measurement of an OSMA with a PINN, and then use it to predict the sound field on another sphere whose radius is different from that of the OSMA. Thanks to the fact that the spherical Bessel function nulls vary with radius, we can obtain the sound field coefficients which are difficult to obtain with the OSMA measurement based on the predicted sound field. The effectiveness of this approach is confirmed by simulations and compared with the rigid sphere approach.
## 2 Problem Formulation
We consider the setup shown in Fig. 1, where there are \(Q\) omni-directional pressure microphones on an open sphere \(\mathbb{S}^{2}\) of radius \(r_{a}\) and some sound sources. The Cartesian coordinates and the spherical coordinates of a point with respect to an origin \(O\) are denoted as \((x,y,z)\) and \((r,\theta,\phi)\), respectively [3]. One would like to reconstruct the sound field around the sphere or locate the sound sources based on the OSMA measurement.
The tasks could be approached with SH decomposition. We decompose the sound pressure at the microphone positions \(\{(r_{a},\theta_{q},\phi_{q})\}_{q=1}^{Q}\) onto SHs as [1]
\[P(\omega,r_{a},\theta_{q},\phi_{q})\approx\sum_{u=0}^{U}\sum_{v=-u}^{u}\mathsf{P}_{u,v}(\omega,r_{a})Y_{u,v}(\theta_{q},\phi_{q})=\sum_{u=0}^{U}\sum_{v=-u}^{u}\mathsf{K}_{u,v}(\omega)j_{u}(\omega r_{a}/s)Y_{u,v}(\theta_{q},\phi_{q}), \tag{1}\]
where \(\omega=2\pi f\) is the angular frequency (\(f\) is the frequency), \(s\) is the speed of sound propagation, \(U=\lceil 2\pi fr_{a}/s\rceil\) is the up-order of the SHs that are needed to represent the sound pressure accurately [18] (\(\lceil\cdot\rceil\) is the ceiling operation), \(\mathsf{P}_{u,v}(\omega,r_{a})\) are the pressure field coefficients, \(\mathsf{K}_{u,v}(\omega)\) are the sound field coefficients [1], \(j_{u}(\cdot)\) is the spherical Bessel function of the first kind of order \(u\), and \(Y_{u,v}(\theta,\phi)\) is the SH of order \(u\) and degree \(v\) [1] evaluated at \((\theta,\phi)\).
The sound field coefficients \(\mathsf{K}_{u,v}(\omega)\) characterize the sound sources and allow us to reconstruct the sound field or to locate the sound sources [3]. To obtain the sound field coefficients, we first estimate the pressure field coefficients through [3]
\[\hat{\mathsf{P}}_{u,v}(\omega,r_{a})=\sum_{q=1}^{Q}P(\omega,r_{a},\theta_{q},\phi_{q})Y_{u,v}(\theta_{q},\phi_{q})\gamma_{q}, \tag{2}\]
where \(\{\gamma_{q}\}_{q=1}^{Q}\) are the sampling weights, and then estimate the sound field coefficients through
\[\hat{\mathsf{K}}_{u,v}(\omega) = \hat{\mathsf{P}}_{u,v}(\omega,r_{a})/j_{u}(\omega r_{a}/s). \tag{3}\]
The problem with (3) is the spherical Bessel function \(j_{u}(\cdot)\) nulls [3, 13, 14]. Figure 2 presents \(j_{u}(2\pi fr_{a}/s)\) with \(r_{a}=0.05\) m, \(s=343\) m/s, \(u=0,1,2,3,4\). We can see that \(j_{0}(2\pi fr_{a}/s)=0\) for \(f=3430\) Hz, and \(j_{1}(2\pi fr_{a}/s)=0\) for \(f=4905\) Hz. This makes it difficult to estimate the sound field coefficients of order 0, \(\mathsf{K}_{0,0}(\omega)\), and order 1, \(\mathsf{K}_{1,v}(\omega)\), at frequency 3430 Hz and 4905 Hz with an OSMA array of radius \(r_{a}=0.05\) m.
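These null locations are easy to verify numerically; the following minimal check with SciPy (our own illustration, using `scipy.special.spherical_jn`) reproduces the nulls shown in Fig. 2 by locating sign changes of \(j_{u}(2\pi fr_{a}/s)\):

```python
import numpy as np
from scipy.special import spherical_jn

r_a, s = 0.05, 343.0                      # array radius (m), speed of sound (m/s)
f = np.linspace(100.0, 6000.0, 5901)

# j_u(2*pi*f*r_a/s) for orders u = 0..4; a sign change brackets a null,
# i.e., a frequency at which that order's coefficients are unobservable
for u in range(5):
    j = spherical_jn(u, 2.0 * np.pi * f * r_a / s)
    nulls = f[:-1][np.sign(j[:-1]) != np.sign(j[1:])]
    print(f"order {u}: nulls near {np.round(nulls)} Hz")
# order 0 has a null near 3430 Hz and order 1 near 4905 Hz, matching Fig. 2
```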
In this paper, we aim to circumvent the problem of spherical Bessel function nulls for an OSMA.
## 3 PINN Assisted OSMA
In this section, we propose a PINN method to assist an OSMA for sound field analysis. To simplify the calculation of the Laplacian, we express acoustic quantities in Cartesian coordinates.
The key idea is to exploit the fact that spherical Bessel function nulls vary with radius. The spherical Bessel function \(j_{u}(\cdot)\) is a function of both frequency \(f\) and radius \(r\), so \(j_{u}(2\pi fr_{b}/s)\neq 0\) if \(r_{b}\neq r_{a}\) and
Figure 1: A microphone array on an open sphere and some sound sources \(\star\).
Figure 2: The spherical Bessel function \(j_{u}(2\pi fr_{a}/s)\) as a function of frequency, \(r_{a}=0.05\) m, \(s=343\) m/s, \(u=0,1,2,3,4\).
\(j_{u}(2\pi fr_{a}/s)=0\). Thus, we can obtain the sound field coefficients which are difficult to obtain with an OSMA of radius \(r_{a}\) from the sound field on another sphere of radius \(r_{b}\). An OSMA of radius \(r_{a}\) cannot measure the sound field on another sphere of radius \(r_{b}\) directly, but we can build a PINN [15, 16, 17] to predict the sound field on the other sphere based on the measurement of the OSMA.
We build an \(L\)-layer fully connected feedforward neural network [15] with \(N\) nodes on each layer, whose inputs are the Cartesian coordinates \((x,y,z)\) and whose output is the sound field estimate \(\hat{P}_{\rm PI}(\omega,x,y,z)\), and we update the trainable parameters of the network by minimizing the following cost function
\[\mathcal{L}=\underbrace{\frac{1}{Q}\sum_{q=1}^{Q}\|P(\omega,x_{q},y_{q},z_{q})-\hat{P}_{\rm PI}(\omega,x_{q},y_{q},z_{q})\|_{2}^{2}}_{\mathcal{E}_{\rm data}}+\underbrace{\frac{1}{A}\sum_{a=1}^{A}\left\|\frac{\nabla\hat{P}_{\rm PI}(\omega,x_{a},y_{a},z_{a})}{(\omega/s)^{2}}+\hat{P}_{\rm PI}(\omega,x_{a},y_{a},z_{a})\right\|_{2}^{2}}_{\mathcal{E}_{\rm PDE}}, \tag{4}\]
where \(\|\cdot\|_{2}\) is the 2-norm, \(\nabla\equiv\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y ^{2}}+\frac{\partial^{2}}{\partial z^{2}}\) is the Laplacian. The data loss \(\mathcal{E}_{\rm data}\) makes the network output to approximate the OSMA measurement \(\{P(\omega,x_{q},y_{q},z_{q})\}_{q=1}^{Q}\) where \((x_{q},y_{q},z_{q})_{q=1}^{Q}\) correspond to \((r_{a},\theta_{q},\phi_{q})_{q=1}^{Q}\). The PDE loss \(\mathcal{E}_{\rm PDE}\) informs the network output to conform with the Helmholtz equation on the measurement sphere of radius \(r_{a}\), where \(\{(x_{a},y_{a},z_{a})\}_{a=1}^{A}\) are uniformly arranged sampling points on the sphere.
To obtain the sound field coefficients, we first train the PINN and use it to estimate the pressure \(\hat{P}_{\rm PI}(\omega,x_{d},y_{d},z_{d})\) (equal to \(\hat{P}_{\rm PI}(\omega,r_{b},\theta_{d},\phi_{d})\)) on a sphere of radius \(r_{b}\). Next, we estimate the pressure field coefficients \(\hat{\mathsf{P}}_{u,v}(\omega,r_{b})\) through
\[\hat{\rm P}_{u,v}(\omega,r_{b})=\sum_{d=1}^{D}\hat{P}_{\rm PI}( \omega,r_{b},\theta_{d},\phi_{d})Y_{u,v}(\theta_{d},\phi_{d})\gamma_{d}, \tag{5}\]
where \(\{\gamma_{d}\}_{d=1}^{D}\) are the sampling weights [3]. We further estimate the sound field coefficients through
\[\hat{\rm K}_{u,v}(\omega){=}\hat{\rm P}_{u,v}(\omega,r_{b})/j_{u}( \omega r_{b}/s). \tag{6}\]
In summary, for spatial acoustics with an OSMA, we can estimate the sound field coefficients through (3) when \(j_{u}(kr_{a})\neq 0\) and through (4), (5), (6) when \(j_{u}(kr_{a})=0\). In this way, the problem of spherical Bessel function nulls is circumvented.
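To make the pipeline concrete, the following sketch recovers the order-0 coefficient from a trained network's prediction on a sphere of radius \(r_{b}\); the `pinn_pressure` callable and the equal-weight quadrature are our own simplifying assumptions, not the paper's exact sampling scheme.

```python
import numpy as np
from scipy.special import spherical_jn

def estimate_K00(pinn_pressure, r_b, f, s=343.0, D=500, seed=0):
    """Recover K_{0,0} from the PINN prediction on a sphere of radius r_b,
    chosen so that j_0(2*pi*f*r_b/s) != 0.  `pinn_pressure` maps (D, 3)
    Cartesian points to predicted pressures."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(D, 3))                   # points scattered on the sphere
    xyz = r_b * v / np.linalg.norm(v, axis=1, keepdims=True)
    P_hat = pinn_pressure(xyz)
    Y00 = 1.0 / np.sqrt(4.0 * np.pi)              # constant spherical harmonic
    gamma = 4.0 * np.pi / D                       # equal sampling weights
    P00 = np.sum(P_hat * Y00 * gamma)             # Eq. (5)
    return P00 / spherical_jn(0, 2.0 * np.pi * f * r_b / s)   # Eq. (6)
```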
Note that the spherical Bessel function nulls are a problem under the spherical modal decomposition, but they are not a problem for the PINN. This is the fundamental fact that makes PINN-assisted OSMA sound field analysis possible.
## 4 Simulation
In this section, we use a sound field reconstruction task to demonstrate the performance of the PINN assisted OSMA and compare it with the rigid sphere approach.
We consider the setup shown in Fig. 1. There is an OSMA of radius \(r_{a}=0.05\) m with 36 uniformly arranged omni-directional pressure microphones on it. There is a sound source located at \((0.5\ \mathrm{m},0.5\ \mathrm{m},0.75\ \mathrm{m})\). The sound source generates a unit amplitude sinusoidal signal at \(f=3430\) Hz. In this case, the up-order of SHs needed to represent the sound field is \(U=\lceil 2\pi\times 3430\times 0.05/343\rceil=4\) [18]. The transfer functions between the sound source and the microphones are simulated based on the Green's function [1]. The aim is to reconstruct the sound field on a smaller sphere of radius \(r_{c}=0.04\) m.
Three approaches for sound field reconstruction are considered. The first is the OSMA approach based on the spherical modal decomposition. For this approach, we estimate the sound field coefficients \(\{\hat{\rm K}_{u,v}(\omega)\}_{u=1}^{4}\) through (3) and reconstruct the sound field on the smaller sphere by
\[\hat{P}_{\rm SH}(\omega,r_{c},\theta,\phi){\approx}\sum_{u=1}^{U} \sum_{v=-u}^{u}\hat{\rm K}_{u,v}(\omega)j_{u}(\omega r_{c}/s)Y_{u,v}(\theta, \phi), \tag{7}\]
because \(\hat{\rm K}_{0,0}(\omega)\) is not obtainable.
The second one is the PINN assisted OSMA method. For this method, we build an \(L=3\) layer and \(N=3\) node PINN, with the activation function being \(\tanh\), and initialize the trainable parameters with the Xavier initialization [19]. The PINN is trained for 10\({}^{8}\) epochs with a learning rate of 10\({}^{-5}\) using the ADAM optimizer. The data loss \(\mathcal{E}_{\rm data}\) is evaluated with respect to the 36 microphone measurements, and the PDE loss \(\mathcal{E}_{\rm PDE}\) with respect to the Cartesian coordinates of 500 uniformly arranged sampling points on the sphere of radius \(r_{a}=0.05\) m. We first estimate the sound field coefficients \(\{\hat{\mathsf{K}}_{u,v}(\omega)\}_{u=1}^{4}\) through (3) and \(\hat{\mathsf{K}}_{0,0}(\omega)\) through (4), (5), and (6) with \(r_{b}=0.048\) m, and next reconstruct the sound field on the smaller sphere
similarly to (7) but with the term \(\hat{\mathsf{K}}_{0,0}(\omega)j_{0}(\omega r_{c}/s)Y_{0,0}(\theta,\phi)\) included.
The third one is the rigid sphere approach. The OSMA in Fig. 1 is replaced with a rigid sphere of the same radius, and the rest of the simulation setting is the same. We reconstruct the sound field on the sphere of radius \(r_{c}\) as
\[P(\omega,r_{c},\theta,\phi) \approx \sum_{u=0}^{U}G_{u}(\omega,r_{c},r_{a}) \tag{8}\] \[\times \sum_{v=-u}^{u}\hat{\mathbb{P}}_{u,v}(\omega,r_{a})Y_{u,v}(\theta,\phi),\]
where the pressure field coefficients \(\hat{\mathbb{P}}_{u,v}(\omega,r_{a})\) are obtained similar to (2), \(G_{u}(\omega,r_{c},r_{a})\) is the radial translator [3]
\[G_{u}(\omega,r_{c},r_{a})=\frac{h^{\prime}_{u}(\omega r_{a}/s)j_{u}(\omega r_{c}/s)}{j_{u}(\omega r_{a}/s)h^{\prime}_{u}(\omega r_{a}/s)-j^{\prime}_{u}(\omega r_{a}/s)h_{u}(\omega r_{a}/s)}, \tag{9}\]
\(j_{u}(\cdot)\) and \(h_{u}(\cdot)\) are the spherical Bessel function of the first kind and the spherical Hankel function of the second kind, respectively, and \(j^{\prime}_{u}(\cdot)\) and \(h^{\prime}_{u}(\cdot)\) are corresponding derivatives with respect to argument.
We denote the reconstruction error as
\[\epsilon=\frac{\sum_{d=1}^{100}||P(\omega,r_{c},\theta_{d},\phi_{d})-\hat{P}( \omega,r_{c},\theta_{d},\phi_{d})||_{2}^{2}}{\sum_{d=1}^{100}||P(\omega,r_{c}, \theta_{d},\phi_{d})||_{2}^{2}}, \tag{10}\]
Figure 4: Sound field at \(f=4905\) Hz : (a) the ground truth, (b) OSMA reconstruction, (c) PINN assisted OSMA reconstruction, and (d) the rigid sphere reconstruction.
Figure 3: Sound field at \(f=3430\) Hz : (a) the ground truth, (b) OSMA reconstruction, (c) PINN assisted OSMA reconstruction, and (d) the rigid sphere reconstruction.
where \(P(\omega,r_{c},\theta_{d},\phi_{d})\) and \(\hat{P}(\omega,r_{c},\theta_{d},\phi_{d})\) are the true pressure and its reconstruction at 100 uniformly selected sampling positions \((\theta_{d},\phi_{d})_{d=1}^{100}\).
The real part of the ground truth and its reconstructions by the three methods are shown in Fig. 3. Comparing Fig. 3 (b) and (a), we can see that, with the sound field component \(\mathsf{K}_{0,0}(\omega)j_{0}(\omega r_{c}/s)Y_{0,0}(\theta,\phi)\) missing, the OSMA approach is unable to accurately reconstruct the ground truth. Comparing Fig. 3 (c), (d) and (a), we can see that the PINN assisted OSMA approach and the rigid sphere approach are able to accurately reconstruct the sound field. The reconstruction errors of the OSMA approach, the PINN assisted OSMA approach, and the rigid sphere approach are -8.5 dB, -28.4 dB and -29.3 dB, respectively. The simulation results of the three approaches for reconstructing the imaginary part of the ground truth are similar to Fig. 3, and thus are not shown for brevity.
The simulation is repeated at \(f=4905\) Hz. We arrange the sound source at \((0.5\ \mathrm{m},-0.5\ \mathrm{m},-0.75\ \mathrm{m})\) and the rest of the simulation settings are the same as in the \(f=3430\) Hz case. The real part of the ground truth and its reconstructions by the three methods are shown in Fig. 4. The reconstruction errors of the OSMA approach, the PINN assisted OSMA approach, and the rigid sphere approach are -10.2 dB, -31.6 dB and -32.3 dB, respectively.
The PINN-assisted OSMA approach performs comparably to the rigid sphere approach. Nonetheless, the rigid sphere approach has a drawback: the scattering effect. In Fig. 5, we present the scattering field around the rigid sphere at 3430 Hz. For nearfield applications, where the rigid sphere is placed close to some object, the scattering field will further undergo multiple scatterings and is highly undesirable [20]. The PINN assisted OSMA approach, on the other hand, does not suffer from the scattering problems.
Figure 6 presents the sound field reconstruction error \(\epsilon\) of a pure PINN method as a function of the reconstruction sphere radius \(r_{c}\) at 3430 Hz and 4905 Hz. In this case, the sound field is reconstructed directly from the PINN prediction alone, and does not go through the SH decomposition process. From Fig. 6, we can see that the reconstruction error \(\epsilon\) is small when the reconstruction sphere radius is close to the array radius, i.e., \(r_{c}\approx r_{a}=0.05\) m, and increases when they are not close, \(r_{c}\not\approx r_{a}=0.05\) m. It is interesting that the reconstruction error \(\epsilon\) decreases again when the reconstruction sphere radius is small, i.e., \(r_{c}<0.025\) m. From a spherical modal decomposition point of view, this can be explained by the fact that when the reconstruction sphere radius is small, fewer SH coefficients are needed to describe the sound field.
## 5 Conclusion
In this paper, we proposed to assist an OSMA for sound field analysis with a PINN. Under SH decomposition, the OSMA suffers from the spherical Bessel function nulls and is unable to obtain some orders of sound field coefficients at certain frequencies. We use a PINN to predict the sound field on a sphere whose radius is different from that of the OSMA, and obtain those orders of sound field coefficients based on the PINN prediction. The performance of this approach is comparable to that of the rigid sphere approach, and it does not introduce a scattering field.
Figure 5: The scattering field around a rigid sphere at 3430 Hz.
Figure 6: Sound field reconstruction error \(\epsilon\) of the pure PINN method as a function of the reconstruction sphere radius \(r_{c}\) at 3430 Hz and 4905 Hz.
## 6 Acknowledgments
We thank Hanwen Bi from Australian National University for reviewing the simulation code, which has led to significant performance improvement of the simulation results.
|
2303.12534 | Inexact iterative numerical linear algebra for neural network-based
spectral estimation and rare-event prediction | Understanding dynamics in complex systems is challenging because there are
many degrees of freedom, and those that are most important for describing
events of interest are often not obvious. The leading eigenfunctions of the
transition operator are useful for visualization, and they can provide an
efficient basis for computing statistics such as the likelihood and average
time of events (predictions). Here we develop inexact iterative linear algebra
methods for computing these eigenfunctions (spectral estimation) and making
predictions from a data set of short trajectories sampled at finite intervals.
We demonstrate the methods on a low-dimensional model that facilitates
visualization and a high-dimensional model of a biomolecular system.
Implications for the prediction problem in reinforcement learning are
discussed. | John Strahan, Spencer C. Guo, Chatipat Lorpaiboon, Aaron R. Dinner, Jonathan Weare | 2023-03-22T13:07:03Z | http://arxiv.org/abs/2303.12534v3 | Inexact iterative numerical linear algebra for neural network-based spectral estimation and rare-event prediction
###### Abstract
Understanding dynamics in complex systems is challenging because there are many degrees of freedom, and those that are most important for describing events of interest are often not obvious. The leading eigenfunctions of the transition operator are useful for visualization, and they can provide an efficient basis for computing statistics such as the likelihood and average time of events (predictions). Here we develop inexact iterative linear algebra methods for computing these eigenfunctions (spectral estimation) and making predictions from a data set of short trajectories sampled at finite intervals. We demonstrate the methods on a low-dimensional model that facilitates visualization and a high-dimensional model of a biomolecular system. Implications for the prediction problem in reinforcement learning are discussed.
## 1 Introduction
Modern observational, experimental, and computational approaches often yield high-dimensional time series data (trajectories) for complex systems. In principle, these trajectories contain rich information about dynamics and, in particular, the infrequent events that are often most consequential. In practice, however, high-dimensional trajectory data are often difficult to parse for useful insight. The need for more efficient statistical analysis tools for trajectory data is critical, especially when the goal is to understand rare events that may not be well represented in the data.
We consider dynamics that can be treated as Markov processes. A common starting point for statistical analyses of Markov processes is the transition operator, which describes the evolution of function expectations. The eigenfunctions of the transition operator characterize the most slowly decorrelating features (modes) of the system [1, 2, 3, 4, 5]. These can be used for dimensionality reduction to obtain a qualitative understanding of the dynamics [6, 7], or they can be used as the starting point for further computations [8, 9, 10]. Similarly, prediction functions, which provide information about the likelihood and timing of future events as a function of the current state, are defined through linear equations of the transition operator[11, 10].
A straightforward numerical approach to obtaining these functions is to convert the transition operator to a matrix by projecting onto a finite basis for Galerkin approximation[1, 2, 10, 11, 12, 13, 14, 15]. The performance of such a linear approximation depends on the choice of basis [11, 10, 15], and previous work often resorts to a set of indicator functions on a partition of the state space (resulting in a Markov state model or MSM [14]) for lack of a better choice. While Galerkin approximation has yielded many insights [16, 17], the limited expressivity of the basis expansion has stimulated interest in (nonlinear) alternatives.
In particular, artificial neural networks can be harnessed to learn eigenfunctions of the transition operator and prediction functions from data [18, 19, 20, 21, 22, 23, 24, 25]. However, existing approaches based on neural networks suffer from various drawbacks. As discussed in Ref. [5], their performance can often be very sensitive to hyperparameters, requiring extensive tuning and varying with random initialization. Many use loss functions that are estimated against the stationary distribution [25, 26, 27, 28, 29, 30], so that metastable states contribute most heavily, which negatively impacts performance [24, 30]. Assumptions about the dynamics (e.g., microscopic reversibility) limit applicability. In Ref. [24] we introduced an approach that overcomes the issues above, but it uses multiple trajectories from each initial condition; this limits the approach to analysis of simulations and moreover requires specially prepared data sets.
The need to compute prediction functions from observed trajectory data also arises in reinforcement learning. There the goal is to optimize an expected future reward (the prediction function) over a policy (a Markov process). For a fixed Markov process, the prediction problem in reinforcement learning is often solved by temporal difference (TD) methods, which allow the use of arbitrary ensembles of trajectories without knowledge of the details of the underlying dynamics [31]. TD methods have a close relationship with an inexact form of power iteration, which, as we describe, can perform poorly on rare-event related problems.
Motivated by this relationship, as well as by an inexact power iteration scheme previously proposed for approximating the stationary probability distribution of a Markov process using trajectory data [32], we propose a computational framework for spectral estimation and rare-event prediction based on inexact iterative numerical linear algebra. Our framework includes an inexact Richardson iteration for the prediction problem, as well as an extension to inexact subspace iteration for the prediction and spectral estimation problems. The theoretical properties of exact subspace iteration suggest that eigenfunctions outside the span of the approximation will contribute significantly to the error of our inexact iterative schemes[33]. Consistent with this prediction, we demonstrate that learning additional eigenvalues and eigenfunctions simultaneously through inexact subspace iteration accelerates convergence dramatically relative to inexact Richardson and power iteration in the context of rare events. While we assume the dynamics can be modeled by a Markov process, we do not require knowledge of their form or a specific underlying model. The method shares a number of further advantages with the approach discussed in Ref. [24] without the need for multiple trajectories from each initial condition in the data set. This opens the door to treating a wide range of observational, experimental, and computational data sets.
The remainder of the paper is organized as follows. In Section 2, we describe the quantities that we seek to compute in terms of linear operators. In Sections 3 and 4, we introduce an inexact subspace iteration algorithm that we use to solve for these quantities. Section 5 illustrates how the loss function can be tailored to the known properties of the desired quantity. Section 6 summarizes the two test systems that we use to illustrate our methods: a two-dimensional potential, for which we can compute accurate reference solutions, and a molecular example that is high-dimensional but still sufficiently tractable that statistics for comparison can be computed from long trajectories. In Section 7, we explain the details of the invariant subspace iteration and then demonstrate its application to our two examples. Lastly, Section 8 details how the subspace iteration can be modified to compute prediction functions and compares the effect of different loss functions, as well as the convergence properties of power iteration and subspace iteration. We conclude with implications for reinforcement learning.
## 2 Spectral estimation and the prediction problem
We have two primary applications in mind in this article. First, we would like to estimate the _dominant eigenfunctions and eigenvalues_ of the transition operator \(\mathcal{T}^{t}\) for a Markov process \(X^{t}\in\mathbb{R}^{d}\), defined as
\[\mathcal{T}^{t}f(x)=\mathbb{E}_{x}\left[f(X^{t})\right], \tag{1}\]
where \(f\) is an arbitrary real-valued function and the subscript \(x\) indicates the initial condition \(X^{0}=x\). The transition operator encodes how expectations of functions evolve in time. The right eigenfunctions of \(\mathcal{T}^{t}\) with the largest eigenvalues characterize the most slowly decorrelating features (modes) of the Markov process [1, 2, 4, 5].
Our second application is to compute _prediction functions_ of the general form
\[u(x)=\mathbb{E}_{x}\left[\Psi(X^{T})+\sum_{t=0}^{T-1}\Gamma(X^{t})\right], \tag{2}\]
where \(T\) is the first time that \(X^{t}\notin D\) for a given domain \(D\), and \(\Psi\) and \(\Gamma\) are functions associated with the escape event (rewards in reinforcement learning). Prototypical examples of prediction functions that appear in our numerical results are the mean first passage time (MFPT) from each \(x\):
\[m(x)=\mathbb{E}_{x}\left[T\right] \tag{3}\]
and the committor:
\[q(x)=\mathbb{E}_{x}\left[\mathbbm{1}_{B}(X^{T})\right]=\mathbb{P}_{x}\left[X ^{T}\in B\right], \tag{4}\]
where \(A\) and \(B\) are disjoint sets ("reactant" and "product" states) and \(D=(A\cup B)^{\mathsf{c}}\). The MFPT is important for estimating rates of transitions between regions of state space, while the committor can serve as a reaction coordinate [29, 34, 35, 36] and as a key ingredient in transition path theory statistics [24, 37, 38]. For any \(\tau>0\), the prediction function \(u(x)\) satisfies the linear equation
\[\left(\mathcal{I}-\mathcal{S}^{\tau}\right)u(x)=\mathbb{E}_{x}\left[\sum_{t=0}^ {\left(\tau\wedge T\right)-1}\Gamma(X^{t})\right] \tag{5}\]
for \(x\in D\), with boundary condition
\[u(x)=\Psi(x) \tag{6}\]
for \(x\notin D\). In (5), \(\mathcal{I}\) is the identity operator and
\[\mathcal{S}^{t}f(x)=\mathbb{E}_{x}\left[f(X^{t\wedge T})\right] \tag{7}\]
is the "stopped" transition operator [10]. We use the notation \(\tau\wedge T=\min\{\tau,T\}\).
Our specific goal is to solve both the eigenproblem and the prediction problem for \(X^{t}\) in high dimensions and without direct access to a model governing its evolution. Instead, we have access to trajectories of \(X^{t}\) of a fixed, finite duration \(\tau\). A natural and generally applicable approach to finding an approximate solution to the prediction problem is to attempt to minimize the residual of (5) over parameters \(\theta\) of a neural network \(u_{\theta}(x)\) representing \(u(x)\). For example, we recently suggested an algorithm that minimizes the residual norm
\[\frac{1}{2}\Big{\|}\left(\mathcal{I}-\mathcal{S}^{\tau}\right)u_{\theta}-r \Big{\|}_{\mu}^{2}, \tag{8}\]
where \(r(x)\) is the right-hand side of (5) and \(\mu\) is an arbitrary distribution of initial conditions \(X^{0}\) (boundary conditions were enforced by an additional term) [24]. The gradient of the residual norm in (8) with respect to neural-network parameters \(\theta\) can be written
\[\left\langle\left(\mathcal{I}-\mathcal{S}^{\tau}\right)u_{\theta}-r,\nabla_{ \theta}u_{\theta}\right\rangle_{\mu}-\left\langle\left(\mathcal{I}-\mathcal{S }^{\tau}\right)u_{\theta}-r,\mathcal{S}^{\tau}\nabla_{\theta}u_{\theta}\right \rangle_{\mu} \tag{9}\]
where \(\left\langle f,g\right\rangle_{\mu}=\int f(x)g(x)\,\mu(dx).\) Given a data set of initial conditions \(\left\{X_{j}^{0}\right\}_{j=1}^{n}\) drawn from \(\mu\) and a single sample trajectory \(\left\{X_{j}^{t}\right\}_{t=0}^{\tau}\) of \(X^{t}\) for each \(X_{j}^{0}\), the first term in the gradient (9) can be approximated without bias as
\[\left\langle\left(\mathcal{I}-\mathcal{S}^{\tau}\right)u_{\theta}-r,\nabla_{ \theta}u_{\theta}\right\rangle_{\mu}\approx\frac{1}{n}\sum_{j=1}^{n}\left(u_{ \theta}(X_{j}^{0})-u_{\theta}(X_{j}^{\tau\wedge T_{j}})-\sum_{t=0}^{(\tau \wedge T_{j})-1}\Gamma(X_{j}^{t})\right)\nabla_{\theta}u_{\theta}(X_{j}^{0}) \tag{10}\]
where \(T_{j}\) is the first time \(X_{j}^{t}\notin D\).
In contrast, unbiased estimation of the second term in (9) requires access to two independent trajectories of \(X^{t}\) for each sample initial condition since it is quadratic in \(\mathcal{S}^{\tau}\)[24; 31]. In the reinforcement learning community, TD methods were developed for the purpose of minimizing a loss of a very similar form to (8), and they perform a "semigradient" descent by following only the first term in (9) [31]. As in the semigradient approximation, the algorithms proposed in this paper only require access to inner products of the form \(\left\langle f,\mathcal{A}g\right\rangle_{\mu}\) for an operator \(\mathcal{A}\) related to \(\mathcal{T}^{\tau}\) or \(\mathcal{S}^{\tau}\), and they avoid terms that are nonlinear in \(\mathcal{A}\). As we explain, such inner products can be estimated using trajectory data. However, rather than attempting to minimize the residual directly by an approximate gradient descent, we view the eigenproblem and prediction problems through the lens of iterative numerical linear algebra schemes.
## 3 An Inexact Power Iteration
To motivate the iterative numerical linear algebra point of view, observe that the first term in (9) is the exact gradient with respect to \(\theta^{\prime}\) of the loss
\[\frac{1}{2}\big{\|}u_{\theta^{\prime}}-\mathcal{S}^{\tau}u_{\theta}-r\big{\|}_ {\mu}^{2}, \tag{11}\]
evaluated at \(\theta^{\prime}=\theta\). The result of many steps of gradient descent (later, "inner iterations") on this loss with \(\theta\) held fixed can then be viewed as an inexact Richardson iteration for (5), resulting in a sequence
\[u_{\theta^{\prime+1}}\approx\mathcal{S}^{\tau}u_{\theta^{\prime}}+r, \tag{12}\]
where, for each \(s\), \(u_{\theta^{s}}\) is a parametrized neural-network approximation of the solution to (5). To unify our discussion of the prediction and spectral estimation problems, it is helpful to observe that (12) can be recast as an equivalent inexact power iteration:
\[\bar{u}_{\theta^{s+1}}\approx\mathcal{A}\bar{u}_{\theta^{s}} \tag{13}\]
where
\[\bar{u}_{\theta}=\left(\begin{array}{c}1\\ u_{\theta}\end{array}\right)\quad\text{and}\quad\mathcal{A}=\left[\begin{array} []{cc}1&0\\ r&\mathcal{S}^{\tau}\end{array}\right]. \tag{14}\]
Note that \(\left(1,u\right)^{\top}\) is the dominant eigenfunction of \(\mathcal{A}\) and has eigenvalue equal to \(1\).
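A minimal sketch of one outer step of this inexact Richardson iteration might look as follows. We assume a `data.sample_batch()` helper returning initial points \(X^{0}\), stopped end points \(X^{\tau\wedge T}\), and the accumulated reward \(\sum\Gamma\); the names are illustrative, not the authors' code, and we further assume the frozen network already respects the boundary condition \(u=\Psi\) outside \(D\).

```python
import torch

def richardson_step(u_new, u_old, data, n_inner=100, lr=1e-3):
    """One outer step of the inexact Richardson iteration (12): with
    theta^s frozen in u_old, descend the loss (11) over theta' in u_new."""
    opt = torch.optim.Adam(u_new.parameters(), lr=lr)
    for _ in range(n_inner):
        # X^0, X^{tau ^ T}, and the accumulated reward along each trajectory
        x0, x_stop, reward = data.sample_batch()
        with torch.no_grad():
            target = u_old(x_stop).squeeze(-1) + reward   # S^tau u + r
        loss = 0.5 * ((u_new(x0).squeeze(-1) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    u_old.load_state_dict(u_new.state_dict())             # theta^{s+1} <- theta'
```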
Ref. [32] introduced an inexact power iteration to compute the stationary probability measure of \(\mathcal{T}^{\tau}\), i.e., its dominant left eigenfunction. As those authors note, an inexact power update such as (13) can be enforced using a variety of loss functions. In our setting, the \(L^{2}_{\mu}\) norm in (11) can be replaced by any other measure of the difference between \(u_{\theta^{\prime}}\) and \(\mathcal{S}^{\tau}u_{\theta}+r\), as long as an unbiased estimator of its gradient with respect to \(\theta^{\prime}\) is available. This flexibility is discussed in more detail in Section 5, and we exploit it in applications presented later in this article. For now, we focus instead on another important implication of this viewpoint: the flexibility in the form of the iteration itself.
We will see that when the spectral gap of \(\mathcal{A}\), the difference between its largest and second largest eigenvalues, is small, the inexact power iteration (or the equivalent Richardson iteration) described above fails to reach an accurate solution. The largest eigenvalue of \(\mathcal{A}\) in (14) is \(1\) and its next largest eigenvalue is the dominant eigenvalue of \(\mathcal{S}^{\tau}\). For rare-event problems the difference between these two eigenvalues is expected to be very small. Indeed, when \(X^{0}\) is drawn from the quasi-stationary distribution of \(X^{t}\) in \(D\), the logarithm of the largest eigenvalue of \(\mathcal{S}^{\tau}\) is exactly \(-\tau/\operatorname{\mathbb{E}}[T]\)[39]. Thus, when the mean exit time from \(D\) is large, we can expect the spectral gap of \(\mathcal{A}\) in (14) to be very small. In this case, classical convergence results tell us that exact power iteration converges slowly, with the largest contributions to the error coming from the eigenfunctions of \(\mathcal{S}^{\tau}\) with largest magnitude eigenvalues [33]. Iterative schemes that approximate multiple dominant eigenfunctions simultaneously, such as subspace iteration and Krylov methods, can converge much more rapidly[33]. In the next section, we introduce an inexact form of subspace iteration to address this shortcoming. Beyond dramatically improving performance on the prediction problem for rare-events when applied with \(\mathcal{A}\) in (14), it can also be applied with \(\mathcal{A}=\mathcal{T}^{\tau}\) to compute multiple dominant eigenfunctions and values of \(\mathcal{T}^{\tau}\) itself.
## 4 An Inexact Subspace Iteration
Our inexact subspace iteration for the \(k\) dominant eigenfunctions/values of \(\mathcal{A}\) proceeds as follows. Let \(\{\varphi_{\theta^{s}}^{a}\}_{a=1}^{k}\) be a sequence of \(k\) basis functions parametrized by \(\theta^{s}\) (these can be scalar or vector valued functions depending on the form of \(\mathcal{A}\)). We can represent this basis as the vector valued function
\[U_{\theta^{s}}=\left(\varphi_{\theta^{s}}^{1},\varphi_{\theta^{s}}^{2},\ldots, \varphi_{\theta^{s}}^{k}\right). \tag{15}\]
Then, we can obtain a new set of \(k\) basis functions by approximately applying \(\mathcal{A}\) to each of the components of \(U_{\theta^{s}}\):
\[U_{\theta^{s+1}}K^{s+1}\approx\mathcal{A}\,U_{\theta^{s}} \tag{16}\]
where \(K^{s+1}\) is an invertible, upper-triangular \(k\times k\) matrix that does not change the span of the components of \(U_{\theta^{s+1}}\) but is included to facilitate training. One way the approximate application of \(\mathcal{A}\) can be accomplished is by minimizing
\[\frac{1}{2}\sum_{a=1}^{k}\!\left\|\sum_{b=1}^{k}\varphi_{\theta}^{b}K_{ba}- \mathcal{A}\,\varphi_{\theta^{s}}^{a}\right\|_{\mu}^{2} \tag{17}\]
over \(\theta\) and \(K\) with \(\theta^{s}\) held fixed.
The eigenvalues and eigenfunctions of \(\mathcal{A}\) are then approximated by solving the finite-dimensional generalized eigenproblem
\[C^{\tau}W=C^{0}W\Lambda \tag{18}\]
where
\[C^{\tau}_{ab} =\langle\varphi_{\theta^{s}}^{a},\mathcal{A}\varphi_{\theta^{s}}^{ b}\rangle_{\mu} \tag{19}\] \[C^{0}_{ab} =\langle\varphi_{\theta^{s}}^{a},\varphi_{\theta^{s}}^{b}\rangle_{ \mu}, \tag{20}\]
each inner product is estimated using trajectory data, and \(W\) and \(\Lambda\) are \(k\times k\) matrices. The matrix \(\Lambda\) is diagonal, and the \(a\)-th eigenvalue \(\lambda_{a}\) of \(\mathcal{A}\) is approximated by \(\Lambda_{aa}\); the corresponding eigenfunction \(v_{a}\) is approximated by \(\sum_{b=1}^{k}W_{ab}\,\varphi_{\theta^{s}}^{b}\).
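For instance, the small generalized eigenproblem can be assembled and solved as below, given trajectory pairs \((X_{j}^{0},X_{j}^{\tau})\) and a `basis` function returning the \(k\) network outputs; this is a sketch of Eqs. (18)-(20) with names of our choosing.

```python
import numpy as np
from scipy.linalg import eig

def subspace_eigs(basis, X0, Xtau):
    """Assemble Eqs. (19)-(20) from trajectory pairs and solve Eq. (18).
    `basis` maps an (n, d) batch of configurations to (n, k) basis values."""
    Phi0, Phitau = basis(X0), basis(Xtau)
    n = Phi0.shape[0]
    C0 = Phi0.T @ Phi0 / n              # <phi^a, phi^b>_mu
    Ctau = Phi0.T @ Phitau / n          # <phi^a, T^tau phi^b>_mu, cf. Eq. (30)
    lam, W = eig(Ctau, C0)              # generalized problem C^tau W = C^0 W Lambda
    order = np.argsort(-lam.real)       # sort by decreasing eigenvalue
    return lam[order].real, W[:, order].real
```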
Even when sampling is not required to estimate the matrices in (19) and (20), the numerical rank of \(C^{\tau}\) becomes very small as the eigenfunctions become increasingly aligned with the single dominant eigenfunction. To overcome this issue, we apply an orthogonalization step between iterations (or every few iterations). Just as the matrices \(C^{0}\) and \(C^{\tau}\) can be estimated using trajectory data, the orthogonalization procedure can also be implemented approximately using data.
Finally, in our experiments we find it advantageous to damp large parameter fluctuations during training by mixing the operator \(\mathcal{A}\) with a multiple of the identity, i.e., we perform our inexact subspace iteration on the operator \((1-\alpha_{s})\mathcal{I}+\alpha_{s}\mathcal{A}\) in place of \(\mathcal{A}\) itself. This new operator has the same eigenfunctions as \(\mathcal{A}\). In our experiments, decreasing the parameter \(\alpha_{s}\) as the number of iterations increases results in better generalization properties of the final solution and helps ensure convergence of the iteration. For our numerical experiments we use
\[\alpha_{s}=\begin{cases}1&s<\sigma\\ 1/\sqrt{s+1-\sigma}&s\geq\sigma\end{cases} \tag{21}\]
where \(\sigma\) is a user chosen hyperparameter that sets the number of iterations performed before damping begins.
The details, including estimators for all required inner products, in the case of the eigenproblem (\(\mathcal{A}=\mathcal{T}^{\tau}\)) are given in Section 7 and Algorithm 1. For the prediction problem with \(\mathcal{A}\) as in (14), they are given in Section 8 and Algorithm 2.
## 5 Alternative Loss Functions
As mentioned above, the inexact application of the operator \(\mathcal{A}\) can be accomplished by minimizing loss functions other than (17). The key requirement for a loss function in the present study is that \(\mathcal{A}\) appears in its gradient only through terms of the form \(\left\langle f,\mathcal{A}g\right\rangle_{\mu}\) for some functions \(f\) and \(g\), so that the gradient can be estimated using trajectory data. As a result, we have flexibility in the choice of loss and, in turn, the representation of \(u\). In particular, we consider the representation \(u_{\theta}=\zeta(z_{\theta})\), where \(\zeta\) is an increasing function, and \(z_{\theta}\) is a function parameterized by a neural network. An advantage of doing so is that the function \(\zeta\) can restrict the output values of \(u_{\theta}\) to some range. For example, when computing a probability such as the committor, a natural choice is the sigmoid function \(\zeta(x)=(1+e^{-x})^{-1}\).
Our goal is to train a sequence of parameter values so that \(u_{\theta^{s}}\) approximately follows a subspace iteration, i.e., so that \(\zeta(z_{\theta^{s+1}})\approx\mathcal{A}u_{\theta^{s}}\). To this end, we minimize with respect to \(\theta\) the loss function
\[\mathbb{E}_{X^{0}\sim\mu}\left[V(z_{\theta})-z_{\theta}\mathcal{A}u_{\theta^{ s}}\right], \tag{22}\]
where \(V\) is an antiderivative of \(\zeta\), and \(\theta^{s}\) is fixed. The subscript \(X^{0}\sim\mu\) in this expression indicates that \(X^{0}\) is drawn from \(\mu\). Note that, as desired, \(\mathcal{A}\) appears in the gradient of (22) only in an inner product of the required form, and an approximate minimizer, \(\theta^{s+1}\), of this loss satisfies \(\zeta(z_{\theta^{s+1}})\approx\mathcal{A}u_{\theta^{s}}\). This general form of loss function is adapted from variational expressions for the divergence of two probability measures [32, 40].
The \(L_{\mu}^{2}\) loss in (17), which we use in several of our numerical experiments, corresponds to the choice \(\zeta(x)=x\) and \(V(x)=x^{2}/2\). The choice of \(\zeta(x)=(1+e^{-x})^{-1}\) mentioned above corresponds to \(V(x)=\log(1+e^{x})\); we refer to the loss in (22) with this choice of \(V\) as the "softplus" loss. We note that in the context of committor function estimation, in the limit of infinite lag time the "softplus" loss corresponds directly to the maximum log-likelihood loss (for independent Bernoulli random variables) that defines the logistic regression estimate of the probability of reaching \(B\) before \(A\). Logistic regression has previously been used in conjunction with data sets of trajectories integrated all the way until reaching \(A\) or \(B\) to estimate the committor function[41, 42, 43, 44, 45, 46].
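A sketch of this loss for committor estimation is shown below; the `target` argument stands for the single-trajectory estimate of \(\mathcal{A}u_{\theta^{s}}(X^{0})\) with boundary values substituted, which is our own packaging of the quantities above rather than the authors' interface.

```python
import torch
import torch.nn.functional as F

def softplus_loss(z_net, x0, target):
    """Loss (22) with zeta = sigmoid and V = softplus, so the committor
    estimate u = sigmoid(z) is confined to [0, 1].  `target` is the
    single-trajectory estimate of A u_{theta^s}(X^0): the frozen network's
    sigmoid output at X^{tau ^ T}, with 1_B substituted when the
    trajectory left D before time tau."""
    z = z_net(x0).squeeze(-1)
    # gradient is (sigmoid(z) - target) * dz/dtheta, so a minimizer
    # satisfies sigmoid(z_{theta^{s+1}}) ~ A u_{theta^s}
    return (F.softplus(z) - z * target).mean()
```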
## 6 Test Problems
We illustrate our methods with two well-characterized systems that exemplify features of molecular transitions. In this section, we provide key details of these systems.
Figure 1: Müller-Brown potential energy surface. The orange and red ovals indicate the locations of states \(A\) and \(B\) respectively when computing predictions. Contour lines are drawn every 1 \(\beta^{-1}\).
### Muller-Brown potential
The first system is defined by the Muller-Brown potential [47] (Figure 1):
\[V_{\rm MB}(y,z)=\frac{1}{20}\sum_{i=1}^{4}C_{i}\exp[a_{i}(y-y_{i})^{2}+b_{i}(y-y_{i})(z-z_{i})+c_{i}(z-z_{i})^{2}]. \tag{23}\]
The two-dimensional nature of this model facilitates visualization. The presence of multiple minima and saddlepoints that are connected by a path that does not align with the coordinate axes makes the system challenging for both sampling and analysis methods. In Sections 7.7.1 and 8.8.1, we use \(C_{i}=\{-200,-100,-170,15\}\), \(a_{i}=\{-1,-1,-6.5,0.7\}\), \(b_{i}=\{0,0,11,0.6\}\), \(c_{i}=\{-10,-10,-6.5,0.7\}\), \(y_{i}=\{1,-0.27,-0.5,-1\}\), \(z_{i}=\{0,0.5,1.5,1\}\). In Section 8.8.2, we tune the parameters to make transitions between minima rarer; there, the parameters are \(C_{i}=\{-250,-150,-170,15\}\), \(a_{i}=\{-1,-3,-6.5,0.7\}\), \(b_{i}=\{0,0,11,0.6\}\), \(c_{i}=\{-10,-30,-6.5,0.7\}\), \(y_{i}=\{1,-0.29,-0.5,-1\}\), \(z_{i}=\{0,0.5,1.5,1\}\). For prediction, we analyze transitions between the upper left minimum (\(A\)) and the lower right minimum (\(B\)) in Figure 1; these states are defined as
\[A =\{y,z:6.5(y+0.5)^{2}-11(y+0.5)(z-1.5)+6.5(z-1.5)^{2}<0.3\} \tag{24}\] \[B =\{y,z:(y-0.6)^{2}+0.5(z-0.02)^{2}<0.2\}.\]
To generate a data set, we randomly draw 50,000 initial conditions \(X_{j}^{0}\) uniformly from the region
\[\Omega=\{y,z:-1.5<y<1.0,\ -0.5<z<2.5,\ V_{\rm MB}(y,z)<12\} \tag{25}\]
and, from each of these initial conditions, generate one trajectory according to the discretized over-damped Langevin dynamics
\[X_{j}^{t}=X_{j}^{t-1}-\delta_{t}\,\nabla V_{\rm MB}(X_{j}^{t-1})+\sqrt{\delta _{t}\,2\beta^{-1}}\,\xi_{j}^{t} \tag{26}\]
where \(1\leq t\leq\tau\), the \(\xi_{j}^{t}\) are independent standard Gaussian random variables, and the timestep is \(\delta_{t}=0.001\). We use an inverse temperature of \(\beta=2\), and we vary \(\tau\) as indicated below (note that \(\tau\) is an integer number of steps, but we present our results for the Muller-Brown model in terms of the dimensionless scale used for \(\delta_{t}\) immediately above). For each test, we use independent random numbers and redraw the initial conditions because it is faster to generate the trajectories than to read them from disk in this case. All error bars are computed from the empirical standard deviation over 10 replicate data sets.
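For reference, this kind of data generation can be sketched as below; the gradient routine follows Eq. (23) and the update follows Eq. (26), though the function names and argument packing are our own.

```python
import numpy as np

def muller_brown_grad(x, C, a, b, c, y0, z0):
    """Gradient of the potential in Eq. (23); parameter arrays have length 4."""
    y, z = x[..., 0], x[..., 1]
    dy = np.zeros_like(y)
    dz = np.zeros_like(z)
    for i in range(4):
        e = (C[i] / 20.0) * np.exp(a[i] * (y - y0[i]) ** 2
                                   + b[i] * (y - y0[i]) * (z - z0[i])
                                   + c[i] * (z - z0[i]) ** 2)
        dy += e * (2.0 * a[i] * (y - y0[i]) + b[i] * (z - z0[i]))
        dz += e * (b[i] * (y - y0[i]) + 2.0 * c[i] * (z - z0[i]))
    return np.stack([dy, dz], axis=-1)

def sample_trajectory(x0, tau, params, dt=1e-3, beta=2.0, seed=0):
    """Euler-Maruyama discretization of overdamped Langevin dynamics, Eq. (26)."""
    rng = np.random.default_rng(seed)
    traj = np.empty((tau + 1,) + x0.shape)
    traj[0] = x0
    for t in range(1, tau + 1):
        noise = rng.standard_normal(x0.shape)
        traj[t] = (traj[t - 1] - dt * muller_brown_grad(traj[t - 1], **params)
                   + np.sqrt(2.0 * dt / beta) * noise)
    return traj
```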
To validate our results, we compare the neural-network results against grid-based references, computed as described in Section S4 of the Supplementary Material of Ref. [11] and the Appendix of Ref. [48] (our notation here follows the former more closely). The essential idea is that the terms in the infinitesimal generator of the transition operator can be approximated on a grid to second order in the spacing by expanding both the probability for transitions between sites and a test function. To this end, we define the transition matrix for neighboring sites \(x=(y,z)\) and \(x^{\prime}=(y\pm\delta_{x},z)\) or \((y,z\pm\delta_{x})\) on a grid with spacings \(\delta_{x}\):
\[P(x^{\prime}|x)=\frac{1}{1+e^{\beta[V(x^{\prime})-V(x)]}} \tag{27}\]
(this definition differs from that in Ref. [11] by a factor of \(1/4\), and we scale \(P-I\), where \(I\) is the identity matrix, accordingly to set the time units below). The diagonal entry \(P(x|x)\) is fixed to make the transition matrix row-stochastic. We use \(\delta_{x}=0.0125\); the grid is a rectangle with \(y\in[-1.5,1.0]\), and \(z\in[-0.5,2.0]\). The reference invariant subspaces are computed by diagonalizing the matrix \(2(P-I)/\beta\delta_{x}^{2}\) with a sparse eigensolver; we use scipy.sparse.linalg. We obtain the reference committor \(\hat{q}_{+}\) for grid sites in \((A\cup B)^{\mathsf{c}}\) by solving
\[{\rm diag}(\hat{1}_{(A\cup B)^{\mathsf{c}}})(P-I)\hat{q}_{+}=-{\rm diag}(\hat{1}_{(A\cup B)^{\mathsf{c}}})(P-I)\hat{1}_{B} \tag{28}\]
and for grid sites in \(A\cup B\) by setting \(\hat{q}_{+}=\hat{1}_{B}\). Here, \(\hat{q}_{+}\) and \(\hat{1}_{B}\) are vectors corresponding to the functions evaluated at the grid sites.
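A sketch of this grid-based reference computation, under the assumption of a precomputed neighbor list and boolean state masks (our own interface, not the paper's code), is:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def grid_committor(V, nbrs, inA, inB, beta=2.0):
    """Grid reference for the committor, Eqs. (27)-(28).  V: potential at n
    grid sites; nbrs[i]: indices of the neighbors of site i; inA, inB:
    boolean masks over sites."""
    n = len(V)
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in nbrs[i]:
            rows.append(i)
            cols.append(j)
            vals.append(1.0 / (1.0 + np.exp(beta * (V[j] - V[i]))))  # Eq. (27)
    P = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    P = P + sp.diags(1.0 - np.asarray(P.sum(axis=1)).ravel())  # row-stochastic
    L = P - sp.identity(n, format="csr")
    free = ~(inA | inB)                                        # sites in (A u B)^c
    q = np.zeros(n)
    q[inB] = 1.0
    # Eq. (28): restrict to free sites, move the boundary column to the RHS
    rhs = -np.asarray(L[free][:, inB].sum(axis=1)).ravel()
    q[free] = spsolve(L[free][:, free].tocsc(), rhs)
    return q
```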
### \(\text{AIB}_{9}\) helix-to-helix transition
The second system is a peptide of nine \(\alpha\)-aminoisobutyric acids (\(\text{AIB}_{9}\); Figure 2). Because AIB is achiral around its \(\alpha\)-carbon atom, \(\text{AIB}_{9}\) can form left- and right-handed helices with equal probabilities, and we study the transitions between these two states. This transition was previously studied using MSMs and long unbiased molecular dynamics simulations [49; 50]. \(\text{AIB}_{9}\) poses a stringent test due to the presence of many metastable intermediate states.
The states are defined in terms of the internal \(\phi\) and \(\psi\) dihedral angles. We classify an amino acid as being in the "left" state if its dihedral angle values are within a circle of radius \(25^{\circ}\) centered at \((41^{\circ},43^{\circ})\), that is
\[(\phi-41^{\circ})^{2}+(\psi-43^{\circ})^{2}\leq(25^{\circ})^{2}.\]
Amino acids are classified as being in the "right" state using the same radius, but centered instead at \((-41^{\circ},-43^{\circ})\). States \(A\) and \(B\) are defined by the amino acids at sequence positions 3-7 being all left or all right, respectively. We do not use the two residues on each end of \(\text{AIB}_{9}\) in defining the states as these are typically more flexible [50]. The states can be resolved by projecting onto dihedral angle principal components (dPCs; Figure 2, right) as described previously [51].
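A sketch of this state classification for a single frame, assuming the dihedrals are supplied in degrees, is:

```python
import numpy as np

def classify(phi, psi):
    """phi, psi: (9,) backbone dihedrals in degrees for one configuration."""
    left = (phi - 41.0) ** 2 + (psi - 43.0) ** 2 <= 25.0 ** 2
    right = (phi + 41.0) ** 2 + (psi + 43.0) ** 2 <= 25.0 ** 2
    core = slice(2, 7)            # sequence positions 3-7 (0-indexed)
    return bool(np.all(left[core])), bool(np.all(right[core]))  # in A, in B
```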
Following a procedure similar to that described in Ref. [50], we generate a data set of short trajectories. From each of the 691 starting configurations in Ref. [50], we simulate 10 trajectories of duration 20 ns with initial velocities drawn randomly from a Maxwell-Boltzmann distribution at a temperature of 300 K. The short trajectory data set thus contains 6,910 trajectories, corresponding to a total sampling time of 138.2 \(\mu\)s. We use a timestep of 4 fs together with a hydrogen mass repartitioning scheme [52], and configurations are saved every 40 ps. We employ the AIB parameters from Forcefield_NCAA [53] and the GBNeck2 implicit-solvent model [54]. Simulations are performed with the Langevin integrator in OpenMM 7.7.0 [55] using a friction coefficient of \(1\,\text{ps}^{-1}\). To generate a reference for comparison to our results, we randomly select 20 configurations from the data set above and, from each, run a simulation of 5 \(\mu\)s with the same simulation parameters. For all following tests on \(\text{AIB}_{9}\), batches consist of pairs of frames separated by \(\tau\) drawn randomly with replacement from the short trajectories (i.e., from all possible such pairs in the data set). From each frame, we use only the atom positions because the momenta decorrelate within a few picoseconds, which is much shorter than the lag times that we consider. However, in principle, the momenta could impact the prediction functions [56] and be used as neural-network inputs as well.
## 7 Spectral estimation
In this section, we provide some further numerical details for the application of our method to spectral estimation and demonstrate the method on the test problems. For our subspace iteration, we require estimators for inner products of the form \(\langle f,\mathcal{T}^{\tau}g\rangle_{\mu}\). For example, the gradient of the loss function (17) involves inner products of the form
\[\left\langle\nabla_{\theta}\varphi_{\theta}^{a},\mathcal{T}^{\tau}\varphi_{ \theta}^{b}\right\rangle_{\mu}. \tag{29}\]
For these, we use the unbiased data-driven estimator
\[\left\langle f,\mathcal{T}^{\tau}g\right\rangle_{\mu}\approx\frac{1}{n}\sum_{ j=1}^{n}f(X_{j}^{0})g(X_{j}^{\tau}). \tag{30}\]
As discussed in Section 4, applying the operator \(\mathcal{T}^{\tau}\) repeatedly causes each basis function to converge to the dominant eigenfunction and leads to numerical instabilities.
Figure 2: Helix-to-helix transition of \(\text{AIB}_{9}\). (left) Left- and right-handed helices, which we use as state \(A\) and \(B\), respectively, when computing predictions. Carbon, nitrogen, and oxygen atoms are shown in yellow, blue, and red, respectively; hydrogen atoms are omitted. (right) Potential of mean force constructed from the histogram of value pairs of the first two dihedral angle principal components; data are from the 20 trajectories of 5 \(\mu\)s that we use to construct reference statistics (see text). The left-handed helix corresponds to the left-most basin, and the right-handed helix corresponds to the right-most basin. Contour lines are drawn every 2 \(\beta^{-1}\) corresponding to a temperature of 300 K.
To avoid this, we orthogonalize the outputs of the networks with a QR decomposition at the end of each subspace iteration by constructing the matrix \(\Phi_{ja}=\varphi_{\theta}^{a}(X_{j}^{0})\) and then computing the factorization \(\Phi=QR\), where \(Q\) is an \(n\times k\) matrix with orthogonal columns and \(R\) is an upper triangular \(k\times k\) matrix. Finally, we set \(\tilde{\varphi}_{s}^{a}=\sum_{b=1}^{k}\varphi_{\theta}^{b}\left(R^{-1}N\right)_{ba}\), where \(N\) is a diagonal matrix with entries equal to the norms of the columns of \(\Phi\) (before orthogonalization). To ensure that the networks remain well-separated (i.e., the eigenvalues of \(C^{0}\) remain away from zero), we penalize large off-diagonal entries of \(K\) by adding to the loss
\[\gamma_{1}\|K-\text{diag}(K)\|_{\text{F}}^{2}, \tag{31}\]
where \(\gamma_{1}\) allows us to tune the strength of this term relative to others, and \(\|\cdot\|_{\text{F}}\) is the Frobenius norm. We control the scale of each network output using the strategy from Ref. [32]; that is, we add to the loss a term of the form
\[\gamma_{2}\sum_{a=1}^{k}(2\nu_{a}(\langle\varphi_{\theta}^{a},\varphi_{\theta} ^{a}\rangle_{\mu}-1)-\nu_{a}^{2}), \tag{32}\]
where we have introduced the conjugate variables \(\nu_{a}\) which we maximize with gradient ascent (or similar optimization). In general, our numerical experiments suggest that it is best to keep \(\gamma_{1}\) and \(\gamma_{2}\) relatively small. We find that stability of the algorithm over many subspace iterations is improved if the matrix \(K\) is set at its optimal value before each inner loop. To do this, we set
\[K_{1:i,i}=\arg\min_{c}\left\|\sum_{a=1}^{i}\varphi_{\theta}^{a}c_{a}-\mathcal{T}^{\tau}\tilde{\varphi}_{s}^{i}\right\|_{\mu}^{2}+\gamma_{2}\sum_{a=1}^{i-1}c_{a}^{2}. \tag{33}\]
The above minimization can be solved with linear least squares. Finally, we note that in practice any optimizer can be used for the inner iteration steps, though the algorithm below implements stochastic gradient descent. In this work, we use Adam [57] for all numerical tests. We summarize our procedure for spectral estimation in Algorithm 1.
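A minimal PyTorch sketch of the QR-based orthogonalization step described at the start of this section; it returns the coefficient matrix \(R^{-1}N\) used to recombine the network outputs into the new basis.

```python
import torch

def orthogonalization_coefficients(Phi: torch.Tensor) -> torch.Tensor:
    # Phi[j, a] = phi_theta^a(X_j^0): basis functions at the sampled points.
    # The new basis is phi_tilde^a = sum_b phi_theta^b * coeffs[b, a].
    norms = torch.linalg.norm(Phi, dim=0)  # column norms before orthogonalization
    Q, R = torch.linalg.qr(Phi)            # Phi = Q R, Q has orthonormal columns
    # coeffs = R^{-1} N, computed as a triangular solve for numerical stability
    return torch.linalg.solve_triangular(R, torch.diag(norms), upper=True)
```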
### Müller-Brown model
As a first test of our method, we compute the \(k=3\) dominant eigenpairs for the Müller-Brown model. Since we know that the dominant eigenfunction of the transition operator is the constant function \(v_{1}(y,z)=1\) with eigenvalue \(\lambda_{1}=1\), we directly include this function in the basis as a non-trainable function, i.e., \(\varphi_{\theta}^{1}(y,z)=1\). To initialize \(\tilde{\varphi}_{1}^{a}\) for each \(a>1\), we choose a standard Gaussian vector \((Y^{a},Z^{a})\in\mathbb{R}^{2}\) and set \(\tilde{\varphi}_{1}^{a}(y,z)=y\,Y^{a}+z\,Z^{a}\). This ensures that the initial basis vectors are well-separated and that the first QR step is numerically stable. Here and in all subsequent Müller-Brown tests, batches of trajectories are drawn from the entire data set with replacement. Other hyperparameters are listed in Table 1.
Figure 3 shows that we obtain good agreement between the estimate produced by the inexact subspace iteration in Algorithm 1 and reference eigenfunctions. Figure 4 (upper panels) shows how the corresponding eigenvalues vary with lag time; again there is good agreement with the reference. Furthermore, there is a significant gap between \(\lambda_{3}\) and \(\lambda_{4}\), indicating that a three-dimensional subspace captures the dynamics of interest for this system.
We compare the subspace that we obtain from our method with that from an MSM constructed from the same amount of data by using \(k\)-means to cluster the configurations into 400 states and counting the transitions between clusters. This is a very fine discretization for this system, and the MSM is sufficiently expressive to yield eigenfunctions in good agreement with the reference. The relative error of \(1-\lambda_{2}\) is comparable for the two methods (Figure 4, lower left). To compare two finite-dimensional subspaces, \(\mathcal{U}\) and \(\mathcal{V}\), we define the subspace distance as [4]
\[d(\mathcal{U},\mathcal{V})=\left\|(I-P_{\mathcal{U}})P_{\mathcal{V}}\right\|_ {\text{F}}, \tag{34}\]
where \(P_{\mathcal{U}}\) and \(P_{\mathcal{V}}\) denote the orthogonal projections onto \(\mathcal{U}\) and \(\mathcal{V}\), respectively, and \(\|\cdot\|_{\text{F}}\) is the Frobenius norm. Figure 4 (lower right) shows the subspace distances from the reference as functions of lag time. We see that the inexact subspace iteration better approximates the three-dimensional dominant eigenspace for moderate to long lag times, even though the eigenvalues are comparable.
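Equation (34) can be evaluated without forming the \(n\times n\) projectors explicitly; a minimal sketch, assuming the columns of `U` and `V` span the two subspaces:

```python
import numpy as np

def subspace_distance(U: np.ndarray, V: np.ndarray) -> float:
    # Eq. (34): d(U, V) = ||(I - P_U) P_V||_F.
    QU, _ = np.linalg.qr(U)  # orthonormal bases for the two subspaces
    QV, _ = np.linalg.qr(V)
    # Identity: ||(I - P_U) P_V||_F^2 = dim(V) - ||QU^T QV||_F^2
    val = QV.shape[1] - np.linalg.norm(QU.T @ QV, "fro") ** 2
    return float(np.sqrt(max(val, 0.0)))  # clip tiny negative round-off
```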
### AIB\({}_{9}\)
For the molecular test system, we compute the dominant five-dimensional subspace. We compare the inexact subspace iteration in Algorithm 1 with MSMs constructed on dihedral angles ("dihedral MSM") and on Cartesian coordinates ("Cartesian MSM"). We expect the dihedral MSM to be accurate given that the dynamics of AIB\({}_{\text{9}}\) are well-described by the backbone dihedral angles [49, 50], and we thus
use it as a reference. It is constructed by clustering the sine and cosine of each of the backbone dihedral angles (\(\phi\) and \(\psi\)) for the nine residues (for a total of \(2\times 2\times 9=36\) input features) into 1000 clusters using \(k\)-means and counting transitions between clusters. The Cartesian MSM is constructed by similarly counting transitions between 1000 clusters from the \(k\)-means algorithm, but the clusters are based on the Cartesian coordinates of all non-hydrogen atoms after aligning the backbone atoms of the trajectories, for a total of 174 input features. Because of the difficulty of clustering high-dimensional data, we expect the Cartesian MSM basis to give poor results. The neural network for the inexact subspace iteration receives the same 174 Cartesian coordinates as input features. We choose to use Cartesian coordinates rather than dihedral angles as inputs because doing so requires the network to identify nontrivial representations for describing the dynamics.
As in the Müller-Brown example, we use \(\varphi_{\theta}^{1}=1\) and a random linear combination of coordinate functions to initialize \(\tilde{\varphi}_{1}^{a}\) for \(a>1\). Other hyperparameters are listed in Table 1. With these choices, the neural-network training typically requires about 20 minutes on a single NVIDIA A40 GPU; this is much longer than the time required for diagonalization of the \(1000\times 1000\) MSM transition matrix, which is nearly instantaneous. However, the time for neural-network training is negligible compared with the time to generate the data set, which is the same for both approaches.
Taking the dihedral MSM as a reference, the Cartesian MSM systematically underestimates the eigenvalues (Figure 5). The subspace iteration is very accurate for the first four eigenvalues but the estimates for the fifth are low and vary considerably from run to run. A very small gap between \(\lambda_{4}\) and \(\lambda_{5}\) may contribute to the difficulty in estimating \(\lambda_{5}\). In Figure 6, we plot the first two non-trivial eigenfunctions (\(v_{2}\) and \(v_{3}\)), which align with the axes of the dPC projection. The eigenfunction \(v_{2}\) corresponds to the transition between the left- and right-handed helices; the eigenfunction \(v_{3}\) is nearly orthogonal to \(v_{2}\) and corresponds to transitions between intermediate states. It is challenging to visualize the remaining two eigenfunctions by projecting onto the first two dPCs because \(v_{4}\) and \(v_{5}\) are orthogonal to \(v_{2}\) and \(v_{3}\). The estimates for \(v_{2}\) are in qualitative agreement for all lag times tested (Figure 6 shows results for \(\tau\) corresponding to 40 ps), but the subspace iteration results are less noisy for the shortest lag times. Moreover, the estimate for \(v_{3}\) from subspace iteration agrees more closely with that from the dihedral MSM than does the estimate for \(v_{3}\) from the Cartesian MSM. The subspace distance for \(v_{2}\) and \(v_{3}\) between the subspace iteration and the dihedral MSM is 0.947, compared with a value of 0.969 for the subspace distance between the two MSMs. Together, our results indicate that the neural networks are able to learn the leading eigenfunctions and eigenvalues of the transition operator (dynamical modes) of this system despite being presented with coordinates that are not the natural ones for describing the dynamics.
## 8 Prediction
Inexact subspace iteration for \(\mathcal{A}\) in (14) is equivalent to performing the inexact Richardson iteration in (12) on the first basis function \(\varphi_{\theta}^{1}\) and then performing an inexact subspace iteration for the operator \(\mathcal{S}^{\tau}\) on the rest of the basis functions. The iteration requires unbiased estimators of the forms
\[\langle f,\mathcal{S}^{\tau}g\rangle_{\mu}\approx\frac{1}{n}\sum_{j=1}^{n}f(X_ {j}^{0})g(X_{j}^{\tau\wedge T_{j}}) \tag{35}\]
and
\[\langle f,r\rangle_{\mu}\approx\frac{1}{n}\sum_{j=1}^{n}f(X_{j}^{0})\sum_{t=0 }^{(\tau\wedge T_{j})-1}\Gamma(X_{j}^{t}), \tag{36}\]
where \(T_{j}\) is the first time \(X_{j}^{t}\) enters \(D^{\mathsf{c}}\) and \(r\) is the right-hand side of (5), as previously.
The Richardson iterate, \(\varphi_{\theta}^{1}\), must satisfy the boundary condition \(\varphi_{\theta}^{1}(x)=\Psi(x)\) for \(x\notin D\). The other basis functions should satisfy \(\varphi_{\theta}^{a}(x)=0\) for \(x\notin D\). In practice, we enforce the boundary conditions by explicitly setting \(\varphi_{\theta}^{1}(x)=\Psi(x)\) and \(\varphi_{\theta}^{a}(x)=0\) for \(a>1\) when \(x\notin D\).
Figure 3: First two non-trivial eigenfunctions of the Müller-Brown model. (top) Grid-based reference. (bottom) Neural-network subspace after ten subspace iteration steps, computed with \(\tau=300\) steps (i.e., \(0.3\,\delta_{t}^{-1}\)).

When the boundary condition is zero, as for the MFPT, we find an approximate solution of the form

\[u_{\theta}=\sum_{a=1}^{k}w_{a}\varphi_{\theta}^{a} \tag{37}\]

by solving the \(k\)-dimensional linear system
\[\left(C^{0}-C^{\tau}\right)w=p \tag{38}\]
where, for \(a,b\geq 1\),
\[C_{ab}^{t}=\langle\varphi_{\theta}^{a},\mathcal{S}^{t}\varphi_{\theta}^{b} \rangle_{\mu} \tag{39}\]
for \(t=0,\tau\), and
\[p_{a}=\left\langle\varphi_{\theta}^{a},\mathbb{E}_{x}\left[\rho(X)\right] \right\rangle_{\mu}. \tag{40}\]
Figure 4: Spectral estimation as a function of lag time (in units of \(\delta_{t}^{-1}\)) for the Müller-Brown model. (top left) Second eigenvalue. (top right) Third and fourth eigenvalues; only the reference fourth eigenvalue is shown to illustrate the spectral gap. (bottom left) Relative error in the first spectral gap (i.e., \(1-\lambda_{2}\)). (bottom right) Subspace distance between estimated and reference three-dimensional invariant subspaces.
Figure 5: First five eigenvalues of the transition operator for \(\text{AIB}_{9}\) as a function of lag time. (left) Comparison between eigenvalues computed using the dihedral MSM with 1000 clusters (solid lines) and the inexact subspace iteration (dashed lines). The shading indicates standard deviations over five trained networks for the subspace iteration. (right) Comparison between a dihedral MSM (solid lines) and Cartesian MSMs with 1000 clusters (dashed lines). The standard deviations for the Cartesian MSMs over five random seeds for \(k\)-means clustering are too narrow to be seen.
In (40), we introduce the notation
\[\rho(X)=\sum_{t=0}^{(\tau\wedge T)-1}\Gamma(X^{t}) \tag{41}\]
for use in Algorithm 2.
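For the zero-boundary-condition case, the solve in (38)-(40) reduces to a few sample averages over the short-trajectory data; a minimal sketch, where the arrays hold evaluations of the trained basis functions:

```python
import numpy as np

def prediction_weights(phi0: np.ndarray, phiT: np.ndarray, rho: np.ndarray) -> np.ndarray:
    # phi0[j, a] = phi_theta^a(X_j^0)
    # phiT[j, a] = phi_theta^a(X_j^{tau ^ T_j})   (stopped on first entry to D^c)
    # rho[j]     = accumulated source term along trajectory j, Eq. (41)
    n = phi0.shape[0]
    C0 = phi0.T @ phi0 / n    # Eq. (39) with t = 0
    Ctau = phi0.T @ phiT / n  # Eq. (39) with t = tau
    p = phi0.T @ rho / n      # Eq. (40)
    return np.linalg.solve(C0 - Ctau, p)  # Eq. (38): (C^0 - C^tau) w = p
```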
When the boundary condition is non-zero, as for the committor, we restrict (38) to a \((k-1)\)-dimensional linear system by excluding the indices \(a=1\) and \(b=1\) in (39) and (40) and setting
\[\rho(X)=\varphi_{\theta}^{1}(X^{\tau\wedge T})-\varphi_{\theta}^{1}(X^{0})+ \sum_{t=0}^{(\tau\wedge T)-1}\Gamma(X^{t}). \tag{42}\]
In this case the corresponding approximate solution is
\[u_{\theta}=\varphi_{\theta}^{1}+\sum_{a=2}^{k}w_{a}\varphi_{\theta}^{a}. \tag{43}\]
This approximate solution corresponds to the one given by dynamical Galerkin approximation [10; 11] with the basis \(\{\varphi_{\theta}^{a}\}_{a=2}^{k}\) and a "guess" function of \(\varphi_{\theta}^{1}\).
When the boundary conditions are zero, the orthogonalization procedure and the matrix \(K\) are applied to all basis functions as in Section 7. When the boundary conditions are non-zero, the orthogonalization procedure is only applied to the basis functions \(\{\varphi_{\theta}^{a}\}_{a=2}^{k}\), and \(K_{a1}=I_{a1}\). We summarize our procedure for prediction in Algorithm 2.
### Müller-Brown committor
Figure 6: First two non-trivial eigenfunctions of \(\mathrm{AIB}_{9}\) projected onto the first two dPCs (i.e., averaged for bins in the two-dimensional space shown). (top) MSM constructed on sine and cosine of dihedral angles with 1000 clusters and lag time corresponding to 40 ps. (middle) Inexact subspace iteration using Cartesian coordinates and the same lag time. (bottom) MSM constructed on Cartesian coordinates with 1000 clusters and the same lag time.

Figure 7: First eigenvalue of \(\mathcal{S}^{\tau}\) (second of \(\mathcal{A}\) in (14)) for the Müller-Brown model as a function of lag time (in units of \(\delta_{t}^{-1}\)). The gap between this eigenvalue and the dominant eigenvalue, which is one, determines the rate of convergence of the Richardson iteration.

In this section, we demonstrate the use of our method for prediction by estimating the committor for the Müller-Brown model with a shallow intermediate basin at \((-0.25,0.5)\) (Figure 1). Here the sets \(A\) and \(B\) are defined as in Eq. (24), and \(T\) is the time of first entrance to \(D^{\mathsf{c}}=A\cup B\). In this case, a one-dimensional subspace iteration (i.e., \(k=1\) in Algorithm 2) appears sufficient to accurately solve the prediction problem. Figure 7 shows the largest eigenvalue of the stopped transition operator \(\mathcal{S}^{\tau}\) (the second largest of \(\mathcal{A}\) in (14)) computed from our grid-based reference scheme (Section 6.6.1). Richardson iteration should converge geometrically in this eigenvalue [33], and so, for moderate lag times, we can expect our method to converge in a few dozen iterations. To initialize the algorithm we choose \(\tilde{\varphi}_{1}^{1}=\mathbb{1}_{B}\). All other hyperparameters are listed in Table 1.
We compare the estimate of the committor from our approach with that from an MSM constructed from the same amount of data by using \(k\)-means to cluster the configurations outside \(A\) and \(B\) into \(400\) states and counting the transitions between clusters. In addition to the root mean square error (RMSE) for the committor itself, we show the RMSE of
\[\text{logit}_{\varepsilon}(q)=\log\left(\frac{\varepsilon+q}{1+\varepsilon-q }\right) \tag{44}\]
for points outside \(A\) and \(B\). This function amplifies the importance of values close to zero and one. We include \(\varepsilon\) because we want to assign only a finite penalty if the procedure estimates \(q\) to be exactly zero or one; we use \(\varepsilon=e^{-20}\).
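A one-line sketch of (44):

```python
import numpy as np

def logit_eps(q: np.ndarray, eps: float = np.exp(-20)) -> np.ndarray:
    # Eq. (44): amplifies errors near q = 0 and q = 1 while assigning only a
    # finite penalty when the estimate is exactly 0 or 1
    return np.log((eps + q) / (1 + eps - q))
```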
Results as a function of lag time are shown in Figure 8. We see that the Richardson iterate is more accurate than the MSM for all but the shortest lag times. When using the \(L_{\mu}^{2}\) loss, the results are comparable, whereas the softplus loss allows the Richardson iterate to improve the RMSE of the logit function in (44) with no decrease in performance with respect to the RMSE of the committor. Results as a function of the size of the data set are shown in Figure 9 for a fixed lag time of \(\tau=0.1\delta_{t}^{-1}\). The Richardson iterate generally does as well or better than the MSM. Again, the differences are more apparent in the RMSE of the logit function in (44). By that measure, the Richardson iterate obtained with both loss functions is significantly more accurate than the MSM for small numbers of trajectories. The softplus loss maintains an advantage even
for large numbers of trajectories.
Figure 8: Committor prediction for the Müller-Brown system as a function of lag time (in units of \(\delta_{t}^{-1}\)). (left) Comparison of the inexact Richardson iteration using the \(L_{\mu}^{2}\) loss and an MSM with 400 states. (right) Same comparison using the softplus loss in place of the \(L_{\mu}^{2}\) loss.
Figure 9: Committor prediction for the Müller-Brown system as a function of the number of initial conditions for a fixed lag time of \(\tau=0.1\,\delta_{t}^{-1}\). (left) Comparison of inexact Richardson iteration using the \(L_{\mu}^{2}\) loss and an MSM with 400 states. (right) Same comparison using the softplus loss in place of the \(L_{\mu}^{2}\) loss.
### Accelerating convergence by incorporating eigenfunctions
As discussed in Section 3, we expect Richardson iteration to converge slowly when the largest eigenvalue of \(\mathcal{S}^{\tau}\), \(\lambda_{1}\), is close to \(1\). More precisely, the number of iterations required to reach convergence should scale with \(-1/\log\lambda_{1}=\mathbb{E}\left[T\right]/\tau\), the mean escape time from the quasi-stationary distribution to the boundary of \(D\) divided by the lag time. With this in mind, we can expect inexact Richardson iteration for the Müller-Brown model to perform poorly if we deepen the intermediate basin at \((-0.25,0.5)\) as in Figure 10 (top left). Again, the sets \(A\) and \(B\) are defined as in (24), and \(T\) is the time of first entrance to \(D^{\mathsf{c}}=A\cup B\). In this case, \(-1/\log\lambda_{1}\) is on the order of \(100\) for the lag times we consider and, as expected, inexact Richardson iteration converges slowly (Figure 10, bottom left). Estimates of the committor by inexact Richardson iteration do not reach the correct values even after hundreds of iterations (Figure 10, bottom right).
We now show that convergence can be accelerated dramatically by incorporating additional eigenfunctions of \(\mathcal{S}^{\tau}\) (i.e., \(k>1\) in Algorithm 2). For the Müller-Brown model with a deepened intermediate basin, the second eigenvalue of \(\mathcal{S}^{\tau}\) is roughly \(1\times 10^{-4}\) for a lag time of \(\tau=1000\) steps or \(1\,\delta_{t}^{-1}\) (while the first is near one as discussed above). We therefore choose \(k=2\) with \(\tilde{\varphi}_{1}^{2}\) initialized as a random linear combination of coordinate functions as in previous examples. We run the subspace iteration for four iterations, compute the committor as a linear combination of the resulting functions, and then refine this result with a further ten Richardson iterations (i.e., \(k=1\) with the starting vector as the output of the \(k=2\) subspace iteration). To combine the functions, we use a linear solve that incorporates memory (Algorithm 3) [58, 59]. We find that the use of memory improves the data efficiency substantially for poorly conditioned problems. For our tests here, we use two memory kernels, corresponding to \(\tau_{\text{mem}}=\lfloor\tau/4\rfloor\).
The bottom row of Figure 11 illustrates the idea of the subspace iteration. The second eigenfunction (Figure 11, center) is peaked at the intermediate. As a result, the two neural-network functions linearly combined by the Galerkin approach with memory can yield a good result for the committor (Figure 11, bottom right). Figure 12 compares the RMSE for the committor and the RMSE for the logit in (44) for Algorithm 2 with \(k=1\) (pure Richardson iteration) and \(k=2\) (incorporating the first non-trivial eigenfunction), and an MSM with \(400\) states. We see that the Richardson iteration suffers large errors at all lag times; as noted previously, this error is mainly in the vicinity of the intermediate. The MSM cannot accurately compute the small probabilities, but does as well as the subspace iteration in terms of RMSE.
### \(\text{AIB}_{9}\) prediction results
As an example of prediction in a high-dimensional system, we compute the committor for the transition between the left- and right-handed helices of \(\text{AIB}_{9}\) using the inexact Richardson iteration scheme (\(k=1\) in Algorithm 2) with the softplus loss function. Specifically, for this committor calculation, \(T\) is the time of first entrance to \(D^{\mathsf{c}}=A\cup B\) with \(A\) and \(B\) defined in Section 6.2. As before, we initialize \(\tilde{\varphi}_{1}^{1}=\mathbb{1}_{B}\).
To validate our results, we use the \(5\)\(\mu\)s reference trajectories to compute an empirical committor as a function of the neural network outputs, binned into intervals:
\[\bar{q}(s)=\mathbb{P}\left[X^{T}\in B\,\middle|\,u_{\theta}(X^{0})\in[s,s+\Delta s]\right] \tag{45}\]
for \(s\in[0,1-\Delta s]\). Here, we use \(\Delta s=0.05\). The overall error in the committor estimate is defined as
\[q\text{ error}=\left(\Delta s\sum_{n=0}^{1/\Delta s-1}\left[\bar{q}(n\Delta s )-n\Delta s\right]^{2}\right)^{1/2}. \tag{46}\]
While this measure of error can only be used when the data set contains trajectories of long enough duration to reach \(D^{\mathsf{c}}\), it has the advantage that it does not depend on the choice of projection that we use to visualize the results.
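A sketch of the empirical check in (45)-(46), assuming `u_pred[j]` holds the network committor at \(X_j^0\) and `hit_B[j]` indicates whether trajectory \(j\) first reached \(B\):

```python
import numpy as np

def committor_error(u_pred: np.ndarray, hit_B: np.ndarray, ds: float = 0.05) -> float:
    err2 = 0.0
    for n in range(int(round(1.0 / ds))):
        mask = (u_pred >= n * ds) & (u_pred < (n + 1) * ds)
        if mask.any():
            q_bar = hit_B[mask].mean()          # Eq. (45): empirical committor in bin n
            err2 += ds * (q_bar - n * ds) ** 2  # Eq. (46): squared deviation, weighted by ds
    return float(np.sqrt(err2))
```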
Results for the full data set with \(\tau\) corresponding to \(400\) ps are shown in Figure 13. The projection on the principal components is consistent with the symmetry of the system, and the predictions show good agreement with the empirical committors. As \(\tau\) decreases, the results become less accurate (Figure 14, top left); at shorter lag times we would expect further increases in the error.
We also examine the dependence of the results on the size of the data set by subsampling the short trajectories and then training neural networks on the reduced set of trajectories (Figure 14, top right). We find that the performance steadily drops as the number of trajectories is reduced and degrades rapidly (Figure 14, bottom) for the data sets subsampled more than \(20\)-fold, corresponding to about \(7\)\(\mu\)s of total sampling.
Figure 10: Richardson iteration for the committor converges slowly for a Müller-Brown potential with a deepened intermediate. (top left) Potential energy surface, with states \(A\) and \(B\) indicated. Contour lines are drawn every 1 \(\beta^{-1}\). (top right) Reference committor. (bottom left) Dominant eigenvalue as a function of lag time (in units of \(\delta_{t}^{-1}\)) from an MSM with 400 states, subspace iteration, and the grid-based reference. (bottom right) Example of the Richardson iteration after 400 iterations. Note the overfitting artifacts and lack of convergence near the intermediate state.

Finally, we compute the MFPT to reach the right-handed helix using the same data set. For the MFPT calculation, \(T\) is the time of first entrance to \(D^{\mathsf{c}}=B\). Note that the time of first entrance to \(B\) includes long dwell times in \(A\) and is expected to be much larger than the time of first entrance to \(A\cup B\).
We compare against an empirical estimate of the MFPT defined by
\[\bar{m}(s)=\mathbb{E}\left[T\,\middle|\,u_{\theta}(X^{0})\in[s,s+\Delta s]\right] \tag{47}\]
for \(s\in[0,m_{\max}-\Delta s]\), where \(\Delta s=3\) ns and \(m_{\max}=57\) ns. The overall error is defined analogously to Eq. (46).
Figure 11: Illustration of the subspace iteration for the Müller-Brown committor. (top left) Modified Müller-Brown potential. (top center) Reference second eigenfunction. (top right) Reference committor. (bottom left) Neural-network Richardson iterate after four iterations. (bottom center) First non-dominant eigenfunction obtained from the neural network after four iterations. (bottom right) Committor resulting from a linear combination of the Richardson iterate and second eigenfunction. Results shown are for \(\tau=1000\) steps (i.e., \(1\,\delta_{t}^{-1}\)).

Figure 12: Committor for the Müller-Brown potential with deepened intermediate as a function of lag time (in units of \(\delta_{t}^{-1}\)). (left) Comparison of RMSE for subspace iteration as described above, Richardson iteration (as in Section 8.1 but instead with 500 subspace iterations), and an MSM with 400 states. (right) RMSE of the logit function in (44).

In Figure 15, we show the MFPT obtained from Algorithm 2 with \(k=1\) (pure Richardson iteration) and \(k=5\). Both calculations use the \(L_{\mu}^{2}\) loss function. Initially we set \(\tilde{\varphi}_{1}^{1}\) equal to an arbitrary positive function (we use \(5\,\mathbb{1}_{A}\)) and \(\tilde{\varphi}_{1}^{a}\) for \(a>1\) to a random linear combination of coordinate functions. The horizontal line in Figure 15 indicates an MFPT of about 56 ns estimated from the long reference trajectories. We see that the algorithm with \(k=5\) converges much faster (note the differing scales of the horizontal axes) and yields accurate results at all lag times other than the shortest shown. The need to choose \(k>1\) for this MFPT calculation is again consistent with the theoretical convergence behavior of exact subspace iteration. Because the typical time of first entrance to \(B\) from points in \(A\) is very large, we expect the dominant eigenvalue of \(\mathcal{S}^{\tau}\) to be very near one when \(D=B^{\mathsf{c}}\). In contrast, the committor calculation benefits from the fact that the time of first entrance to \(A\cup B\) is much shorter, implying a smaller dominant eigenvalue of \(\mathcal{S}^{\tau}\) when \(D=\left(A\cup B\right)^{\mathsf{c}}\).
## 9 Conclusions
In this work we have presented a method for spectral estimation and rare-event prediction from short-trajectory data. The key idea is that we use the data as the basis for an inexact subspace iteration. For the test systems that we considered, the method not only outperforms high-resolution MSMs, but it can be tuned through the choice of loss function to compute committor probabilities accurately near the reactants, transition states, and products. Other than the Markov assumption, our method requires no knowledge of the underlying model and puts no restrictions on its dynamics.
As discussed in prior neural-network-based prediction work [24, 30], our method is sensitive to the quality and distribution of the initial sampling data. However, our work shares with Ref. [24] the major advantage of allowing the use of arbitrary inner products. This enables adaptive sampling of the state space [24, 60] and, together with the features described above, the application to observational and experimental data, for which the stationary distribution is generally unavailable.
In the present work, we focused on statistics of transition operators, but our method can readily be extended to solve problems involving their adjoint operators as well. By this means, we can obtain the stationary distribution as well as the backward committor. The combination of forward and backward predictions allows the analysis of transition paths using transition path theory without needing to generate full transition paths [37, 38, 48] and has been used to understand rare transition events in molecular dynamics [10, 13, 16, 17, 61, 62] and geophysical flows [63, 64, 65, 66, 67]. We leave these extensions to future work.

Figure 13: AIB\({}_{9}\) committor for the transition between left- and right-handed helices. (left) Averages of \(\mathbb{1}_{B}(X^{T})\) for initial conditions in bins in the first two dPCs computed from 20 long (5 \(\mu\)s) trajectories. (middle) Averages of representative neural-network committors trained on the data set of 6,910 short (20 ns) trajectories; \(\tau\) corresponds to 400 ps. (right) Comparison between empirical committors (as defined in (45)) and the neural-network committors (trained as for the middle panel). Error bars indicate standard deviations over ten different initializations of the neural-network parameters.

Figure 14: AIB\({}_{9}\) committor for the transition between left- and right-handed helices, as functions of lag time (in ps) and number of initial conditions. (top left) Error in the committor as a function of lag time (in ps). Shading indicates the standard deviation over ten different initializations of the neural-network parameters. (top right) Error in the committor as a function of the number of initial conditions with \(\tau\) corresponding to 160 ps. Shading indicates the standard deviation over ten different random samples of the trajectories. (bottom) Comparison between empirical committors and neural-network committors trained on data sets with (left) 1/2 and (right) 1/20 of the short trajectories. Error bars indicate standard deviations over ten random samples of the trajectories.
In cases in which trajectories that reach the reactant and product states are available, it would be interesting to compare our inexact iterative schemes against schemes for committor approximation based on logistic regression and related approaches [35, 41, 46, 68]. These schemes are closely related to what is called "Monte-Carlo" approximation in reinforcement learning [31], and also to the inexact Richardson iteration that we propose here with \(\tau\to\infty\).

Figure 15: MFPT between left- and right-handed helices for the \(\text{AIB}_{9}\) system. (top left) Convergence of Richardson iteration. (top right) Convergence of a five-dimensional subspace iteration. (bottom left) MFPT after 100 and 200 subspace iterations as a function of lag time. Shading indicates standard deviations over ten different initializations of the neural-network parameters. (bottom right) Overall error in MFPT.

Figure 16: \(\text{AIB}_{9}\) MFPT to the right-handed helix. (left) Averages of the time to next reach \(B\) for initial conditions in bins in the first two dPCs computed from 20 long (5 \(\mu\)s) trajectories. (middle) Averages of representative neural-network MFPTs trained on the data set of 6,910 short (20 ns) trajectories; \(\tau\) corresponds to 400 ps. (right) Comparison between empirical MFPTs (as defined in (47)) and the neural-network MFPTs (trained as for the middle panel). Error bars indicate standard deviations over ten different initializations of the neural-network parameters.
We have seen that temporal difference (TD) learning, a workhorse for prediction in reinforcement learning, is closely related to an inexact form of Richardson iteration. Variants such as TD(\(\lambda\)) have similar relationships to inexact iterative schemes. As we showed, subspace iteration is a natural way of addressing slow convergence. We thus anticipate that our results have implications for reinforcement learning, particularly in scenarios in which the value function depends on the occurrence of some rare event. Finally, we note that our framework can be extended to a wider range of iterative numerical linear algebra algorithms. In particular, Krylov or block Krylov subspace methods may offer further acceleration. In fact, very recently an approach along these lines was introduced for value-function estimation in reinforcement learning [69].
## Authors' contributions
J.S., S.C.G., A.R.D., and J.W. conceptualized research. J.S. developed the method. C.L. adapted the linear solve used for prediction from Refs. 58 and 59. J.S. and S.C.G. performed the numerical tests. A.R.D. and J.W. supervised the research. All authors wrote and edited the manuscript.
###### Acknowledgements.
We are grateful to Arnaud Doucet for pointing our attention to TD methods and the inexact power iteration in Ref. 32, both of which were key motivations for this work. We also thank Joan Bruna for helpful conversations about reinforcement learning. This work was supported by National Institutes of Health award R35 GM136381 and National Science Foundation award DMS-2054306. S.C.G. acknowledges support by the National Science Foundation Graduate Research Fellowship under Grant No. 2140001. J.W. acknowledges support from the Advanced Scientific Computing Research Program within the DOE Office of Science through award DE-SC0020427. This work was completed in part with resources provided by the University of Chicago Research Computing Center, and we are grateful for their assistance with the calculations. "Beagle-3: A Shared GPU Cluster for Biomolecular Sciences" is supported by the National Institute of Health (NIH) under the High-End Instrumentation (HEI) grant program award 1S10OD028655-0.
## Data availability statement
The data that support the findings of this study are available within the article. Code implementing the algorithms and a Jupyter notebook illustrating use of the method on the Müller-Brown example are available at [https://github.com/dinner-group/inexact-subspace-iteration](https://github.com/dinner-group/inexact-subspace-iteration).
|
2305.14585 | Faithful and Efficient Explanations for Neural Networks via Neural
Tangent Kernel Surrogate Models | A recent trend in explainable AI research has focused on surrogate modeling,
where neural networks are approximated as simpler ML algorithms such as kernel
machines. A second trend has been to utilize kernel functions in various
explain-by-example or data attribution tasks. In this work, we combine these
two trends to analyze approximate empirical neural tangent kernels (eNTK) for
data attribution. Approximation is critical for eNTK analysis due to the high
computational cost to compute the eNTK. We define new approximate eNTK and
perform novel analysis on how well the resulting kernel machine surrogate
models correlate with the underlying neural network. We introduce two new
random projection variants of approximate eNTK which allow users to tune the
time and memory complexity of their calculation. We conclude that kernel
machines using approximate neural tangent kernel as the kernel function are
effective surrogate models, with the introduced trace NTK the most consistent
performer. Open source software allowing users to efficiently calculate kernel
functions in the PyTorch framework is available
(https://github.com/pnnl/projection\_ntk). | Andrew Engel, Zhichao Wang, Natalie S. Frank, Ioana Dumitriu, Sutanay Choudhury, Anand Sarwate, Tony Chiang | 2023-05-23T23:51:53Z | http://arxiv.org/abs/2305.14585v5 | # Robust Explanations for Deep Neural Networks via Pseudo Neural Tangent Kernel Surrogate Models
###### Abstract
One of the ways recent progress has been made on explainable AI is via explain-by-example strategies, specifically through data attribution tasks. The feature spaces used to attribute decisions to training data, however, have not been compared against one another as to whether they form a truly representative surrogate model of the neural network (NN). Here, we demonstrate the efficacy of surrogate linear feature spaces for neural networks through two means: (1) we establish that a normalized pseudo neural tangent kernel (pNTK) is more correlated with the neural network decision functions than embedding-based and influence-based alternatives in both computer vision and large language model architectures; (2) we show that the attributions created from the normalized \(\mathrm{pNTK}\) more accurately select perturbed training data in a data poisoning attribution task than these alternatives. Based on these observations, we conclude that kernel linear models are effective surrogate models across multiple classification architectures and that pNTK-based kernels are the most appropriate surrogate feature space of all kernels studied.
## 1 Introduction
Explainability remains a critical open problem for deep neural networks and the applications built upon them [23]. Explain-by-example [54; 35] has emerged as a major technique for generating explanations; this technique works by attributing decisions to specific training datapoints and then visualizing the highest-attributed training data as exemplars of model behavior [21]. Kernel functions measure the similarity between individual data points via an inner product in a given feature space [2]. If a space can be found where higher-similarity datapoints are more influential on the decision-making process, then we can use kernel functions for the data attribution task. Thus, kernel-based approaches have become a natural choice for data attribution tasks, and a wide range of feature spaces have been studied in the literature [21; 1; 38; 27]; however, each feature space represents a different approximation of the neural network, so their comparative evaluation is a major topic of study [16]. In this work, we propose a methodology that achieves a higher quality of approximation than state-of-the-art methods and subsequently improves performance on data attribution-based explanation-by-example tasks.
We measure the quality of a kernel-based feature representation by assessing how well a kernel generalized linear model (kGLM) [18] approximates the softmax probabilities of the original neural network using a rank-correlation score [38]. A faithful reconstruction of the softmax probabilities
confirms the kernel model's ability to serve as an accurate surrogate model to the neural network, so we can use the learned weighted attributions from our analytically simpler kGLM as the attributions for the complex neural network. Recent research on the neural tangent kernel (NTK) [20] shows that kernel models computed from finite-width architectures achieve comparable performance to the neural network itself [27; 49]; however, computing the full NTK is computationally intensive [33]. Approximations of the full NTK such as TRAK [38] seek to alleviate this problem by projecting the feature space of the NTK down to a smaller dimension at the cost of requiring many model instances to achieve a high-fidelity surrogate model. In contrast, we utilize an approximation referred to as the \(\mathrm{pNTK}\)[31] that allows us to analyze a single trained model instance with a single kernel [27]. Considering that it is common to use a single model instance with pre-trained weights (e.g., foundation models [7]), our approach brings in dual benefits: 1) we can explain the single model instance rather than the class of models that share the same architecture. Given the known variation between model instances [32], it is possible that different instances need different explanations. 2) Kernels for the most widely used foundation model instances could be pre-computed and disseminated to the community for analysis.
#### Contributions
We make three major contributions in this work:
1. We establish that kGLM based on a normalized pseudo Neural Tangent Kernel (pNTK) provides an approximation of deep neural networks including both computer vision and large language models.
2. We use the correlation between the kernel surrogate model and the neural network to evaluate different kernel functions as a feature space and demonstrate the superiority of pNTK-based kGLM approach.
3. We demonstrate that a pNTK-based kGLM offers higher performance for explanation-by-example tasks. We design an adversarial learning experiment and demonstrate that our approach can retrieve poisoned data samples with higher accuracy than competing methods.
#### Related Work
**Surrogate models for explaining model behavior.** Recent work in explainable AI has focused on determining when neural networks are exactly equivalent to other common ML algorithms [25; 5; 45], including kernel machines [20; 12]. It has been shown that infinitely wide neural networks are equivalent to a kernel machine trained with feature space chosen as the neural tangent kernel [20]. These infinitely wide models, however, do not replicate the feature learning behavior seen in finite-width networks [9; 55; 50]. Subsequently, researchers turned to investigating properties of finite-width models with the NTK computed at various checkpoints and/or after training. This framework was used to explore inductive biases [34], feature learning [43], learning dynamics [14; 4], and adversarial robustness [47; 28]. Critically, kernel support vector machines (via the empirical NTK calculated after training) were also shown to achieve the same test accuracy as the underlying NN [4; 27; 49]. Our work extends this last thread by evaluating whether kGLMs approximate the underlying neural network itself.
**Kernels for attribution.** Kernel functions with different feature spaces have been proposed to explain the behavior of a neural network. Influence functions were introduced to perform data attribution [21] but require computing and inverting the Hessian, which is computationally expensive. As such, a scalable approximation (TraceIn [42; 1]) has since been developed. Other works have investigated data attribution via the "data counterfactual" task, which predicts the behavior of a class of models when a subset of training data is removed from the dataset, as a measure of the importance of those training data to the model [19]. Park et al. [38] (hereafter Trak) showed that many instances of models could be combined with a scalable NTK approximation to perform the data counterfactual task. We report performance comparisons with Trak [38] and TraceIn [1] as choices of kernel space and demonstrate the superiority of the \(\mathrm{pNTK}\)-based approach in our single-model-instance regime.
**Evaluating kernel attribution.** In this paper, we use two evaluation strategies. The first focuses on evaluating the accuracy of the surrogate model, while the second evaluates its performance on data-attribution tasks. For the former, our primary evaluation methodology uses rank-based correlation, which was suggested in contemporaneous work for evaluating data counterfactual models [38]. For the data attribution task, we follow the methodology in [46] to evaluate the model via precision and recall in tracing decisions on poisoned test data back to poisoned training data. Previous work on kernel evaluation for data attribution did not use a surrogate-model-based approach; instead, it relied upon whether the attributions trace to training data of the correct class [16]. We believe that the surrogate model framework provides a superior approach because the surrogate model approximates the neural network behavior, e.g., predicting the same correct or incorrect class as the neural network, point for point.
## 2 Background
**Notations.** We use boldface text to denote vectors (lower case) and matrices (upper case). The Hadamard (element-wise) product is written as \(\odot\), and unless indicated otherwise, the norm \(\left\|\cdot\right\|\) is the Euclidean norm.
**Neural networks.** We consider the supervised classification problem where a neural network denoted as \(F(\mathbf{x}\,;\mathbf{\theta})\), parameterized by the vector \(\mathbf{\theta}\), is learned via iterative first-order optimization to minimize the cross-entropy loss between a target label vector and the softmax probability vector evaluated on a set of data inputs \(\mathbf{x}\) and labels \(\mathbf{z}\in\mathbb{R}^{C}\). We represent the input as the matrix \(\mathbf{X}=[\mathbf{x}_{1},\dots,\mathbf{x}_{N}]\in\mathbb{R}^{N\times p}\), with \(p\) the original dimensionality of the inputs and \(N\) the number of training samples. We denote the \(c\)-th scalar output (i.e., the \(c\)-th output neuron) of the network by \(F^{c}\). The output vector of the neural network (i.e., the final activations) is fed to the softmax function \(\sigma\):
\[\sigma(F(\mathbf{x}\,;\mathbf{\theta}))^{c}=\frac{\exp(F^{c}(\mathbf{x}\,;\mathbf{\theta}))}{\sum_{c^{\prime}=1}^{C}\exp(F^{c^{\prime}}(\mathbf{x}\,;\mathbf{\theta}))}.\]
The output of the softmax function has the property \(\sum_{c=1}^{C}\sigma(F(\mathbf{x}\,;\mathbf{\theta}))^{c}=1\), so we interpret it as a probability mass function in which \(\sigma(F(\mathbf{x}\,;\mathbf{\theta}))^{c}\) represents the probability of class \(c\). For all the analysis in this paper, we evaluate the network function and calculate kernels at the final trained weight vector.
**Kernel methods.** Kernel methods implicitly map the data vector \(\mathbf{x}\) to a feature vector \(\rho(\mathbf{x})\) in a higher-dimensional Hilbert space \(\mathcal{V}\) for which the kernel function \(\mathbf{\kappa}(\mathbf{\cdot},\mathbf{\cdot})\) evaluates the inner product in \(\mathcal{V}\). With some abuse of notation, we will write \(\mathbf{\kappa}(\mathbf{x},\mathbf{X})\in\mathbb{R}^{N}\) for the vector whose \(j\)-th entry is \(\mathbf{\kappa}(\mathbf{x},\mathbf{x}_{j})\) and \(\mathbf{\kappa}(\mathbf{X},\mathbf{X})\in\mathbb{R}^{N\times N}\) for the matrix whose \((i,j)\)-th entry is \(\mathbf{\kappa}(\mathbf{x}_{i},\mathbf{x}_{j})\).
## 3 Kernel-based surrogate models
Given an input \(\mathbf{x}\) and a neural network \(F(\mathbf{x}\,;\mathbf{\theta})\) as defined above, our surrogate modeling goal is to learn a kernel-based approximation function, denoted \(\mathrm{kGLM}(\mathbf{x})\), such that \(\sigma(\mathrm{kGLM}(\mathbf{x}))=\sigma(F(\mathbf{x}\,;\mathbf{\theta}))\). In the following subsections, we first describe the framework to learn \(\mathrm{kGLM}\left(\cdot\right)\) in Section 3.1. We introduce the pNTK and other kernel functions in Sections 3.2 and 4.3. Finally, we introduce the functions for evaluating the approximation quality of \(\mathrm{kGLM}\left(\cdot\right)\) in Section 3.3.
### Kernel generalized linear models
Given an input sample \(\mathbf{x}\), we define the approximation provided by kGLM as \(\mathrm{kGLM}(\mathbf{x})=\mathbf{W}\mathbf{\kappa}(\mathbf{x},\mathbf{X})+\mathbf{b}\), where \(\mathbf{W}\in\mathbb{R}^{C\times N}\) is a learnable weight matrix, \(\mathbf{\kappa}\) is the kernel function, and \(\mathbf{b}\in\mathbb{R}^{C}\) is a learnable bias vector.
We make decisions from the kGLM by mapping the computed final activations to softmax confidences, just as we do for a standard neural-network-based classification task. The parameters \(\mathbf{W}\) and \(\mathbf{b}\) are iteratively updated using an optimizer to minimize the cross-entropy loss on the same training dataset on which the neural network is trained. This setup extends to multi-class classification tasks as well; the decision function for such tasks is commonly computed from the softmax vector, so it is sufficient that the softmax vector computed from the kGLM equal the softmax vector computed from the neural network.
For any choice of \(\mathbf{\kappa}\) and trained kGLM, the attribution for decision class \(c\) for a test input \(\mathbf{x}\) to a training datapoint \(\mathbf{x}_{i}\) is given by \(A(\mathbf{x},\mathbf{x}_{i})^{c}=\mathbf{W}_{c,i}\ \mathbf{\kappa}(\mathbf{x},\mathbf{x}_{i})+\frac{\mathbf{b}_{c}}{N}\). The \(\frac{\mathbf{b}_{c}}{N}\) term is necessary to ensure that the sum over the attributions for the entire training dataset is equal to the kGLM's final activation for class \(c\): \(\sum_{i=1}^{N}A(\mathbf{x},\mathbf{x}_{i})^{c}=\mathrm{kGLM}(\mathbf{x},\mathbf{X})^{c}\). Note that if the kGLM is a perfect surrogate model, the softmax function applied to the vector created from each class attribution will equal the neural network confidence in each class. Consequently, we will have decomposed the reasoning for the neural network's specific confidence in each class to a linear combination of similarities between \(\mathbf{x}\) and each training datapoint in \(\mathbf{X}\). The next sub-section describes the kernel function used to compute this similarity measure. It is _critically important_ for the similarity measure to be model-centric. For example, given two images \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\), the measure should capture the similarity of their respective classification by accounting for \(\mathbf{\theta}\) rather than simply raw data level similarity between the images.
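A minimal sketch of this attribution decomposition for a single test point, where `k_row` holds \(\mathbf{\kappa}(\mathbf{x},\mathbf{x}_{i})\) for every training point:

```python
import numpy as np

def class_attributions(W: np.ndarray, b: np.ndarray, k_row: np.ndarray, c: int) -> np.ndarray:
    # A(x, x_i)^c = W[c, i] * kappa(x, x_i) + b[c] / N; summing over the
    # training set recovers the kGLM final activation for class c
    N = k_row.shape[0]
    return W[c] * k_row + b[c] / N
```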
### Pseudo neural tangent kernel
We define the kernel function \(\mathrm{pNTK}\) as follows: for any data samples \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\), define the Jacobian of the neural network with respect to \(\mathbf{\theta}\) at datapoint \(\mathbf{x}_{i}\) by
\[\mathbf{G}(\mathbf{x}_{i};\mathbf{\theta})=\sum_{c=1}^{C}\nabla_{\mathbf{\theta}}F^{c}(\mathbf{x} _{i};\mathbf{\theta}). \tag{1}\]
Then the \(\mathrm{pNTK}\) evaluated at datapoints \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) is given by
\[\mathrm{pNTK}(\mathbf{x}_{i},\mathbf{x}_{j})=\frac{\langle\mathbf{G}(\mathbf{x}_{i};\mathbf{ \theta}),\mathbf{G}(\mathbf{x}_{j};\mathbf{\theta})\rangle}{\|\mathbf{G}(\mathbf{x}_{i};\mathbf{\theta })\|\|\mathbf{G}(\mathbf{x}_{j};\mathbf{\theta})\|}. \tag{2}\]
We provide more details about these definitions in Appendix B, where we also compare the \(\mathrm{pNTK}\) to the limiting NTK. We explore the intuition behind this choice of kernel in Figure 1. The normalization in the denominator of Eq. (2) makes the \(\mathrm{pNTK}\) a kernel of cosine-similarity values. It has been suggested that this normalization helps smooth out kernel mass over the entire training dataset [1].
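A minimal PyTorch sketch of Eqs. (1)-(2); computing the full kernel this way costs one backward pass per datapoint, and storing the flattened gradients is the main memory consideration.

```python
import torch

def summed_jacobian(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Eq. (1): G(x; theta) = sum_c grad_theta F^c(x; theta)
    out = model(x.unsqueeze(0)).sum()  # sum over the C output neurons
    grads = torch.autograd.grad(out, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.flatten() for g in grads])

def pntk_entry(model: torch.nn.Module, xi: torch.Tensor, xj: torch.Tensor) -> torch.Tensor:
    # Eq. (2): cosine similarity between the summed Jacobian vectors
    gi, gj = summed_jacobian(model, xi), summed_jacobian(model, xj)
    return torch.dot(gi, gj) / (gi.norm() * gj.norm())
```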
### Evaluating the fidelity of surrogate models
Finally, we introduce the measures used to evaluate the approximation quality of the surrogate model. Specifically, we elaborate on our use of the Kendall-\(\tau\) rank correlation measure to model the pair-wise relationship between the softmax outputs returned by the neural network and its surrogate model on a sample-by-sample basis.

Figure 1: **Geometric intuition behind the pNTK.** A neural network function is evaluated at two points, creating surfaces \(F(\mathbf{x}_{i}\,;\mathbf{\theta})\) and \(F(\mathbf{x}_{j}\,;\mathbf{\theta})\). These surfaces are shown with a tangent hyperplane at the same point (\(\mathbf{\theta}\)) in parameter space, coinciding with the end of training. The Jacobian vector defines the tangent hyperplane's orientation in parameter space. The \(\mathrm{pNTK}\) is a kernel whose \((i,j)\)-th element is the cosine angle between averaged Jacobian vectors. The more similar the local geometry between \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\), the higher the value of \(\mathrm{pNTK}_{i,j}\).
**Kendall-\(\tau\) rank correlation.** For a sequence of observations \(S_{\tau}=\{(x_{1},y_{1}),\ldots,(x_{N},y_{N})\}\), we say that a pair \((x_{i},y_{i})\) and \((x_{j},y_{j})\) with \(i\neq j\) is concordant if either both \(x_{i}>x_{j}\) and \(y_{i}>y_{j}\) or both \(x_{i}<x_{j}\) and \(y_{i}<y_{j}\). Otherwise, the pair is discordant. We count the total numbers of concordant and discordant pairs, respectively denoted \(\mathrm{NC}\) and \(\mathrm{ND}\). Then, Kendall-\(\tau\) (hereafter just \(\tau\)) is defined as \(\tau(S_{\tau})=\frac{\mathrm{NC}-\mathrm{ND}}{\mathrm{NC}+\mathrm{ND}}\).
To assess the viability of a surrogate model, we compute \(\tau\) between the softmax probability of the neuron representing the correct class, \(\sigma(F(\mathbf{x}\,;\mathbf{\theta}))^{c}\), and the kGLM softmax probability for the output representing the correct class, \(\sigma(\mathrm{kGLM}(\mathbf{x}))^{c}\).
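In practice this is a single call to `scipy.stats.kendalltau`; a sketch:

```python
from scipy.stats import kendalltau

def surrogate_fidelity(p_kglm, p_nn) -> float:
    # Kendall-tau between the kGLM and neural-network softmax probabilities
    # for the correct class, one entry per test point
    tau, _ = kendalltau(p_kglm, p_nn)
    return tau
```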
Kendall-\(\tau\) is used for a few reasons. First, \(\tau\) has a range \([-1,1]\) with \(\pm\) 1 representing a monotonic relationship and a value of 0 representing no correlation. Furthermore, if the relationship between the kGLM and neural network is strictly monotonic, then an invertible mapping function exists between the kGLM softmax probabilities and the neural network's [6]. In Appendix G, we demonstrate how to find these mapping functions with iterative optimizers [48]. Fitting and then applying these invertible mapping functions makes the kGLM linearly correlated to the neural network. Finally, rank-correlations have been suggested by contemporaneous groups [38], making this choice consistent across our methodologies.
**Test accuracy differential (TAD).** Previous works evaluated whether kernel models could replicate the performance of underlying neural networks [3; 26; 27; 49]. Having the same test accuracy as the underlying neural network is a necessary but insufficient condition for equivalence with the neural network, which gives us a preference for \(\tau\). Nonetheless, we track the test accuracy differential, or TAD, given by \(\mathrm{TestAcc}_{\mathrm{kGLM}}-\mathrm{TestAcc}_{\mathrm{NN}}\), to demonstrate that kGLMs have similar performance to the underlying neural network. A value of \(0\) is preferred.
## 4 Experiments
We pursue the following research questions:
1. Does the pNTK-based \(\mathrm{kGLM}\) provide a robust approximation of the decisions by the target neural network (Section 4.2)?
2. How does the pNTK-based \(\mathrm{kGLM}\) compare with other kernel functions? We study the correlation between surrogate models and their underlying neural networks in Section 4.3.
3. How does the pNTK-based \(\mathrm{kGLM}\) compare with other kernel functions on data-attribution tasks? We use a well-established poisoned dataset and evaluate each surrogate method based on their ability to retrieve the perturbed samples in Section 4.4.
### Experimental setup
Classification neural networks with the architectures and datasets (MNIST [24], FMNIST [53], CIFAR [22], and COLA [51]) shown in Table 1 are trained using standard techniques. Details such as the specifics of architecture and choice of hyperparameters are available in Appendix F. Models that have a value of more than 1 in the '# Models' column of Table 1 are trained multiple times with different seeds. The ResNet18 [17], ResNet34, and MobileNetV2 [44] models were trained by an independent research group with weights downloaded from an online repository [41]. Bert-base [11] weights were downloaded from the HuggingFace [52] repository and then transferred onto the COLA dataset, as is common practice for foundation models [7]. After training, we calculate the pNTK and alternative kernels using PyTorch and automatic differentiation [39]. We train a kGLM (sklearn SGDClassifier) [40] for each \(\mathbf{\kappa}\) using the same training dataset used to train the neural network model.
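A minimal sketch of the kGLM fit on precomputed kernel features; the regularization setting here is our assumption, not a reported hyperparameter.

```python
from sklearn.linear_model import SGDClassifier

def fit_kglm(K_train, y_train, K_test):
    # K_train[i, j] = kappa(x_i, x_j) over training pairs;
    # K_test[m, j] = kappa(x_m, x_j) between test and training points
    kglm = SGDClassifier(loss="log_loss", alpha=1e-4)  # alpha is an assumed value
    kglm.fit(K_train, y_train)
    return kglm.predict_proba(K_test)  # compared against the NN softmax outputs
```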
### Robust surrogate modeling via pNTK
We calculate the correlation between the surrogate model and underlying neural network via \(\tau\) and report the results in Table 1. Across each experiment, we observed that the kGLM with choice
of \(\mathbf{\kappa}=\mathrm{pNTK}\) achieves similar performance as the underlying neural network. We find that the efficacy of our surrogate model, as measured by the correlation to the neural network, changes depending on architecture and dataset; remarkably, \(\tau\) is consistently high, with a lower bound of 0.7 across all experiments, indicating relatively high fidelity. We measure a statistically significant decrease in test accuracy when using the kGLM compared to the neural network across all but one of the experiments. The largest of these decreases in performance is only 2.2%. To visualize whether we can achieve a point-for-point linear realization of the neural network, we learn a non-linear mapping from the kGLM to the neural network (Figure 2 shows Bert-base; other visualizations and details are available in Appendix G).
### Comparison of surrogate model viability between kernels
For ResNet18 and Bert-base models, we evaluate our choice of \(\mathrm{pNTK}\) against alternatives described below, reporting \(\tau\) and test accuracy differential in Table 2.
The formal definition of each alternative is available in Appendix A; we briefly introduce them here. Trak [38] is computed from the gradients similarly to the NTK, except that the authors introduce a random projection matrix over the parameter dimension to speed up the computation. TraceIn [1] is an approximation to the influence function [21] that has become popular for data attribution; TraceIn is calculated from the gradients of the loss function and therefore relies on the availability of the test label. To evaluate the effect of our normalization in the \(\mathrm{pNTK}\), we compare to the unnormalized pNTK [31] (\(\mathrm{pNTK}^{0}\)), which is simply the numerator of (2). The Conjugate Kernel (CK) is the kernel created from the Gram matrix of the embedding computed by the last hidden layer [13]. Finally, the Embedding Kernel [1] (Em) is computed from the normalized sum of hidden-layer embeddings and forms an apt comparison given the widespread use of frozen-weight embeddings, e.g., in transfer-learning paradigms [10].

Table 1: **Choice of \(\mathbf{\kappa}=\mathrm{pNTK}\) robustly forms a viable surrogate model of the underlying neural network.** We report the leading digit of error as a parenthetical, when available. We perform each experiment with '# Models' independent seeds. For each model and dataset we train and extract the \(\mathrm{pNTK}\), train a kGLM, then calculate and report the \(\tau\) correlation between the kGLM softmax probability and the neural network softmax probability for the correct class. The neural network test accuracy column shows that training terminates with a highly performant model, and the test accuracy differential (TAD) column demonstrates that the kGLMs achieve comparable performance to the underlying neural network.

| Model (Dataset) | # Models | NN test acc (%) | TAD (%) | \(\tau\) |
| --- | --- | --- | --- | --- |
| MLP (MNIST2) | 100 | 99.64(1) | +0.03(5) | 0.708(3) |
| CNN (MNIST2) | 100 | 98.4(1) | -0.2(2) | 0.857(7) |
| CNN (CIFAR2) | 100 | 94.94(5) | -2.1(5) | 0.711(3) |
| CNN (FMNIST2) | 100 | 97.95(4) | -2.2(2) | 0.882(3) |
| ResNet18 (CIFAR10) | 1 | 93.07 | -0.28 | 0.776 |
| ResNet34 (CIFAR10) | 1 | 93.33 | -0.29 | 0.786 |
| MobileNetV2 (CIFAR10) | 1 | 93.91 | -0.4 | 0.700 |
| BERT-base (COLA) | 4 | 83.4(1) | -0.1(3) | 0.78(2) |

Figure 2: **Linear realization of the Bert-base model.** Each panel shows an independent linearization of a Bert-base transfer model. An invertible mapping is fit between the kGLM and neural network to transform the kGLM's final activations to the neural network's, as described in Appendix G. The coefficient of determination (\(R^{2}\)) is used to assess the goodness of fit; with values near 1, the linearized kGLM is a good approximation of the neural network itself.
We find that our choice of the normalized \(\mathrm{pNTK}\) outperforms these comparison models. Because the \(\mathrm{pNTK}\) achieves the highest \(\tau\) across each experiment, we argue that it forms the best feature space representation for the kGLM. Furthermore, because we observe high correlation across many models and datasets, we believe this methodology forms a robust approximation to the supervised classification network's decision functions. Finally, we visualize the distribution of attributions for a single test input for the ResNet18 model (Figure 3). The specific point visualized is incorrectly classified by the neural network as well as by each of the \(\mathrm{pNTK}\), \(\mathrm{pNTK}^{0}\), embedding, and conjugate kernel surrogate models, all with the same class prediction as the neural network.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline Exp Name & Metric & \multicolumn{6}{c}{\(\boldsymbol{\kappa}\)} \\ \cline{3-8} & & \(\mathrm{pNTK}\) & \(\mathrm{pNTK}^{0}\) & \(\mathrm{Trak}\) & \(\mathrm{TraceIn}\) & CK & Em \\ \hline \multirow{2}{*}{ResNet18} & \(\tau\) & **0.776** & 0.662 & -0.143 & 0.374 & 0.632 & 0.763 \\ & TAD (\%) & -0.3 & -0.52 & -83.07 & +6.66 & -0.32 & **-0.20** \\ \hline \multirow{2}{*}{Bert-base} & \(\tau\) & **0.78(2)** & 0.6(1) & 0.368(7) & 0.74(2) & 0.32(9) & 0.74(2) \\ & TAD (\%) & **+0.1(2)** & +0.6(1) & +16.54(8) & +4.9(9) & **-0.1(1)** & **-0.3(4)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Comparison across surrogate feature spaces.** If available, we report the leading digit of error as a parenthetical. Across benchmark models in both CV and NLP, we observe that the \(\mathrm{pNTK}\) forms surrogate models with the highest correlation to the underlying neural network decision function (measured by \(\tau\)) and is furthermore consistent in replicating the performance of these networks (TAD nearly 0). In contrast, the popular influence-based alternative TraceIn outperforms the neural network, despite being a linear model, and contemporary work Trak is no better than random guessing for ResNet18, or outperforms when computed with 4 model instances on Bert-base.
Figure 3: **Utilizing the structure of the linear surrogate model to visualize neural network model decision making. Using modified boxplots we compare the distribution of attributions computed from each kernel. The sum of the points in each distribution equals the kGLM activation for that class; therefore, the mean value (black bar) is a good measure of the confidence that the surrogate model will have in that class (see Appendix H for details). For the specific test image shown (left), the neural network misclassifies the image as a horse when its true class is plane. Good surrogate models should similarly misclassify this point, which we observe for the \(\mathrm{pNTK}\), \(\mathrm{pNTK}^{0}\), Embedding and CK.**
### pNTK forms robust attributions in data poisoning regime
Next, we train a 21-layer CNN (details available in Appendix F.1.5) using BadNet CIFAR10 data [15; 46]. We perturb training data by placing a yellow square in the image and modify the label of these perturbed images to a target label (see Figure 4). At test time, perturbed data tricks the model into producing labels of the targeted class. We train a model on this poisoned dataset, compute each kernel function, and produce data attributions. We compare models with the precision and recall (see Appendix D) of recognizing poisoned test data by attributing affected decisions correctly to poisoned training data (voted on by a five-member committee formed from the highest attributed training data exemplars), and by computing \(\tau\) between the surrogate model and the poisoned model. This experimental procedure is similar to the evaluation used in Shan et al. [46]; there, the authors specifically called for evaluating different choices of kernel for this task, which we now complete.
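A minimal sketch of the committee vote is shown below, assuming an attribution matrix for each test point's predicted class and a boolean mask marking the known-poisoned training points; both inputs are illustrative names, not our exact interfaces.

```python
import numpy as np

def committee_flags(attr, is_poisoned, k=5):
    """attr: (n_test, n_train) attributions toward each test decision.
    A test point is flagged as poisoned when the majority of its k
    highest attributed training exemplars are themselves poisoned."""
    top = np.argsort(attr, axis=1)[:, -k:]        # top-k exemplar indices
    return is_poisoned[top].sum(axis=1) > k // 2  # majority vote per test point
```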
We report the results in Table 3, and a comparison of attributions from each kernel is given in Figure 4 with more visualizations available in Appendix I. We find that both normalized and unnormalized \(\mathrm{pNTK}\) attributions perform best on precision and recall, but the normalized \(\mathrm{pNTK}\) continues to be most correlated to the neural network. In Figure 4 we visualize the top few exemplars we used in our committees for determining perturbed test data points.
## 5 Summary and conclusions
Impact of linear surrogate modeling for explainability. We have shown evidence supporting the choice of the \(\mathrm{pNTK}\) as a robust feature space for a kernel surrogate model. Our choice of a linear surrogate model allows us to separate the terms from each training datapoint to form an attribution (Section 3.1). Note that while the choice of the linear model implies every training datapoint will contribute some weight to the decision, we observed that the highest attributed images from the \(\mathrm{pNTK}\) have relatively small mass compared to the bulk contribution (see Figure 3), suggesting that the central tendency of the distribution, rather than a few exemplar outliers, is the main source of decision making. We contrast this with visualizing the few most similar datapoints for attribution (as in Figure 4). Presenting the top most similar training images without the context of how much those images actually influence the decision compared to the bulk of training images may be misleading to human interpreters.
Contrast with alternative kernels for surrogate models. Our results consistently showed that the TraceIn kernel generates a better-performing model than the neural network itself but is typically less correlated with the model than the \(\mathrm{pNTK}\), CK, or Embedding kernels. It is worth re-emphasizing that the TraceIn function utilizes the test labels in the computation of attribution. We believe this results in high attributions to training data of the leaked correct class. When attempting to explain the neural network behavior, we believe it's better to have a surrogate model that consistently replicates the neural network decisions rather than optimizing for accuracy. We also observed that Trak did not perform well as a feature space. This is likely because Trak was designed to be calculated with many model instances, whereas we study a single model instance paradigm. Trak's performance in this regime was observed by the authors of the original paper (cf. Figure E.1 in [38]). Because we focus on different regimes and use cases, we view our works as largely complementary of each other.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Method & Precision (\%) & Recall (\%) & \(\tau\) & TAD (\%) & poi. \(\tau\) & poi. TAD(\%) \\ \hline \(\mathrm{pNTK}\) & 99.99 & 100.00 & **0.643** & **+0.45** & **0.569** & **+0.09** \\ \(\mathrm{pNTK}^{0}\) & 99.99 & 99.97 & 0.344 & +0.87 & 0.125 & +0.13 \\ TraceIn & 99.89 & 99.96 & -0.197 & +9.35 & 0.021 & -7.04 \\ Trak & 0.00 & 0.00 & 0.007 & -72.35 & 0.046 & +0.14 \\ Embedding & 99.71 & 100.00 & 0.430 & -2.73 & 0.261 & -13.98 \\ CK & 1.65 & 50.61 & 0.552 & -3.50 & 0.454 & -81.25 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Poisoned data attribution forensics. We compare the attributions from different kernel functions. Notably, we do not find the attribution experiment to be very discriminating between the \(\mathrm{pNTK}\), \(\mathrm{pNTK}^{0}\), TraceIn and Embedding kernels; however, our normalized pNTK maintains the highest correlation with the underlying poisoned neural network.**
Our quantitative experiments consistently show the \(\mathrm{pNTK}\) as more correlated to the neural network model compared to the unnormalized \(\mathrm{pNTK}\), Embedding kernel, and CK. We also observe qualitative differences between each kernel's attributions (Figure 3) including what training datapoints have highest attribution (Figure 4). These differences emphasize the need for this study: given each choice of kernel provides a different explanation, we need to know which kernel is the right choice. No kernel is observed to give a perfect surrogate model, so we relax our question to investigate which kernel is the best fit. Based on the rank correlation \(\tau\), the \(\mathrm{pNTK}\) is empirically the best fit.
Limitations. Our main results studied the \(\mathrm{pNTK}\) in kGLM under standard supervised classification and data poisoning techniques; however, previous works have focused on support vector machine (SVM) linear kernel surrogate models instead. We think it's likely that the limitations observed for SVM surrogate models extend to the kGLM models. We know of two such limitations. We found that SVM surrogate models fail to replicate neural network behavior under gradient-based adversarial attacks (Appendix E). In addition, previous researchers have found that SVM surrogate models do not have the same scaling relationships as underlying neural networks [49]. Future work can endeavor to push the scope of surrogate modeling to these new regimes.
As a limitation of our direct scope, calculating the \(\mathrm{pNTK}\) for a large model with many training datapoints remains computationally expensive. The dimension of the \(\mathrm{pNTK}\) matrix is the size of the training data squared. As the number of training datapoints grows, analyses of models using the methods in this study will become impractical. Future work could attempt to address this by finding a smaller subset of the training data that still forms a good surrogate model.
Figure 4: **Visualizing poisoned data attributions.** We compare each kernel method in a data poisoning regime. On the left side of the figure, the top (bottom) image is the unperturbed (poisoned) input. The network outputs show the decision is altered by the attack. On the right, we show the top 3 attributions of each kernel on the unperturbed (poisoned) input. An effective feature space ought to capture when the model is affected by the attack. Qualitatively, we see there are similarities and differences between the conceptual features in the attributions for each kernel. For example, many of the dogs and cats attributed to are looking out of the screen (or “at the camera”) like the test image. The embedding kernel seems overall more sensitive to the background pixel color than the other kernels. We believe this is a fault of the embedding kernel because the background pixel color should not factor heavily into the classification decision on the subject of the image. We include more examples in Appendix I.
## Acknowledgments and Disclosure of Funding
A.W.E., Z.W., S.C., N.F., and T.C. were partially supported by the Mathematics for Artificial Reasoning in Science (MARS) initiative via the Laboratory Directed Research and Development (LDRD) Program at PNNL and A.D.S and T.C. were partially supported by the Statistical Inference Generates kNowledge for Artificial Learners (SIGNAL) Program at PNNL. PNNL is a multi-program national laboratory operated for the U.S. Department of Energy (DOE) by Battelle Memorial Institute under Contract No. DE-AC05-76RL0-1830.
|
2302.00557 | Graph Neural Network Based Surrogate Model of Physics Simulations for
Geometry Design | Computational Intelligence (CI) techniques have shown great potential as a
surrogate model of expensive physics simulation, with demonstrated ability to
make fast predictions, albeit at the expense of accuracy in some cases. For
many scientific and engineering problems involving geometrical design, it is
desirable for the surrogate models to precisely describe the change in geometry
and predict the consequences. In that context, we develop graph neural networks
(GNNs) as fast surrogate models for physics simulation, which allow us to
directly train the models on 2/3D geometry designs that are represented by an
unstructured mesh or point cloud, without the need for any explicit or
hand-crafted parameterization. We utilize an encoder-processor-decoder-type
architecture which can flexibly make prediction at both node level and graph
level. The performance of our proposed GNN-based surrogate model is
demonstrated on 2 example applications: feature designs in the domain of
additive engineering and airfoil design in the domain of aerodynamics. The
models show good accuracy in their predictions on a separate set of test
geometries after training, with almost instant prediction speeds, as compared
to O(hour) for the high-fidelity simulations required otherwise. | Jian Cheng Wong, Chin Chun Ooi, Joyjit Chattoraj, Lucas Lestandi, Guoying Dong, Umesh Kizhakkinan, David William Rosen, Mark Hyunpong Jhon, My Ha Dao | 2023-02-01T16:23:29Z | http://arxiv.org/abs/2302.00557v1 | # Graph Neural Network Based Surrogate Model of Physics Simulation for Geometry Design
###### Abstract
Computational Intelligence (CI) techniques have shown great potential as a surrogate model of expensive physics simulation, with demonstrated ability to make fast predictions, albeit at the expense of accuracy in some cases. For many scientific and engineering problems involving geometrical design, it is desirable for the surrogate models to precisely describe the change in geometry and predict the consequences. In that context, we develop graph neural networks (GNNs) as fast surrogate models for physics simulation, which allow us to directly train the models on 2/3D geometry designs that are represented by an unstructured mesh or point cloud, without the need for any explicit or hand-crafted parameterization. We utilize an encoder-processor-decoder-type architecture which can flexibly make prediction at both node level and graph level. The performance of our proposed GNN-based surrogate model is demonstrated on 2 example applications: feature designs in the domain of additive engineering and airfoil design in the domain of aerodynamics. The models show good accuracy in their predictions on a separate set of test geometries after training, with almost instant prediction speeds, as compared to O(hour) for the high-fidelity simulations required otherwise.
Graph neural network, fast surrogate model, physics simulation
## I Introduction
Computational simulations of physics have become an essential component of modern science and engineering, and the field is growing rapidly. Physics simulations are frequently used as a model of reality to provide insights for understanding and predicting the behavior of physical systems at various levels of detail and accuracy. Simulation models can be computationally very expensive, due to their complex, multi-physics, and multi-scale nature. Therefore, sole use of high-fidelity simulation models for many tasks--such as material discovery, drug discovery, digital twin, computer-aided design--where one needs to quickly survey a vast number of possible scenarios and/or geometries, can quickly become intractable.
CI techniques have shown great potential to learn the nonlinear input-output relationship from existing simulation results, allowing for fast predictions, albeit at the expense of some accuracy. They are used as fast surrogate models of physics simulation to support discovery, design optimization, and decision-making processes. Traditional surrogate models typically only predict a single output or a few outputs. The recent quest is to make predictions of the entire finely resolved physical field using advanced deep learning techniques. Deep convolutional neural networks, such as U-Net, have become a popular choice for the high dimensional surrogate modelling task when the data can be easily represented as 2D pixels or 3D voxels [1], and have been successfully applied to engineering problems such as flow past an airfoil [2, 3, 4, 5], microfluidic channel flow [6], stress distribution in a 3D printed part [7, 8], and topological optimization [9].
Many scientific and engineering problems involve geometry design. It is desirable for the geometry, e.g., from a molecular structure to the shape of an airfoil, to be precisely represented by the surrogate models. The voxel-type representation however has its limitations when describing geometry designs with curvature and/or fine features, and hence this may impede the performance and applicability of convolutional neural network-based surrogate models. Usually, data augmentation will be required when training convolutional neural network models, e.g., to incorporate invariances. On the other hand, graph-based methods represent a promising candidate for modeling geometry [10, 11]. For example, in the molecular and materials science domain,
GNNs have been explored for learning the topology of atomic and molecular structures and predicting their properties [12, 13]. GNNs have also shown promise in predicting the time evolution of solid and fluid dynamic systems [14].
In that context, we develop GNN-based surrogate models of physics simulation for geometry design. By utilizing a graph representation, the model can more precisely describe the change in geometry with complex features and predict the consequences, which is essential for design and optimization tasks. We evaluate the GNN-based surrogate models on 2 engineering design problems, i.e., feature design in the additive manufacturing domain and airfoil design in the aerodynamics domain. The models are not only capable of making fast predictions of an overall performance indicator, but also of predicting the entire physical field of a geometry design, such as the residual stress distribution of a 3D printed part and the surface pressure distribution on an airfoil, with orders of magnitude speed-up relative to typical high-fidelity simulations.
## II Statement of The Problem
We consider the following surrogate modelling task. The goal is to develop a CI model that allows the computationally efficient evaluation of the physical quantities of interest, e.g., the physical field or an aggregated performance indicator, due to the change in geometry, in addition to any possible change in operating condition. The CI model is therefore required to map the input geometry and operating condition onto the output physical quantities of interest.
The data from physics simulations (e.g., finite element, finite volume, or finite difference methods) are used for training the CI model. Typically, simulations numerically compute the physical quantities on a _mesh_, i.e., a network of discretized cells and node points covering the continuous problem domain. In a nutshell, simulation meshes can be classified into structured and unstructured mesh types. The structured mesh has a regular connectivity on homogeneous, voxel-like cells. On the other hand, the unstructured mesh allows for mixed types of mesh cells and irregular connectivity, making it a preferred choice for handling complex geometries. Moreover, the number of node points and connecting edges may change significantly given different geometry designs of varying shape, feature, and size. The CI model should be generic enough to learn from such unstructured mesh simulation data.
## III Graph Neural Network Based Surrogate Model of Physics Simulation
### _Graph representation of simulation data_
The unstructured simulation mesh with node points \(V\) connected by mesh edges \(E\) can be represented by a graph \(G=\left(V^{G},E^{G}\right)\). Each node \(i\in V^{G}\) is associated with a _node feature_ \(\mathbf{v}_{i}\). Each edge \(ij\in E^{G}\) connecting node \(i\) to node \(j\) is associated with an _edge feature_ \(\mathbf{e}_{ij}\). The graph contains bidirectional mesh edges, i.e., for each edge \(ij\in E^{G}\) there exists an edge \(ji\in E^{G}\) connecting from node \(j\) to node \(i\). Each node point is associated with a positional coordinate \(\mathbf{x}_{i}\) in a Cartesian coordinate system, which is to be encoded into both node and edge features. To achieve spatial equivariance, we encode the relative coordinate \(\mathbf{x}_{i}-\mathbf{x}_{ref(G)}\) to a reference point \(\mathbf{x}_{ref(G)}\) in the given graph \(G\) into the node feature \(\mathbf{v}_{i}\). We encode the relative displacement vector \(\mathbf{x}_{ji}=\mathbf{x}_{j}-\mathbf{x}_{i}\) and its L2 norm \(\left\|\mathbf{x}_{ji}\right\|_{2}\) into the edge feature \(\mathbf{e}_{ji}\). The graph is a very general representation in the way that we may encode many other problem related parameters, such as the change in operating condition, into the node or edge features.
### _Graph neural networks for learning simulation data_
For this work, we use a GNN to learn the mapping from the input graph \(G=\left(V^{G},E^{G}\right)\) onto the output physical quantities that we want to model. We adopt an encode-process-decode type architecture (Figure 1), in accordance with prior work in the literature [15, 14, 10].
**Encoder.** The encoder embeds each node and edge in the input graph into a latent vector of size \(n_{l}\), using the encoder MLPs \(\varepsilon^{E}\) and \(\varepsilon^{V}\) for the edge feature \(\mathbf{e}_{ji}\) and node feature \(\mathbf{v}_{i}\) respectively, i.e.,
\[\mathbf{e}^{\prime}_{ji}\leftarrow\varepsilon^{E}(\mathbf{e}_{ji}), \tag{1a}\]
Figure 1: (a) Comparison of unstructured and structured / voxel meshes on an example geometry from the feature design problem (Section IV-A-1). The voxels are unable to describe features such as slope and hole, despite having a much denser mesh. (b) Example geometry from the airfoil design problem (Section IV-A-2) with nodes and edges. (c) Illustrative schematic of the GNN architecture employed.
\[\mathbf{v}^{\prime}_{i}\leftarrow\varepsilon^{V}(\mathbf{v}_{i}). \tag{1b}\]
**Processor.** The processor recursively updates the embedded latent vectors at each node and edge with \(L\) message passing steps. Each message passing step uses a separate block of processor MLPs \(\rho^{E}\) and \(\rho^{V}\) for the latent edge and node features:

\[\dot{\mathbf{e}}^{\prime}_{ji}\leftarrow\rho^{E}(\mathbf{e}^{\prime}_{ji},\mathbf{v}^{\prime}_{j},\mathbf{v}^{\prime}_{i}), \tag{2a}\] \[\dot{\mathbf{v}}^{\prime}_{i}\leftarrow\rho^{V}\big(\mathbf{v}^{\prime}_{i},\sum_{j}\dot{\mathbf{e}}^{\prime}_{ji}\big). \tag{2b}\]
The edge processor MLP \(\rho^{E}\) first computes the update for the latent edge features by taking the concatenated features of the edge and its receiver and sender nodes from the previous step as input. Then, the node processor MLP \(\rho^{V}\) takes the concatenated features of the node from the previous step and all incoming edges (with a sum pooling) at the current step as input. Finally, the updates are added back to the latent edge and node features from the previous step, creating a residual connection:

\[\mathbf{e}^{\prime}_{ji}\leftarrow\mathbf{e}^{\prime}_{ji}+\dot{\mathbf{e}}^{\prime}_{ji}, \tag{3a}\] \[\mathbf{v}^{\prime}_{i}\leftarrow\mathbf{v}^{\prime}_{i}+\dot{\mathbf{v}}^{\prime}_{i}. \tag{3b}\]
**Decoder.** The latent node features are then passed to 2 separate decoders, i.e., a graph decoder MLP \(\delta^{G}\) and a node decoder MLP \(\delta^{V}\):

\[\mathbf{y}_{G}\leftarrow\delta^{G}\Big(\tfrac{1}{n(V^{G})}\sum_{i\in V^{G}}\mathbf{v}^{\prime}_{i}\Big), \tag{4a}\] \[\mathbf{y}_{i}\leftarrow\delta^{V}(\mathbf{v}^{\prime}_{i},\mathbf{y}_{G}). \tag{4b}\]
The graph decoder MLP extracts the graph level feature \(\mathbf{y}_{G}\) from the average pooling of all graph nodes. We denote \(n(V^{G})\) as the total number of nodes in \(G\). For a graph level prediction task, \(\mathbf{y}_{G}\) becomes the output. For a node level prediction task, \(\mathbf{y}_{G}\) will then be concatenated with the latent node features \(\mathbf{v}^{\prime}_{i}\) and passed to the node decoder MLP \(\delta^{V}\) to predict the physical quantities \(\mathbf{y}_{i}\) that we want to model.
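To illustrate one processor step, Eqs. (2)-(3), we give a minimal TensorFlow sketch below; the edge-list encoding (sender/receiver index vectors) is an implementation assumption, and the sine activation follows the model description in Section IV-B.

```python
import tensorflow as tf

def mlp(depth, width, out_dim):
    # All MLPs use the sine activation (Section IV-B)
    layers = [tf.keras.layers.Dense(width, activation=tf.sin)
              for _ in range(depth - 1)]
    layers.append(tf.keras.layers.Dense(out_dim))
    return tf.keras.Sequential(layers)

def message_passing_step(rho_E, rho_V, v, e, senders, receivers):
    """One step of Eqs. (2)-(3). v: (n_nodes, n_l) latent node features,
    e: (n_edges, n_l) latent edge features; edge ji runs sender j -> receiver i."""
    # Eq. (2a): update each edge from its own, receiver, and sender features
    e_upd = rho_E(tf.concat([e, tf.gather(v, receivers),
                             tf.gather(v, senders)], axis=-1))
    # Eq. (2b): update each node from its feature and summed incoming messages
    agg = tf.math.unsorted_segment_sum(e_upd, receivers, tf.shape(v)[0])
    v_upd = rho_V(tf.concat([v, agg], axis=-1))
    # Eq. (3): residual connections
    return v + v_upd, e + e_upd
```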
## IV Results and Discussion
We evaluate the GNN-based surrogate models on 2 engineering problems, namely the feature design in the additive manufacturing domain and airfoil design in the aerodynamics domain.
### _Description of problem domain and input encoding_
#### Iv-A1 Feature design in additive manufacturing
The laser powder bed fusion (LPBF) process in additive manufacturing can fabricate parts with intricate geometries, providing a high level of design freedom for designers to improve the functional performance of products. While most existing studies in the additive manufacturing domain have focused on simple geometries, e.g., regular, bulky shapes, such surrogate models may not be sufficient for the co-design of product and process. A more general surrogate model that can quickly evaluate the functional performance due to any change in shapes and features is needed. With the aim of predicting a wide range of geometries that can arise during the design process, [7] trained a U-Net model on a large set of 3D geometries comprising different combinations of basic geometric features, e.g., circular struts, square struts, and walls. The U-Net model however can have poor voxelization when representing certain features such as circular and thin struts.
In this work, we train a GNN on a similar set of geometries. The objective of the surrogate model is to predict the physical field, e.g., residual stress (measured through the scalar von Mises stress) on every node point, for features with different shapes, sizes, orientations, and intersections. We utilize a graph (unstructured mesh) representation which can more precisely describe these features (Figure 1a). Moreover, we do not need to perform the data augmentation as described in [7], since our encoding already accounts for spatial equivariance. 12 geometries (test set 1) are reserved for testing. The remaining 458 geometries (train set) are used for model training. Furthermore, 7 challenging geometries (test set 2) are designed to evaluate the out-of-distribution performance of the model: 3 contained more complex combinations of the circular struts, square struts, and walls; 2 are the original and optimized designs of an engineering bracket; the other 2 are the original and optimized designs of a bike stem.
Figure 2: In blue: selected feature designs used for model training (train set). In red: 7 challenging geometries (test set 2) are designed to evaluate the out-of-distribution performance of the model.
Selected feature designs are displayed in Figure 2. A high fidelity LPBF process full order simulation [16] was used to generate the ground truth for all these geometries.
When computing relative coordinates given a graph \(G\), the reference point \(\mathbf{x}_{ref(G)}\) is chosen to be \(\left(x_{median},y_{median},0\right)\), i.e., using the median node coordinates along the \(x\) and \(y\) axes. This is because the geometry is fixed in the printing direction, which is along the \(z\) axis. In addition to the relative coordinate, we encode its L1 norm \(\left\|\mathbf{x}_{i}-\mathbf{x}_{ref(G)}\right\|_{1}\) into the node feature \(\mathbf{v}_{i}\). A node \(i\) may belong to multiple cells of different cell types. We further encode cell types into \(\mathbf{v}_{i}\) using one hot vectors \(\mathbf{I}_{cell\_type_{i}}\). The \(\mathbf{v}_{i}\) also includes the number of connecting nodes, \(n_{e_{i}}\). The additional information may help the GNN to better identify abstract features across different graphs. The input features and target output are normalized to an appropriate range for GNN training.
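A small sketch of this node-feature construction is given below; the function name and array layout are assumptions for illustration.

```python
import numpy as np

def lpbf_node_features(X, cell_type_onehot, n_neighbors):
    """X: (n, 3) node coordinates. The part is fixed along the printing
    (z) direction, so the reference point uses medians in x and y only."""
    x_ref = np.array([np.median(X[:, 0]), np.median(X[:, 1]), 0.0])
    rel = X - x_ref                              # relative coordinates
    l1 = np.abs(rel).sum(axis=1, keepdims=True)  # L1 norm of relative coord.
    return np.concatenate([rel, l1, cell_type_onehot,
                           n_neighbors[:, None].astype(float)], axis=1)
```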
#### Iii-A2 Airfoil design in aerodynamic
Airfoil design is an important problem in aerodynamics [5]. Variables of interest of the design problem include the pressure distribution on the airfoil surface and the total lift and drag coefficients. As an airfoil is a thin object with a sharp and long tail, a precise representation of its shape is crucial for accurate prediction of its aerodynamic performance. In this work, we train the GNNs to predict the surface pressure on the airfoil, as well as 2 derived performance indicators: the drag and lift coefficients \(C_{D}\), \(C_{L}\). We leverage the huge database of Reynolds-averaged Navier-Stokes (RANS) simulations generated by [3], comprising 1,505 2D airfoil shapes obtained from the UIUC Airfoil Coordinates Database [17]. Each airfoil shape was simulated with several freestream conditions \(\left(u_{o},v_{o}\right)\). We extracted and post-processed a total of 6,372 simulation solutions (train set) with 1,448 airfoils for model training. Specifically, we construct a graph for the airfoil shape by using the surface node points. Each node is only connected to its nearest left and right neighbor nodes along the airfoil surface (Figure 1b). We also extracted and post-processed 90 simulation solutions (test set 1) with 30 airfoils that are not used for training, to evaluate the generalization of the model.
When building the graph \(G\) for a given airfoil, we use \(\mathbf{x}_{ref(G)}=(0,0)\) as the reference point. This is because all the airfoil shapes are already scaled to \(x\in[0,1]\), with the leading edge positioned at (0,0). We also encode the given freestream condition \(\left(u_{o},v_{o}\right)\) into the node features \(\mathbf{v}_{i}\), for all the nodes \(i\in V^{G}\). In addition, we use one hot vectors \(\mathbf{I}_{u/l_{i}}\) to indicate whether a given node \(i\) belongs to the upper or lower surface. The input features are normalized to an appropriate range for GNN training. Like [3], we normalize the target pressure by dividing it by the freestream velocity \(vel=u_{o}^{2}+v_{o}^{2}\), followed by subtracting the mean. We do not apply normalization to the targets \(C_{D}\) and \(C_{L}\).
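The target normalization amounts to the following short sketch (function name illustrative):

```python
import numpy as np

def normalize_pressure(p, u_o, v_o):
    """Divide the surface pressure by vel = u_o**2 + v_o**2, then subtract
    the mean, as in [3]; invert these steps to recover physical pressure."""
    p_hat = p / (u_o**2 + v_o**2)
    return p_hat - p_hat.mean()
```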
### _Model architecture, training, and evaluation_
We build a node level target model for residual stress from the feature design dataset, and a node level target model for pressure from the airfoil design dataset. For the latter dataset, we build separate graph level target models for \(C_{D}\) and \(C_{L}\). The architecture and training configurations used for these 4 models are summarized in Table I. For each model, we tested several architecture variants and training parameters. It was found that they are robust to many choices, such as the depth \(n_{d}\) and width \(n_{w}\) of each MLP, the size of latent features \(n_{l}\), the number of message passing steps \(L\), and the batch size, as long as they are within a certain range.
All the MLPs use the _sine_ activation, which is found to be a better choice compared to other activations such as _tanh_ and ReLU, in terms of solution quality. However, a _linear_ activation is used at the final output layer for the pressure, \(C_{D}\) and \(C_{L}\) models since their values span from negative to positive. The residual stress is strictly nonnegative, hence we use a ReLU activation at the final output layer of the model.
We initialize the models using the _He_ method and train them with the ADAM optimizer for a fixed number of epochs. We use the mean absolute error (MAE) loss with L1 regularization, which tends to give better training and generalization outcomes.
\begin{table}
\begin{tabular}{l l l l l l l l l} \hline \hline
Problem domain & No. training samples (graphs) & Output target & Output type & Input graph: node \(\mathbf{v}_{i}\) & Input graph: edge \(\mathbf{e}_{ji}\) & MLP architecture (depth \(n_{d}\), width \(n_{w}\), output size \(n_{l}\) for encoder \(\varepsilon^{E}\), \(\varepsilon^{V}\), processor \(\rho^{E}\), \(\rho^{V}\), and decoder \(\delta^{G}\), \(\delta^{V}\)) & No. message passing steps, \(L\) & Training (loss, no. epochs, batch size, initial learning rate) \\ \hline
1. Feature design (3D) & 458 (46--3492 nodes each) & Residual stress & Node level & \(\mathbf{x}_{i}-\mathbf{x}_{ref(G)}\), \(\|\mathbf{x}_{i}-\mathbf{x}_{ref(G)}\|_{1}\), \(\mathbf{I}_{cell\_type_{i}}\), \(n_{e_{i}}\) & \(\mathbf{x}_{ji}\), \(\|\mathbf{x}_{ji}\|_{2}\) & \(\varepsilon^{E}\), \(\varepsilon^{V}\), \(\rho^{E}\), \(\rho^{V}\): \(n_{d}=4\), \(n_{w}=64\); \(\delta^{G}\): \(n_{d}=4\), \(n_{w}=64\), \(n_{l}=4\); \(\delta^{V}\): \(n_{d}=4\), \(n_{w}=64\), \(n_{l}=1\) & 6 & MAE + L1 reg., epochs=2000, batch size=16, initial lr=5e-4 \\
2. Airfoil design (2D) & 6372 (194--660 nodes each) & Pressure & Node level & \(\mathbf{x}_{i}-\mathbf{x}_{ref(G)}\), \((u_{o},v_{o})\), \(\mathbf{I}_{u/l_{i}}\) & \(\mathbf{x}_{ji}\), \(\|\mathbf{x}_{ji}\|_{2}\) & \(\varepsilon^{E}\), \(\varepsilon^{V}\), \(\rho^{E}\), \(\rho^{V}\): \(n_{d}=5\), \(n_{w}=64\), \(n_{l}=64\); \(\delta^{G}\): \(n_{d}=5\), \(n_{w}=64\), \(n_{l}=4\); \(\delta^{V}\): \(n_{d}=5\), \(n_{w}=64\), \(n_{l}=1\) & 5 & MAE + L1 reg., epochs=3000, batch size=32, initial lr=5e-4 \\
3. Airfoil design (2D) & 6372 (194--660 nodes each) & Drag coefficient, \(C_{D}\) & Graph level & \(\mathbf{x}_{i}-\mathbf{x}_{ref(G)}\), \((u_{o},v_{o})\), \(\mathbf{I}_{u/l_{i}}\) & \(\mathbf{x}_{ji}\), \(\|\mathbf{x}_{ji}\|_{2}\) & \(\varepsilon^{E}\), \(\varepsilon^{V}\), \(\rho^{E}\), \(\rho^{V}\): \(n_{d}=5\), \(n_{w}=64\); \(\delta^{G}\): \(n_{d}=5\), \(n_{w}=64\), \(n_{l}=1\) & 5 & MAE + L1 reg., epochs=500, batch size=64, initial lr=5e-4 \\
4. Airfoil design (2D) & 6372 (194--660 nodes each) & Lift coefficient, \(C_{L}\) & Graph level & \(\mathbf{x}_{i}-\mathbf{x}_{ref(G)}\), \((u_{o},v_{o})\), \(\mathbf{I}_{u/l_{i}}\) & \(\mathbf{x}_{ji}\), \(\|\mathbf{x}_{ji}\|_{2}\) & \(\varepsilon^{E}\), \(\varepsilon^{V}\), \(\rho^{E}\), \(\rho^{V}\): \(n_{d}=5\), \(n_{w}=32\), \(n_{l}=32\) & 5 & MAE + L1 reg., epochs=500 \\ \hline \hline
\end{tabular}
\end{table}
Table I: Model architecture and training configurations for the 4 GNN-based surrogate models.
The models are sensitive to the learning rate. We set an initial learning rate at 5e-4 or 1e-3, then reduce it by half on plateauing. The training batch size is set to be 16, 32, or 64. During the mini-batch training, all samples (graphs) within the batch are merged into a single disconnected global graph for performing forward and backward passes.
Finally, the trained models are evaluated by the relative L2 norm error,
\[\epsilon_{R}=\frac{\left\|\mathbf{v}_{target}-\mathbf{v}_{predict}\right\|_{2}}{\left\|\mathbf{v}_{target}\right\|_{2}}\times 100\%. \tag{5}\]
For the node level target models, we first compute their \(\epsilon_{R}\) based on all node-wise predictions in each individual graph. We then report the statistics, i.e., median (min., max.), of all the graphs' \(\epsilon_{R}\). For the graph level target models, we report a single \(\epsilon_{R}\) value computed from all the predictions. The performance of the models on the respective data sets is reported in Table II.
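For reference, Eq. (5) reduces to a one-line computation:

```python
import numpy as np

def relative_l2_error(v_target, v_predict):
    """Eq. (5): relative L2 norm error, in percent."""
    return 100.0 * np.linalg.norm(v_target - v_predict) / np.linalg.norm(v_target)
```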
### _Modelling results on feature design problem_
The residual stress model achieves a performance of about 9% median \(\epsilon_{R}\) on both the train set and test set 1. From the visualization results presented in Figure 3(a), we observe a good agreement between the model prediction and the ground truth solution for the geometries in test set 1. This shows that the GNN-based surrogate model can adequately learn from the training geometries and predict the quantitative outcome on the test geometries with a similar accuracy. Although not previously seen during the training process, the geometries in test set 1 and the train set are homogeneous, since they are designed from the same set of basic features with different sizes, orientations, and intersections.
Fig. 3: For both (a) & (b), the simulation solutions used as ground truth are displayed at the top rows. The model predictions and their absolute errors are displayed at the middle and bottom rows, respectively.
The prediction results of the more challenging feature designs in test set 2 are visualized in Figure 3(b). For the 3 geometries comprising more complex combinations of the basic features, their \(\epsilon_{R}\)'s are 14.55%, 20.07%, and 28.49%--almost 2-3 times higher than the median \(\epsilon_{R}\) seen in the train set and test set 1. Note that the model has difficulty predicting the high residual stress around the intersection and joint areas, which could have a great impact on the quality of the product--the designer usually needs to pay attention to the high stress areas. The original and optimized engineering bracket designs contain new features, such as triangles, holes, and arches, that are unseen by the model during the training. Their \(\epsilon_{R}\)'s are 17.14% and 34.82% respectively. Large prediction errors are found near the bottom holes. Moreover, the \(\epsilon_{R}\) for the original and optimized bike stem designs are 43.12% and 48.96%, respectively. These 2 designs are bulky, and they contain a lot of curvature, which is significantly different from the training features in both size and shape.
Overall, the model performance is worse when being applied to out-of-distribution designs. Although it is possible to further reduce the prediction error on the train set and test set 1 by using a different set of model and training hyperparameters, these models usually do not demonstrate any accompanying performance improvement on test set 2. The performance of our GNN models is consistent with the prior study on a similar dataset using the state-of-the-art U-Net model [7]. The U-Net uses a different (voxel) representation and is trained on a slightly larger set of geometries with data augmentation. That model gives prediction errors between 14-29% on 4 out-of-distribution designs (the \(1^{\text{st}}\) and \(3^{\text{rd}}\)-\(5^{\text{th}}\) geometries in our test set 2).
Although it is difficult to directly compare the computation time for different methods due to differences in implementation, our GNN model leads to a significant reduction in runtime: the full order simulation requires approximately 1-4 hours to compute the residual stress distribution for each geometry [7], whereas the GNN model makes a near-instant prediction (\(<\) 1 second).
### _Modelling results on airfoil design problem_
The pressure model achieves a performance between \(\epsilon_{R}=3.47\%\) and \(28.03\%\) on the test set, which consists of 30 airfoil designs that are not seen by the model during the training, with 3 random freestream conditions for each design. Their median \(\epsilon_{R}\) is 8.71%. Figure 4(a) compares the prediction with the simulation ground truth, for 4 different airfoil designs and freestream conditions. The predicted pressure values are very accurate judging by visual inspection, even though the quantitative \(\epsilon_{R}\) is around 9%.
Figure 4: (a) The pressure prediction on upper (U) and lower (L) surfaces of the airfoil is plotted against the simulation ground truth. The nested plot shows the airfoil design and the freestream condition that are used as the model input. (b-c) The model prediction for \(C_{D}\) and \(C_{L}\) are plotted against the simulation ground truth. The diagonal dotted line indicates a perfect fit to the simulation ground truth.
Figure 5: The relative L2 norm error, \(\epsilon_{R}\), for pressure, \(C_{D}\) and \(C_{L}\) between model prediction and simulation ground truth on test data. Predictions with high \(\epsilon_{R}\), as indicated by reds and dark reds, are more commonly found near the edge of the distribution in the 2D freestream condition input space.
Note that the high-pressure profile may occur at either the upper or lower surface, depending on the freestream condition \(v_{o}\). Figure 4b-c present the prediction results for \(C_{D}\) and \(C_{L}\). Their \(\epsilon_{R}\)'s are 14.23% and 8.41% respectively on the same test set.
It is observed that there are some designs, e.g., a very thick airfoil, and certain freestream conditions where the model prediction has a large \(\epsilon_{R}\). These samples generally have a more complex pressure profile. Even so, the pressure model can still capture the general trend, as shown in the bottom panel of Figure 4a. Although it is difficult to verify whether those with large \(\epsilon_{R}\) are indeed entirely due to out-of-distribution designs, we note that the worst predictions are usually clustered around the edge of the sample distribution in the freestream condition space, as shown in Figure 5.
Overall, we obtain satisfactory results for the surface pressure, \(C_{D}\) and \(C_{L}\) models. We believe that the issue of out-of-distribution designs will not be too severe, given that these models are trained on a large set of airfoil designs and freestream conditions. These models have the potential to replace simulations for the design of new airfoils over a range of operating conditions. For comparison, the OpenFOAM simulation time for computing the flow solution on a new airfoil in the current dataset is around 40-70 seconds.
## V Conclusion
In this work, we apply a GNN model to the surrogate modelling task of predicting the functional performance of different geometrical designs in engineering applications. The GNN allows us to directly train the model on unstructured mesh simulations using a graph representation, which can more precisely describe the change in geometry with complex features and predict the consequences.
We present the modelling results on 2 applications: feature design in the domain of additive engineering and airfoil design in the domain of aerodynamics. For the feature design task, the model is designed to predict the residual stress--which has a great impact on the quality of the printed product--for features with different shapes, sizes, orientations, and intersections. For the airfoil design task, we develop models for predicting the pressure on the airfoil surface, as well as key performance indicators in the drag and lift coefficients. These models have been shown to provide good accuracy on a separate set of test geometries which share similar features and characteristics to the training geometries. However, we have found that generalization remains a key issue for the residual stress model, where the number of training geometries is smaller. The prediction error on out-of-distribution designs is almost 2-5 times higher than on the geometries used for training. Further tests on different scenarios and application domains, ideally with an extended dataset, would be beneficial to confirm the potential of GNN-based surrogate models.
The strength of the surrogate models is their prediction speed. They can make predictions for new geometry designs almost instantly, as compared to O(hour) for the high-fidelity simulations typically used in most engineering applications. Our future work includes integrating the GNN-based surrogate models into generative design process in conjunction with other CI techniques such as generative adversarial network.
## Acknowledgment
This research is supported by A*STAR under the AME Enzymatic Programme: "Explainable Physics-based AI for Engineering Modelling and Design (ePAI)" Award No. A20H5b0142 and the IAF-PP project: "Industrial Digital Design and Additive Manufacturing Workflows (IDDAMW)" Award No. A19E10097.
|
2305.16392 | Using neural networks to model Main Belt Asteroid albedos as a function
of their proper orbital elements | Asteroid diameters are traditionally difficult to estimate. When a direct
measurement of the diameter cannot be made through either occultation or direct
radar observation, the most common method is to approximate the diameter from
infrared observations. Once the diameter is known, a comparison with visible
light observations can be used to find the visible geometric albedo of the
body. One of the largest datasets of asteroid albedos comes from the NEOWISE
mission, which measured asteroid albedos both in the visible and infrared. We
model these albedos as a function of proper elements available from the
Asteroid Families Portal using an ensemble of neural networks. We find that
both the visible and infrared geometric albedos are significantly correlated
with asteroid position in the belt and occur in both asteroid families and in
the background belt. We find that the ensemble's prediction reduces the average
error in albedo by about 37% compared to a model that simply adopts an average
albedo, with no regard for the dynamical state of the body. We then use this
model to predict albedos for the half million main belt asteroids with proper
elements available in the Asteroid Families Portal and provide the results in a
catalog. Finally, we show that several presently categorized asteroid families
exist within much larger groups of asteroids of similar albedos - this may
suggest that further improvements in family identification can be made. | Zachary Murray | 2023-05-25T18:00:15Z | http://arxiv.org/abs/2305.16392v1 | Using neural networks to model Main Belt Asteroid albedos as a function of their proper orbital elements
###### Abstract
Asteroid diameters are traditionally difficult to estimate. When a direct measurement of the diameter cannot be made through either occultation or direct radar observation, the most common method is to approximate the diameter from infrared observations. Once the diameter is known, a comparison with visible light observations can be used to find the visible geometric albedo of the body. One of the largest datasets of asteroid albedos comes from the NEOWISE mission, which measured asteroid albedos both in the visible and infrared. We model these albedos as a function of proper elements available from the Asteroid Families Portal using an ensemble of neural networks. We find that both the visible and infrared geometric albedos are significantly correlated with asteroid position in the belt and occur in both asteroid families and in the background belt. We find that the ensemble's prediction reduces the average error in albedo by about 37% compared to a model that simply adopts an average albedo, with no regard for the dynamical state of the body. We then use this model to predict albedos for the half million main belt asteroids with proper elements available in the Asteroid Families Portal and provide the results in a catalog. Finally, we show that several presently categorized asteroid families exist within much larger groups of asteroids of similar albedos - this may suggest that further improvements in family identification can be made.
Small Solar System bodies -- Neural networks -- Albedo -- Asteroid dynamics
Footnote †: journal: AASTeX631
## 1 Introduction
One of the most fundamental properties of an asteroid is its diameter; it is a prerequisite to compute an asteroid's bulk density if the asteroid's mass is known, or conversely, can be used to infer a mass - if its density can be otherwise estimated. However, the diameter of an asteroid is often difficult to estimate. Most surveys of asteroids occur in the visible portion of the spectrum. These surveys record a flux from the object which - when paired with knowledge of its orbit - can be used to derive an
absolute or H magnitude. Unfortunately, since the observed flux of the asteroid in these wavelengths is reflected light, an estimate of the diameter of an object cannot be made without knowledge of the asteroid's albedo.
Given this limitation, several different methods have been developed to more accurately constrain asteroid albedos and diameters. One of the most direct methods is through occultations (e.g., Tanga & Delbo, 2007; Herald et al., 2020). In such an event, an asteroid passes in front of a distant star, blocking its light from the perspective of an observer for a few seconds. The asteroid's diameter can then be directly estimated from the duration of occultation and knowledge of the asteroid's orbit. Unfortunately, occultation events are relatively rare, since they require precise alignment of the asteroid and background star. Hence, occultation-derived diameters are not available for most asteroids.
A second way to directly measure an asteroid's diameter is through radar observations. Such observations are often of sufficiently high resolution to detect the angular extent of an asteroid outright, which makes calculating the diameter straightforward (e.g., Ostro et al., 2000, 2006). However, the inverse square law limits radar observations to only relatively close asteroids, consequently the population of asteroids that can be directly measured in this way is also rather small.
Finally, there are infrared observations. At infrared wavelengths the asteroid's flux is primarily thermal emission and is proportional to its surface area (e.g., Allen, 1971). Hence, an estimate of the diameter can be derived from these observations, with the albedo being estimated by comparison with the visible reflected flux. In practice, this method is by far the most productive way of estimating diameters, as infrared surveys can easily cover large portions of the sky. The largest of these surveys is the WISE/NEOWISE survey which has provided nearly 200,000 albedo measurements of solar system bodies, including \(125,217\) unique main belt asteroids (e.g., Masiero et al., 2011; Grav et al., 2012; Masiero et al., 2014, among others). Other infrared missions have also contributed to the total of known albedos, including the IRAS mission (Tedesco et al., 2002), AKARI survey (Usui et al., 2011) and the Spitzer Space Telescope (Gustafsson et al., 2019). This total, however, is a fraction of all the asteroids cataloged by the Minor Planet Center - which as of June 2022 number over a million. Consequently, the albedos of most asteroids remain unknown. This fraction will only decrease as new surveys discover more asteroids, with the Rubin Observatory alone projected to discover up to five million in the main belt (LSST Science Collaboration et al., 2009).
Motivated by the observation of Masiero et al. (2014) that asteroids belonging to the same family tend to have similar albedos, we wish to understand how much information an asteroid's proper orbital elements contain about its albedo. We investigate this question by constructing a model to predict albedo as a function of proper orbital elements for asteroids in the main belt.
Proper elements are essentially time-averaged Keplerian orbital elements that serve as quasi-invariants of motion. They remain unchanged over long time scales (Lemaitre, 1993). It is useful to contrast these with the osculating orbital elements, which are the instantaneous Kepler elements of an asteroid at a given time. These elements change over timescales as short as a few thousand years (except for the mean anomaly which changes over a single orbit), with the change primarily being driven by gravitational perturbations from other planets and asteroids. Although the instantaneous difference between the two elements is typically small (see examples in Knezevic et al., 2002), the stability of the proper orbital elements make them particularly suitable for studying asteroid families (e.g. Hirayama, 1922; Lindblad & Southworth, 1971; Zappala et al., 1990; Zappala et al., 1995).
These families are often tightly clustered in the space of proper elements, whereas examining the same family in the space of osculating elements results in much looser clustering, due to the short timescale differential change in the elements.
A variety of different methods to compute proper orbital elements have been developed, including analytic models (Yuasa, 1973; Knezevic, 1989; Milani & Knezevic, 1994), semi-analytic models (Lemaitre & Morbidelli, 1994; Gronchi & Milani, 2001; Fenucci et al., 2022) and numerical approaches (Knezevic & Milani, 2000). In this paper, we concern ourselves with the proper elements provided by the Asteroid Families portal site (Novakovic & Radovic, 2019).
## 2 The Model
The Asteroid Families Portal provides proper elements for nearly \(600,000\) numbered main belt asteroids, computed with the methods of Knezevic & Milani (2000, 2003), and family classifications derived using the hierarchical clustering method developed in Radovic et al. (2017). In our sample, we exclude the active main belt objects, as their activity may result in variations in the measurement of their fluxes or cause their albedos to change over time. We use the results of the NEOWISE asteroid survey as our training data set, as it is currently the largest such set of data available (Mainzer et al., 2019). The overlap between the two datasets is significant, not only including many main belt asteroids, but also the Hungaria and Hilda asteroids. Notably absent here are the Jupiter Trojans, as proper elements for the Jupiter Trojans must be computed with different methods than those outlined above, with the interaction with Jupiter taken into account. In addition, Grav et al. (2011) suggest these Trojans are largely homogeneous in albedo compared to the main belt. Hence there is likely less insight to be gained by constructing a model that incorporates their dynamical states. Our training sample thus consists of a total of 122,309 asteroids that have both measured NEOWISE albedos and computed proper elements. Among these, several asteroids were observed multiple times, and their albedos were taken to be the average of the estimated albedos, with the corresponding uncertainty computed by quadrature-addition of the individual errors. There was also a small sample of asteroids with negative reported albedos; these were removed from the sample.
As is common in the literature, we work with the logarithm of the albedos rather than the albedos themselves. This parameterization allows for more contrast between dark asteroids with similar albedos. It is also particularly convenient for determining the diameter, as the logarithm of the visible albedo can be related directly to the logarithm of the diameter by
\[\log_{10}(D)=3.1236-0.5\log_{10}(p_{V})-0.2H, \tag{1}\]
where \(p_{V}\) is the visible geometric albedo, \(H\) the absolute visible magnitude and \(D\) the diameter in kilometers (e.g., Harris & Harris, 1997). Fig (1) shows the distribution of the logarithms of the visible and infrared albedos of the main belt asteroids as measured by NEOWISE. As noted by many previous studies (e.g. Morrison, 1977; Masiero et al., 2011; Usui et al., 2013) the distribution of visual albedo in the main belt is bimodal, with peaks near 0.05 and 0.3, with the highest peak largely being from the contribution of darker asteroids in the outer main belt. The visible albedo \(p_{V}\) and the infrared albedo \(p_{IR}\) observed by NEOWISE are generally highly correlated with each other, with the former being smaller than the latter for most asteroids, similar to what has been found among the Trojan asteroids (Grav et al., 2011).
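In code, Eq. (1) converts an absolute magnitude and a (predicted or measured) visible albedo directly into a diameter; the helper below is a small illustration.

```python
import numpy as np

def diameter_km(p_v, H):
    """Eq. (1): asteroid diameter in kilometers from the visible geometric
    albedo p_v and the absolute magnitude H."""
    return 10.0 ** (3.1236 - 0.5 * np.log10(p_v) - 0.2 * H)

# For example, p_v = 0.05 and H = 15 give a diameter of roughly 5.9 km.
```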
We model the variation in both albedos as a function of the proper eccentricity \(e\), the \(\sin\) of the proper inclination \(\sin(i)\), and the proper semi major axis \(a\). We conducted a variety of preliminary
tests by employing different machine learning techniques on the data set, including polynomial regression and the decision tree and random forest regressor algorithms in the SCIKIT-Learn python module (Buitinck et al., 2013), in addition to neural nets implemented with TensorFlow (Abadi et al., 2015), to assess their suitability for making predictions on the data set. We found that the decision tree and random forest algorithms performed worse than the neural nets as measured by the root mean squared error residual. This is likely because predictions made by these algorithms are based on discrete criteria, which are poorly suited for continuous variables like proper elements. Polynomial regression, while continuous, lacked the complexity needed to fit the subtle detail in the belt. Consequently, we use neural nets as our algorithm of choice for this data set. This approach has also shown promise in application to several other problems relevant to the study of small solar system bodies, including taxonomy (Penttila et al., 2021), image classification (Duev et al., 2021) and dynamics (Carruba et al., 2022, 2022). We use a relatively shallow network consisting of six layers of nodes with the "relu" activation function implemented by TensorFlow. We assess the quality of the fit with a root-mean-squared-error loss function and optimize using stochastic gradient descent.
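A minimal sketch of a single ensemble member is given below; the layer width is an illustrative assumption, while the six-layer depth, relu activation, root-mean-squared-error loss, and stochastic gradient descent follow the text.

```python
import tensorflow as tf

def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))

def build_member(width=64):
    """One ensemble member mapping proper (e, sin i, a) to log10(albedo)."""
    model = tf.keras.Sequential(
        [tf.keras.Input(shape=(3,))]
        + [tf.keras.layers.Dense(width, activation="relu") for _ in range(6)]
        + [tf.keras.layers.Dense(1)]
    )
    model.compile(optimizer=tf.keras.optimizers.SGD(), loss=rmse)
    return model
```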
We simulate the effect of observation error by re-running the fitting procedure 1000 times on randomly generated samples of the data with each albedo being randomly chosen from a distribution with probability given by
\[\begin{cases}0&x<0\\ \frac{C}{\sigma\sqrt{2\pi}}\exp(-\frac{1}{2}\frac{(x-\mu)^{2}}{\sigma^{2}})& 1\geq x\geq 0\\ 0&x>1,\end{cases} \tag{2}\]
where \(\mu\) is the albedo and \(\sigma\) its corresponding error in the NEOWISE dataset. Here \(2/C=erf(\frac{1-\mu}{\sqrt{2}\sigma})+erf(\frac{\mu}{\sqrt{2}\sigma})\) defines the normalization constant \(C\), which can be written in terms of the error function \(erf(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}dt\). Equation (2) is a truncated normal distribution, which prevents unphysical albedos from being generated.
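Sampling from Eq. (2) can be done directly with scipy's truncated normal; the sketch below is illustrative.

```python
from scipy.stats import truncnorm

def resample_albedo(mu, sigma, rng=None):
    """Draw an albedo from a normal(mu, sigma) truncated to [0, 1], Eq. (2)."""
    a, b = (0.0 - mu) / sigma, (1.0 - mu) / sigma  # bounds in standard units
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, random_state=rng)
```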
Figure 1: Here we show the distribution of the logarithm of the visible and infrared albedos from the NEOWISE catalog. Both distributions are bimodal, with the visible albedos peaking near \(p_{V}=0.05\) and \(p_{V}=0.3\) and the infrared albedos peaking near \(p_{IR}=0.1\) and \(p_{IR}=0.4\).
We use a training-validation split fraction of 0.2 for each fit and track the loss on both the training set and the validation set as a function of the training epoch. As training proceeds, the loss on the training set decreases, while the loss on the validation set reaches a minimum after some time, as shown in Fig (2). After this point, the fit to the training set improves while the loss on the validation set begins to worsen, implying that the model is beginning to over-fit. To avoid over-fitting, we terminate training when the error on the validation set is minimized and use the weights at this epoch for our predictions. We perform this procedure on both the visual albedos and the infrared albedos, since the former is useful for predicting the diameters of asteroids and the latter has shown promise for use as a proxy of asteroid taxonomy (Masiero et al., 2014).
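One way to implement this stopping rule is with Keras' built-in early-stopping callback, which restores the weights from the epoch of minimum validation loss; `model`, the training arrays, and the epoch/patience budget are placeholders.

```python
# Assumes `model` was built as in the sketch above.
stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                        patience=200,
                                        restore_best_weights=True)
history = model.fit(X_train, y_train,
                    validation_split=0.2,   # the split fraction used here
                    epochs=2000, callbacks=[stop], verbose=0)
```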
## 3 Results
We show a compilation of our results for the visible albedo in Fig (3), including predictions and relevant errors. While we will focus our discussion on the visible albedo \(p_{V}\), the tight correlation between visible and infrared albedo ensures that all of the conclusions in this section generalize to the infrared albedo as well. We generate our predictions by taking the average of the predictions made by our ensemble of neural nets on the training set. These predictions are shown in the second row of Fig (3). This prediction necessarily smooths the true albedos provided by NEOWISE. As a result of the smoothing, significant structure in the main belt becomes more obvious, with large groups of asteroids with similar albedos clearly visible. In addition, several other subtle trends, including an increase in average albedo with inclination in the middle belt, become more clearly visible. Our averaging procedure also allows us to quantify the effects of measurement errors on our mean predictions. Since each individual net in the ensemble fits a slightly different data set with simulated noise, the standard deviation of the individual predictions gives a numerical measure of the effect of NEOWISE measurement errors on the predictions of the ensemble.
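Schematically, with `preds` stacking the predictions of the ensemble (here an assumed list of trained nets), the reported prediction and the measurement-error spread are simply:

```python
import numpy as np

# preds has shape [n_nets, n_asteroids]; X holds the proper elements.
preds = np.stack([net.predict(X, verbose=0).ravel() for net in ensemble])
mean_prediction = preds.mean(axis=0)   # the reported albedo prediction
sigma_pred = preds.std(axis=0)         # spread due to simulated NEOWISE errors
```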
Figure 2: Here we show both the training (black) and validation (red) root mean square error vs epoch for a single representative sample of our ensemble of fits to the visible albedo. The validation loss rapidly decreases at early epochs before starting to decrease more slowly after epoch 500. It reaches a shallow minimum near epoch 1100 before gradually increasing and overfitting at later epochs. The dashed vertical line denotes the epoch of minimum validation loss. The weights from this epoch are used in the ensemble.
Figure 3: Here we show the true visible albedos as reported by NEOWISE (top row), the mean predicted visible albedos (second row), the averaged model uncertainty (third row) and the stochasticity of the belt (final row) as a function of the proper elements (note the difference in color scales between each row). We can see that large groups of asteroids, corresponding to different families, tend to have low albedos; these families make up a significant portion of the inner asteroid belt. In addition, there is a clear, but highly structured, gradient in albedo from the inner to outer main belt. The smoothing in the predicted albedos makes some structure in the main belt more obvious, such as the large region of inclined asteroids with low albedo in the central main belt. The third row shows that model uncertainties due to observational error tend to be largest in sparsely populated parts of the belt, whereas the final row shows that areas near asteroid families tend to be relatively homogeneous in albedo.
As shown in the third row of Fig (3), measurement error tends to have the largest impact on the predictions in regions of the belt where there are few measured asteroids. In dense regions, like the outer belt, there are enough observations that the prediction effectively averages the errors in the measured albedos of individual asteroids, leading to a small spread between models and higher-fidelity predictions. Overall, we find that the error in model prediction is relatively small, with \(\sigma_{pred}<0.05\) for most of the belt.
While the spread in models gives us a notion of the sensitivity to measurement error, it does not take into account the error due to the inherent stochasticity of albedo in the belt. To quantify this, we consider the absolute residuals between our averaged prediction and the NEOWISE measured albedo, and fit a neural network to them using the same parameters as explained in Section 2, except for a slightly higher test-train fraction of 0.3, chosen because the residuals are noisier than the albedos themselves. As can be seen in Fig (5), these residuals are nearly symmetric and almost normally distributed; hence the distribution of the absolute residuals is very nearly that of a half-normal distribution. Fitting with a neural net with our chosen loss function acts like an adaptive moving average; however, statistical distributions are typically described using their standard deviations. Hence, to convert the uncertainty due to stochasticity in the belt into \(\sigma_{belt}\), we inflate the neural network predictions by a factor of \(\sqrt{\pi/2}\), which is the appropriate factor for a half-normal distribution. The scaled predictions from this net are shown in the final row of Fig (3). \(\sigma_{belt}\) is therefore the uncertainty in the neural net prediction of the logarithm of the albedo due to stochasticity in the belt. This reveals that the parts of the belt near asteroid families and the outer belt tend to be homogeneous in albedo; furthermore, the middle belt (\(2.5-3.0\) AU) exhibits the largest diversity in asteroid albedo, particularly at very low and very high \(\sin(i)\). These regions likely exhibit greater diversity because they correspond to areas where families of high albedo asteroids mix with those of lower albedo (a correspondence that can be seen in Fig (3)); this diversity results in a broader local distribution of albedo and a higher inferred \(\sigma_{belt}\). Finally, we find the intrinsic stochasticity in the belt is larger, by nearly a factor of 5, than the uncertainty in the mean prediction due to measurement error. Therefore, for most asteroids, the total error in the albedo prediction for a given set of proper elements can be approximated by \(\sigma_{belt}\) alone. This implies that our results are largely insensitive to the measurement error and that future improvements in measurement accuracy will not have a large effect on the belt-wide predictions.
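In code, this correction amounts to a single rescaling; `residual_net` denotes the (assumed) network fit to the absolute residuals:

```python
# The net estimates the *mean* absolute residual; for a half-normal
# distribution the standard deviation is sqrt(pi/2) times that mean.
sigma_belt = np.sqrt(np.pi / 2.0) * residual_net.predict(X, verbose=0).ravel()
```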
Beyond illuminating structure in the belt, the main use case for this model is to predict albedos of asteroids for which infrared or other observations are not available but whose orbits are known. These asteroids will be disproportionately small and dim. Small bodies are subject to orbital perturbations from the YORP-Yarkovsky effects and radiation pressure, which can cause significant changes in their orbital semi-major axes and their proper elements. These perturbations cause asteroid families to spread and hence cause small bodies to drift far from other bodies of similar albedos, weakening the correlations with the proper elements (Bottke et al., 2001).
These effects suggest that the accuracy of our predictions may degrade for smaller asteroids, which are more susceptible to these, and other, effects. To test for this, we compute the mean absolute residual between our predictions and the NEOWISE observations as a function of diameter. These diameters were included in the set of NEOWISE observations and were computed using the observed visible geometric albedos. We find that the residual decreases approximately linearly with \(\log_{10}(D)\) over the interval \(D<20.0\) km, where \(D\) is the asteroid diameter; the effect over this range is approximately described by \(|p_{true}-p_{pred}|=c_{1}+c_{2}\log_{10}(D)\), with \(c_{1}=0.2506\) and \(c_{2}=-0.0993\). Since the residuals vary widely with diameter, we also compute the \(p\) value of the fit, assuming the null hypothesis that the slope is zero. We find \(p=2.21\cdot 10^{-13}\); hence our fit is significant. This effect is illustrated graphically in Fig (4). Outside these bounds, the mean absolute residual varies in a more stochastic fashion due to decreasing sample size. While we do observe that the model becomes less effective at predicting albedo for objects of small diameter, even our smallest asteroids show improvement compared to an approach where the asteroid's dynamical state is not taken into account. This implies that the albedos of most small main belt asteroids are likely still significantly correlated with their proper elements. This result increases our confidence that the model can generalize to even very small asteroids (\(\approx 1\) km) across the main belt.
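A sketch of this trend fit, assuming arrays `D`, `p_true`, and `p_pred` for the diameters and albedos, could use an ordinary least-squares regression whose p-value tests the null hypothesis of zero slope:

```python
from scipy.stats import linregress

small = D < 20.0
fit = linregress(np.log10(D[small]), np.abs(p_true - p_pred)[small])
print(fit.intercept, fit.slope, fit.pvalue)   # roughly c1, c2, and p
```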
### The Role of Asteroid Families
Given the homogeneity in albedo among asteroid families, it is natural to ask how much of our predictive power comes from correctly modeling the albedos of different families versus fitting subtle trends that extend across the belt. To test this, we divide the asteroids into families using the Asteroid Families Portal classification scheme (Milani et al., 2014; Knezevic et al., 2014), and compute residuals independently for each. The results of this test are displayed in Fig (5). These residuals show that asteroid families do have significantly smaller residuals than background asteroids. However, there is still significant improvement in the residuals of background asteroids when compared to a naive prediction that assigns all asteroids the average albedo of the dataset. These background asteroids
Figure 4: We illustrate the increase of the mean absolute error among asteroids with small diameters. The top panel shows the absolute error vs. \(\log_{10}(D)\) for each of the asteroids in our dataset (grey points) and a binned average of that error (red step function). The behavior over \(D<20\) km is modeled by the linear function shown here as a dashed green line. The horizontal dashed line is the mean absolute error over the entire dataset (0.182), whereas the horizontal solid line is the mean absolute error if the predictions were taken to be a simple mean of the asteroid albedos, without accounting for their orbital elements (0.291). The neural network decreases the mean absolute error by just over 37%. The bottom panel shows a histogram of the grey points; there are very few asteroids in the sample with diameters of more than 20 km, and we ignore such bodies in our linear fit.
have highly inhomogeneous albedos relative to those in families. Hence, the efficacy of the neural net ensemble is not solely due to correctly predicting the albedos of homogeneous families; it also effectively fits subtle trends in the highly inhomogeneous background asteroids of the main belt.
Finally, in addition to being generalizable, our model smooths the underlying structure observed by NEOWISE and reveals subtle structure in the belt. For example, as shown in Figure (6), homogeneity in albedo is not limited strictly to asteroid families. Asteroids located near identified families - but not in them - often share similar albedos with asteroids in those families, behavior that would be expected if these families formed by collisional disruption of large asteroids (Marzari et al., 1995; Bottke et al., 2005; Milani et al., 2014; Spoto et al., 2015). For example, the Erigone family (near \(a=2.4\), \(\sin(i)=0.09\) in Figure 6) exists within a much larger dynamical grouping of low albedo asteroids. While this tendency does not affect the quality of our fits, as they are agnostic to the family classification, it does call for explanation. It seems likely that the Asteroid Families Portal family classification is incomplete, or that certain collisional families overlap with larger dynamical families of similar properties. This might imply that larger numbers of asteroids are members of families than currently thought. This is consistent with previous work from Broz et al. (2013); Carruba et al. (2013); Carruba (2013); Carruba et al. (2022), which finds 'halos' of asteroids with similar properties that are often not recovered by traditional hierarchical clustering methods.
It is also worth examining how our results extend to features of the belt we have not trained on. One such test is to examine our predictions over the \(z_{2}\) resonance near the Erigone family. The \(z_{2}\) resonance is defined by a commensurability between the precession frequencies of the asteroids contained in it and those of Saturn; in this case \(2\dot{\omega}+\dot{\Omega}=2\dot{\omega}_{6}+\dot{\Omega}_{6}\), where \(\dot{\omega}\) and \(\dot{\Omega}\) are the precessional frequencies of the asteroids and \(\dot{\omega}_{6}\) and \(\dot{\Omega}_{6}\) are those of Saturn, here taken from Carruba & Michtchenko (2009). We pick
Figure 5: In the left panel, we show the residuals of the neural net for all the asteroids in our sample (all bodies), the residuals for those in recognized asteroid families (families), and the residuals for those not known to belong to an asteroid family (background). We contrast them with a naive prediction that simply assigns the average albedo to every asteroid regardless of its dynamical state. We can see that the residuals from the neural net ensemble are significantly smaller than those of the naive prediction. In the right panel, we show the distribution of \(\sigma_{belt}\) for asteroids in and outside families. We recover the relative homogeneity of albedo in asteroid families compared to that of the background belt. Note that the small displacement of the residuals from zero comes from the errors being given in terms of the albedo itself, rather than its log; the transformation between these causes a small asymmetry in the error, which shows up here as a small shift away from zero.
out asteroids near the \(z_{2}\) secular resonance by considering only those asteroids with semi-major axes between 2.3 AU and 2.45 AU, \(e\approx 0.2\) and \(\sin(i)\approx 0.1\). When these points are plotted in the space of semi-major axis vs. the relevant precessional frequency, the secular resonance corresponding to the Erigone family appears as a horizontal line, as can be seen in Figure 6. We can now compare our predictions to the ground truth. We select a band of 1.0"/yr about \(\dot{\omega}_{6}\) and \(\dot{\Omega}_{6}\) and consider all the asteroids in this band as being in the secular resonance. The average of the observed albedos in this region is \(p_{V}=0.07\), whereas the average predicted albedo is slightly larger at \(p_{V}=0.08\). This suggests our model generalizes rather well even to situations on which it has not been trained. However, as can be readily seen in Figure 6, contamination from nearby sources is significant, and hence the diversity of the albedo distribution in secular resonances may be overestimated due to this contamination.
## 4 Conclusion
In this paper, we developed a neural network based model for predicting asteroid albedos from their proper elements. Our model predicts asteroid albedos significantly more accurately than the naive approach of assuming an average albedo for the entire belt, lowering the mean absolute error by around 37% compared to that approach. We find that our model's predictive power is not limited to asteroid families with homogeneous albedos; it also fits weaker trends over the more diverse background asteroids in the main belt. As a consequence of this fitting, the model also uncovers a large amount of structure in the asteroid belt, much of it correlating with known asteroid families; the relationship between these regions and the asteroid families is a subject for future work. This modeling may prove invaluable as future surveys discover more asteroids for which albedo measurements are not otherwise available.
## 5 Acknowledgements
We are grateful to Matt Holman for helpful discussions. This publication makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a joint project
Figure 6: We show \(\log_{10}(p_{V})\) for asteroid families in the left panel, and for the background of asteroids not in families in the right panel, in the plane of the proper orbital elements \(\sin(i)\) and \(a\). We can see that the classified families comprise only a portion of the regions with similar albedos.
of the Jet Propulsion Laboratory/California Institute of Technology and the University of Arizona. NEOWISE is funded by the National Aeronautics and Space Administration. Finally, we are thankful for the input of the anonymous reviewers, whose feedback and suggestions significantly improved this manuscript.
## 6 Data Availability
In addition to our neural net ensemble, we also produce a catalog created by applying our trained model to the full set of asteroids with proper elements given by the Asteroid Families Portal. The Asteroid Families Portal contains 585,174 unique numbered main belt asteroids with recorded proper elements, for each of which we provide predicted albedos; this catalog therefore extends the NEOWISE albedos by nearly a factor of five. The catalog includes entries for the mean neural net prediction, \(\sigma_{belt}\) and \(\sigma_{pred}\) for both the visible and infrared albedos, and is available at doi: 10.5281/zenodo.7796841. The weights used by the neural net are available on GitHub via [https://github.com/r-zachary-murray/Asteroid-Albedos](https://github.com/r-zachary-murray/Asteroid-Albedos).
|
2302.13259 | Can we avoid Double Descent in Deep Neural Networks? | Finding the optimal size of deep learning models is very actual and of broad
impact, especially in energy-saving schemes. Very recently, an unexpected
phenomenon, the ``double descent'', has caught the attention of the deep
learning community. As the model's size grows, the performance gets first
worse, and then goes back to improving. It raises serious questions about the
optimal model's size to maintain high generalization: the model needs to be
sufficiently over-parametrized, but adding too many parameters wastes training
resources. Is it possible to find, in an efficient way, the best trade-off? Our
work shows that the double descent phenomenon is potentially avoidable with
proper conditioning of the learning problem, but a final answer is yet to be
found. We empirically observe that there is hope to dodge the double descent in
complex scenarios with proper regularization, as a simple $\ell_2$
regularization is already positively contributing to such a perspective. | Victor Quétu, Enzo Tartaglione | 2023-02-26T08:12:28Z | http://arxiv.org/abs/2302.13259v4 | # Can We Avoid Double Descent in Deep Neural Networks?
###### Abstract
Finding the optimal size of deep learning models is very actual and of broad impact, especially in energy-saving schemes. Very recently, an unexpected phenomenon, the "double descent", has caught the attention of the deep learning community. As the model's size grows, the performance gets first worse, and then goes back to improving. It raises serious questions about the optimal model's size to maintain high generalization: the model needs to be sufficiently over-parametrized, but adding too many parameters wastes training resources. Is it possible to find, in an efficient way, the best trade-off?
Our work shows that the double descent phenomenon is potentially avoidable with proper conditioning of the learning problem, but a final answer is yet to be found. We empirically observe that there is hope to dodge the double descent in complex scenarios with proper regularization, as a simple \(\ell_{2}\) regularization is already positively contributing to such a perspective.
Victor Quétu, Enzo Tartaglione
LTCI, Telecom Paris, Institut Polytechnique de Paris

_Keywords:_ double descent, pruning, deep learning, regularization
## 1 The Race towards Generalization
In recent years, the research community has witnessed a race towards achieving higher and higher performance (in terms of error on unseen data), proposing very large architectures like, for example, transformers [1]. From big architectures come big responsibilities: learning strategies to avoid over-fitting urgently need to be developed.
The most straightforward approach would be to provide more data: deep learning methods are notoriously data-hungry. Since they typically optimize some objective function through gradient descent, having more data in the training set helps the optimization process select the most appropriate set of features (to oversimplify, the most recurrent ones). This allows us to achieve high performance on unseen data. Such an approach has the big drawbacks of requiring enormous computational power for training and, most importantly, large annotated datasets. While tackling the first drawback is an active research topic [2, 3], the second is broadly explored with approaches like transfer learning [4] or self-supervised learning [5].
In more realistic cases, large datasets are typically unavailable, and approaches working with small data, in the context of _frugal AI_, need to be employed. This poses research questions on how to enlarge the available datasets or transfer knowledge from similar tasks; it also poses questions on how to optimally dimension the deep learning model to be trained. Contrary to what is expected from the bias-variance trade-off, the phenomenon of _double descent_ can be observed in a very over-parameterized network: given some optimal set of parameters for the model \(\mathbf{w}^{opt}\) with loss value \(\mathcal{L}^{opt}\), adding more parameters will worsen the performance until a local maximum \(\mathcal{L}^{*}\), beyond which, adding even more parameters, the trend goes back to decreasing. This phenomenon, named _double descent_[6], is displayed in Fig. 1, and is consistently reported in the literature [7, 8]. Double descent poses the serious problem of finding the best set of parameters, in order not to fall into an over-parametrized
Figure 1: The double descent phenomenon (dashed line): is it possible to constrain the learning problem to a minimum such that the loss, in over-parametrized regimes, remains close to \(\mathcal{L}^{opt}\) (continuous line)?
(or under-parametrized) regime. There are two possible approaches to tackle this: finding \(\mathbf{w}^{opt}\), which requires a lot of computation, or extremely over-sizing the model. Unfortunately, neither road is compatible with a frugal setup: is there a solution to this problem?
In this work, we show that the double descent phenomenon is potentially avoidable. Having a sufficiently large regularization on the model's parameters drives deep models into a configuration where the set of parameters in excess \(\mathbf{w}^{exc}\) produces essentially no perturbation on the output of the model. Nevertheless, as opposed to Nakkiran et al. [9], who showed in regression tasks that such a regularization could help in dodging double descent, we observe that, in classification tasks, this regularization is insufficient in complex scenarios: an ingredient is still missing, although we are on the right path.
## 2 Double descent and its implications
**Double descent in machine learning models.** The double descent phenomenon has been highlighted in various machine learning models, such as decision trees, random features [10], linear regression [11] and deep neural networks [12]. Based on the calculation of the precise limit of the excess risk in the high-dimensional regime where the training sample size, the dimension of the data, and the dimension of the random features tend to infinity proportionally, Meng et al. [10] demonstrate that the risk curves of double random feature models can exhibit double and even multiple descents. The double descent risk curve was proposed to qualitatively describe the out-of-sample prediction accuracy of variably-parameterized machine learning models. Muthukumar et al. [11] provide a precise mathematical analysis of the shape of this curve in two simple data models with the least squares/least norm predictor. By defining the effective model complexity, Nakkiran et al. [13] showed that the double descent phenomenon is not limited to varying the model size, but is also observed as a function of training time or epochs, and also identified certain regimes where increasing the number of training samples hurts test performance.
**Double descent in regression tasks.** It has been recently shown that, for certain linear regression models with isotropic data distribution, optimally-tuned \(\ell_{2}\) regularization can achieve monotonic test performance as either the sample size or the model size is grown. Nakkiran et al. [9] demonstrated it analytically and established that optimally-tuned \(\ell_{2}\) regularization can mitigate double descent for general models, including neural networks like Convolutional Neural Networks. Endorsing such a result, Yilmaz et al. [12] indicated that regularization-wise double descent can be explained as a superposition of bias-variance trade-offs on different features of the data (for a linear model) or parts of the neural network and that double descent can be eliminated by scaling the regularization strengths accordingly.
**Double descent in classification tasks.** Although much progress has been made for regression models, in classification tasks the problem of avoiding, or formally characterizing, the double descent phenomenon is much harder to tackle. The test error of standard deep networks, like the ResNet architecture, trained on standard image classification datasets, consistently follows a double descent curve both when there is label noise (CIFAR-10) and without any label noise (CIFAR-100) [12]. Double descent of pruned models with respect to the number of original model parameters has been studied by Chang et al. [14], revealing a double descent behavior also in model pruning. Model-wise, the double descent phenomenon has been studied extensively under the lens of over-parametrization: a recent work also confirmed that sparsification via network pruning can cause double descent in the presence of noisy labels [15]. He et al. [15] proposed a novel learning distance interpretation, observing a correlation between the model's configurations before and after training and the sparse double descent curve, emphasizing the flatness of the minimum reached after optimization. Our work differs from this study by highlighting some cases in which the double descent phenomenon is not evident. We show, in particular, that by imposing some constraints on the learning process, we can avoid the double descent. Our experimental setup follows He et al.'s.
## 3 Dodging the Double Descent
**A regularization-based approach.** In the previous section we presented the main contributions in the literature around the double descent phenomenon. It is a known result that, given an over-parametrized model trained without special constraints, this phenomenon can be consistently observed. However, it is also known that, for an optimally parametrized model, double descent should not occur. Let us say the target, optimal output for the model is \(\mathbf{y}^{opt}\). Then, there is some subset \(\mathbf{w}^{exc}\subseteq\mathbf{w}\) of parameters belonging to the model which are in excess, namely the ones contributing to the double descent phenomenon. Since these are not essential in the learning/inference steps, they can be considered as _noisy parameters_, which deteriorate the performance of the whole model and make the learning problem more difficult. They generate a perturbation in the output of the model which we can quantify as
\[\mathbf{y}^{exc}=\sum_{w_{i}\in\mathbf{w}^{exc}}\text{ forward}\left[\phi(\mathbf{x}_{i}\cdot w_{i})\right], \tag{1}\]
where \(\mathbf{x}_{i}\) is the input(s) processed by \(w_{i}\), \(\phi(\cdot)\) is some non-linear activation for which \(\phi(z)\approx z\) when \(z\to 0\), and \(\text{forward}(\cdot)\) simply forward-propagates the signal to the output of the neural network. As the output of the model is given by \(\mathbf{y}=\mathbf{y}^{opt}+\mathbf{y}^{exc}\), in the optimal case we would require that \(\mathbf{y}^{exc}=\mathbf{0}\). To satisfy such a condition, we have two possible scenarios:
* \(\exists w_{i}\neq 0\in\mathbf{w}^{exc}\). In this case, the optimizer finds a minimum loss such that the algebraic sum of the noisy contribution is zero. When a subset of these is removed, it is possible that \(\mathbf{y}^{exc}\neq\mathbf{0}\), which results in a performance loss and, for instance, the double descent phenomenon is observed.
* \(w_{i}=0,\forall w_{i}\in\mathbf{w}^{exc}\). In this case, there is no contribution of the noisy terms and we are in the optimal scenario, where the parameters are de-facto removed from the model.
Let us focus on the second case. For numerical reasons, this scenario is unrealistic during continuous optimization; hence we can achieve a similar outcome by satisfying two conditions:
1. \(w_{i}x_{i}\approx 0,\forall w_{i}\in\mathbf{w}^{exc}\);
2. \(\|\text{forward}\left[\phi(\mathbf{x}_{i}\cdot w_{i})\right]\|_{1}\leq\|\phi(\mathbf{x }_{i}\cdot w_{i})\|_{1}\), with \(\|\phi(\mathbf{x}_{i}\cdot w_{i})\|_{1}\approx 0\).
We can achieve these conditions with a sufficiently large regularization on the parameters \(\mathbf{w}\). The first condition is achievable employing any common weight penalty, as we assume that, for local minima of the loss where \(\frac{\partial L}{\partial w_{i}}=0\), we have some weight penalty \(C\) pushing its magnitude towards zero. The second condition, on the contrary, requires more careful consideration. Indeed, by the composability of functions, we need to ensure that the activation function in every layer does not amplify the signal (true in the most common scenarios) and that all the parameters have the lowest magnitude possible. For this reason, ideally, the regularization to employ would be \(\ell_{\infty}\); on the other hand, however, we are also required to enforce some sparsity. Towards this end, recent works in the field suggest \(\ell_{2}\) regularization is a fair compromise [16]. Hence, we need a sufficiently large \(\ell_{2}\) regularization.
**How can we observe that we have avoided double descent?** We present an algorithm to sketch the (possible) double descent phenomenon in Alg. 1. After training the model for the first time on the learning task \(\Xi\), possibly with \(\ell_{2}\) regularization weighted by \(\lambda\) (line 2), a magnitude pruning stage is set up (line 4). Neural network pruning, whose goal is to reduce a large network to a smaller one without altering accuracy, removes irrelevant weights, filters, or other structures from neural networks. An unstructured pruning method called magnitude-based pruning, popularized by [16], prunes the weights that fall below some specific threshold \(T\) (line 4). We highlight that more complex pruning approaches exist, but magnitude-based pruning remains competitive despite its very low complexity [17]. Towards this end, the hyper-parameter \(T^{iter}\) sets the relative pruning percentage, or in other words, how many parameters will be removed at every pruning stage. Once pruned, the accuracy of the model typically decreases.
Figure 2: Test accuracy as a function of sparsity with different amounts of symmetric noise \(\varepsilon\). Dashed lines correspond to vanilla and solid lines to \(\ell_{2}\) regularization.
**Left:** LeNet-300-100 on MNIST. **Middle:** ResNet-18 on CIFAR-10. **Right:** ResNet-18 on CIFAR-100.
To recover performance, the lottery ticket rewinding technique, proposed by Frankle & Carbin [18], is used. It consists of rewinding the subset of parameters that survive the pruning stage to their initialization values (line 5) and then retraining the model (line 6). This approach allows us to state whether a sparsely-parametrized model can, in the best case, learn a target task from its initialization. We end our sketching once we reach a sparsity higher than or equal to \(T^{end}\) (line 3).
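A minimal PyTorch-style sketch of Alg. 1 under stated assumptions is given below: `train` is a placeholder for the training loop (which must also keep pruned weights at zero, e.g. by masking gradients), and pruning is done globally by magnitude, one of several equally valid choices.

```python
import copy
import torch

def sketch_double_descent(model, train, T_iter=0.20, T_end=0.999):
    init_state = copy.deepcopy(model.state_dict())        # weights at init
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    train(model)                                          # line 2
    sparsity = 0.0
    while sparsity < T_end:                               # line 3
        # Line 4: prune the T_iter fraction of smallest surviving weights.
        alive = torch.cat([p.abs().flatten()[masks[n].flatten() > 0]
                           for n, p in model.named_parameters()])
        thresh = torch.quantile(alive, T_iter)
        for n, p in model.named_parameters():
            masks[n] *= (p.abs() > thresh).float()
        # Line 5: rewind surviving weights to their initialization values.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for n, p in model.named_parameters():
                p.mul_(masks[n])
        train(model, masks=masks)                         # line 6
        kept = sum(m.sum().item() for m in masks.values())
        total = sum(m.numel() for m in masks.values())
        sparsity = 1.0 - kept / total
```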
**Experimental setup.** For the experimental setup, we follow the same approach as He et al. [15]. The first model we train is a LeNet-300-100 on MNIST, for 200 epochs, optimized with SGD with a fixed learning rate of 0.1. The second model is a ResNet-18, trained on CIFAR-10 & CIFAR-100, for 160 epochs, optimized with SGD with momentum 0.9 and a learning rate of 0.1, decayed by a factor of 0.1 at milestones 80 and 120. For each dataset, a percentage \(\varepsilon\) of symmetric noisy labels is introduced: the labels of a given proportion of training samples are flipped to one of the other class labels, selected with equal probability [19]. In our experiments, we test with \(\varepsilon\in\{10\%,20\%,50\%\}\). When \(\ell_{2}\) regularization is employed, we set \(\lambda\) to \(1\times 10^{-4}\). In all experiments, we use a batch size of 128 samples and set \(T^{iter}\) to 20% and \(T^{end}\) to 99.9%.
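For reference, a small sketch of the symmetric label-noise injection, assuming integer labels in [0, n_classes); each selected sample receives one of the other classes with equal probability:

```python
import numpy as np

def add_symmetric_noise(labels, eps, n_classes, seed=0):
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flip = rng.random(len(noisy)) < eps
    # Adding a nonzero offset modulo n_classes guarantees a *different*
    # label, uniformly among the remaining classes.
    offsets = rng.integers(1, n_classes, size=flip.sum())
    noisy[flip] = (noisy[flip] + offsets) % n_classes
    return noisy
```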
**Results.** Fig. 2 displays our results. As in He et al.'s work [15], looking at LeNet-300-100 with \(\varepsilon=10\%\), the double descent consists of 4 phases. First, at low sparsities, the network is over-parameterized; thus the pruned network can still reach accuracy similar to the dense model. The second phase lies near the interpolation threshold, where training accuracy is about to drop, and test accuracy first decreases and then increases as sparsity grows. The third phase is located at high sparsities, where test accuracy is rising. The final phase happens when both training and test accuracy drop significantly. However, while we can observe the double descent in the test accuracy without \(\ell_{2}\) regularization, the phenomenon fades when the regularization is added. Indeed, the existence of the second phase is questioned: the test accuracy, which is expected to decrease in this phase, reaches a plateau before rising when regularization is added. In this simple setup, the double descent is dodged. However, Fig. 2 also portrays the results of the ResNet-18 experiments on CIFAR-10 and CIFAR-100, with different percentages of noisy labels. Whether the regularization is used or not, on average, and for every value of \(\varepsilon\), the double descent phenomenon occurs in both cases. These experiments, which can be considered more complex than the previous one, highlight some limits of standard regularization in avoiding the double descent phenomenon, and suggest that a specific regularizer should be designed.
**Ablation on \(\boldsymbol{\lambda}\).** In the previous experiments in Fig. 2, we proposed solutions with an \(\ell_{2}\)-regularization hyper-parameter that provides a good trade-off between performance in terms of validation error and avoidance of the double descent. However, ablating the regularization parameter is of interest, to check whether the double descent can be avoided with larger values. Hence, we propose in Fig. 3 an ablation study on \(\lambda\) for CIFAR-10 with \(\varepsilon=10\%\). We observe that, even for extremely high \(\lambda\), the double descent is not dodged, and the overall performance of the model drops: the imposed regularization becomes too high, and the training set is no longer entirely learned. This indicates that, while for regression tasks \(\ell_{2}\) regularization is the key to dodging the double descent, in classification tasks the learning scenario can be very different, and some extra ingredient is still missing.
## 4 Is Double Descent Avoidable?
The problem of finding the best-fitting set of parameters for deep neural networks, which has evident implications for both theoretical and applicative cases, is currently a subject of great interest in the research community. In particular, the phenomenon of double descent prioritizes the research around finding the optimal size for deep neural networks: if a model is not extremely over-parametrized it may fall into a sub-optimal local minimum, harming the generalization performance.
In this paper, we have taken some first steps, in a traditional classification setup, towards avoiding the double descent. If we successfully achieve a local minimum where, regardless of its over-parametrization, the performance of the model is consistently high when varying the cardinality of its parameters, there would be no strict need to find the optimal subset of parameters for the trained model. Standard regularization
Figure 3: Train and test loss at varying \(\lambda\) for CIFAR-10, with \(\varepsilon=10\%\).
approaches like \(\ell_{2}\), which have proven their effectiveness in regression tasks, showed some limits in more complex scenarios, while effectively dodging the double descent in simpler setups. This result gives us hope: a custom regularization towards the avoidance of double descent can be designed, and will be the subject of future research.
|
2307.12941 | On Privileged and Convergent Bases in Neural Network Representations | In this study, we investigate whether the representations learned by neural
networks possess a privileged and convergent basis. Specifically, we examine
the significance of feature directions represented by individual neurons.
First, we establish that arbitrary rotations of neural representations cannot
be inverted (unlike linear networks), indicating that they do not exhibit
complete rotational invariance. Subsequently, we explore the possibility of
multiple bases achieving identical performance. To do this, we compare the
bases of networks trained with the same parameters but with varying random
initializations. Our study reveals two findings: (1) Even in wide networks such
as WideResNets, neural networks do not converge to a unique basis; (2) Basis
correlation increases significantly when a few early layers of the network are
frozen identically.
Furthermore, we analyze Linear Mode Connectivity, which has been studied as a
measure of basis correlation. Our findings give evidence that while Linear Mode
Connectivity improves with increased network width, this improvement is not due
to an increase in basis correlation. | Davis Brown, Nikhil Vyas, Yamini Bansal | 2023-07-24T17:11:39Z | http://arxiv.org/abs/2307.12941v1 | # On Privileged and Convergent Bases in Neural Network Representations
###### Abstract
In this study, we investigate whether the representations learned by neural networks possess a privileged and convergent basis. Specifically, we examine the significance of feature directions represented by individual neurons. First, we establish that arbitrary rotations of neural representations cannot be inverted (unlike linear networks), indicating that they do not exhibit complete rotational invariance. Subsequently, we explore the possibility of multiple bases achieving identical performance. To do this, we compare the bases of networks trained with the same parameters but with varying random initializations. Our study reveals two findings: (1) Even in wide networks such as WideResNets, neural networks do not converge to a unique basis; (2) Basis correlation increases significantly when a few early layers of the network are frozen identically.
Furthermore, we analyze Linear Mode Connectivity, which has been studied as a measure of basis correlation. Our findings give evidence that while Linear Mode Connectivity improves with increased network width, this improvement is not due to an increase in basis correlation.
## 1 Introduction
While neural networks are black-box function approximators that are trained end-to-end to optimize a loss objective, their emergent internal layer-wise representations are important objects for both _understanding deep learning_ and _direct downstream use_. Internal representations of neural networks can be useful tools for interpretability [4, 5, 24], teaching us how neural networks perform the computations they do, as well as for understanding the implicit biases of gradient-based neural network training [1, 2]. Moreover, representations are often directly used for downstream tasks that the network was not originally trained for, like in transfer learning or representation learning. Thus, we would like to develop a better understanding of the mathematical properties of neural network representations.
One such property is whether neural network representations have a _privileged basis_[10]. That is, are the features represented by each individual neuron significant, or is information stored only at a population level in neurons? This question is important, for instance, in interpretability, where attempts have been made to interpret features represented by individual neurons (such as edge or curve detectors in convolutional networks [5]). This question is also closely related to
that of _invariances_ exhibited by neural representations -- what are the set of transformations that can be applied to representations that keep the final network accuracy unchanged? In particular, if the representations are rotation invariant, then an individual neuron does not carry significant information.
To understand this further, consider the simple case of a two-layer neural network without any non-linear activation functions. That is, the function output of the network is \(f(x)=W_{2}W_{1}x\) with weights \(W_{2},W_{1}\) and inputs \(x\). Here, the first layer representations are \(W_{1}x\). This representation exhibits rotation invariance - we can rotate the first layer representations by an arbitrary orthonormal matrix \(O\) (giving us rotated first layer representations \(OW_{1}x\)) but the subsequent layer can invert this rotation and recover the original function \(f(x)=W_{2}O^{-1}OW_{1}x\). Thus, an individual neuron could represent any arbitrary feature for the same functional outputs in the network.
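This invariance is easy to verify numerically; the following self-contained sketch draws a random orthonormal matrix via a QR decomposition and checks that the rotated network computes the same function:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 8)), rng.normal(size=(4, 16))
O, _ = np.linalg.qr(rng.normal(size=(16, 16)))   # random orthonormal matrix
x = rng.normal(size=8)

# Rotating the first-layer representation and absorbing O^{-1} = O^T into
# the second layer leaves the output of the linear network unchanged.
assert np.allclose(W2 @ W1 @ x, (W2 @ O.T) @ (O @ W1) @ x)
```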
**Our Contributions:** In our work, we start by showing that the perceived permutation invariance of representations at high width is actually a result of a kind of noise-averaging -- the correlation between the activities of neurons after accounting for permutations remains _nearly_ constant as we scale the width (Section 3). This shows that while metrics like linear mode connectivity may suggest permutation invariance, the effect disappears when examined at a neuron level. Since this casts some doubt on the presence of a privileged basis, in Section 4 we ask if _any_ basis of neural representations is likely to work equally well. To do so, we consider a random rotation of a layer, and ask if it can be inverted by the later layers with training. We find that this is not the case. Thus, taken together, these results suggest that while the basis of neural representations matters, there is no one unique basis that is required to achieve the same functional accuracy. Finally, in Section 5, we ask what kinds of constraints can be imposed on the network to obtain a unique neural basis consistent across different training runs.
## 2 Related Work
**Convergent learning.** Also referred to as _universality_, convergent learning is the conjecture that different deep learning models learn very similar representations when trained on similar data [20, 24]. Much of the work in mechanistic interpretability [9, 23, 24, 25] has leveraged the universality conjecture to motivate research for toy models, with the hope that the methods and interpretations developed for these more tractable models will scale to larger and more capable models. Recently, [6] examined universality by reverse-engineering a toy transformer model for a group composition task. Attempts to test for convergent learning include _representation dissimilarity_ comparisons, notably neuron alignment [14, 19, 20] and correlation analysis / centered kernel alignment [17, 22].
Model stitching [2, 18] extracts features from the early layers of model \(f\) and inserts them into the later layers of model \(g\) (usually via a learned, low-capacity connecting layer \(\varphi\)). If the representations between these models can be combined such that the resulting'stitched' model, \(g_{>l}\circ\varphi\circ f_{\leq l}\), achieves a low loss on a downstream task, the models are called'stitching connected' for the layer \(l\) for that task.
**Linear Mode Connectivity:** It has been conjectured in [1, 12, 15] that, for different models learned by SGD with equal loss, once the permutation symmetries of neural networks are taken into consideration, linear interpolations between them of the form \(\theta_{\alpha}=(1-\alpha)\theta_{1}+\alpha\theta_{2}\) for \(0<\alpha<1\) incur approximately no increase in loss.
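Concretely, the interpolated networks can be built by mixing parameter dictionaries (a sketch assuming floating-point tensors; batch-norm statistics are typically reset afterwards [15]):

```python
def interpolate_state_dicts(sd1, sd2, alpha):
    # theta_alpha = (1 - alpha) * theta_1 + alpha * theta_2, key by key.
    return {k: (1 - alpha) * sd1[k] + alpha * sd2[k] for k in sd1}

# e.g. model.load_state_dict(interpolate_state_dicts(sd1, sd2, alpha=0.5))
```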
**Privileged Basis:** It is often taken for granted in the interpretability literature that the activation basis is _privileged_, at least in layers with elementwise operations (namely nonlinearities)
[3, 13, 28, 29, 30]. On the other hand, while the residual stream of transformer models has no obvious elementwise operation and is thus not an obviously privileged basis, [7] provides evidence for outlier basis-aligned features. To study this phenomenon, [11] demonstrate that transformers can learn rotationally invariant representations in their residual stream using a similar procedure to what we describe in Section 4. We ask the complementary question of whether layers with nonlinearities can learn in an arbitrary basis.
## 3 SGD Basis and Linear Mode Connectivity
In this section we explore whether two neural networks with different random initializations converge to the same basis. In other words, are the two neural networks the same up to a permutation of neurons per layer? It is clear that this will hold for an infinite-width network, since at infinite width there is no 'randomness' in initialization. We are interested in whether it holds for large yet feasible widths. Recently, [1, 12, 15] have studied the weaker claim of whether different neural networks are linear mode connected after an appropriate permutation of the neurons per layer. Specifically, they study the loss/error barrier when interpolating1 between the two networks (after permutation). We will call this barrier the perm-LMC barrier. In [1, 15] it was found that for networks trained with SGD on CIFAR-10, the perm-LMC barrier becomes very small (\(<2\%\) error barrier) for large yet feasible widths. On the other hand, for ImageNet-trained models, while the barrier decreases with width, it remains quite large. There are at least two reasons why the perm-LMC barrier could decrease with width:
Footnote 1: Following Jordan et al. [15], we reset batch norm statistics in all of the experiments.
1. The bases of two trained networks become closer with larger width.

2. LMC improves with width even if the two networks do not become closer in their bases with width.
To answer this question, we need a measure of how close two networks are to being permutations of each other. Let \(\text{Perm}_{n}\) be the set of permutations over \(n\) elements. For a layer with \(n\) neurons and activations \(X^{1}\) and \(X^{2}\) of the two networks, we use \(\max_{p\in\text{Perm}_{n}}\sum_{i\in[n]}\operatorname{Cov}(X^{1}_{i},X^{2}_{p(i)})\), which we call permutation-correlation (perm-corr)2. This is also the measure used by Li et al. [19] and Jordan et al. [15] to find an appropriate permutation for calculating the perm-LMC barrier.
Footnote 2: For all experiments we will report values of perm-corr averaged across all the layers.
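The maximization over permutations is a linear assignment problem; a sketch using the Hungarian algorithm is given below. Activations are standardized here, so the matched covariances become correlations; dropping the normalization recovers the covariance form above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def perm_corr(X1, X2):
    # X1, X2: activation matrices of shape [n_samples, n_neurons].
    Z1 = (X1 - X1.mean(0)) / (X1.std(0) + 1e-8)
    Z2 = (X2 - X2.mean(0)) / (X2.std(0) + 1e-8)
    C = Z1.T @ Z2 / X1.shape[0]              # cross-correlation matrix
    rows, cols = linear_sum_assignment(-C)   # maximize the matched total
    return C[rows, cols].mean(), cols        # score and the permutation
```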
**perm-corr is nearly constant across width:** In Figure 1 (left) we use the exact setup of Jordan et al. [15] and plot perm-corr and the perm-LMC barrier (between two networks trained with independent initializations and batch orderings) as a function of width, where we use ResNet20 as the base network and ResNet20-32x as the widest one. perm-corr remains almost constant with width. This suggests that we are very far from the widths needed for the neuron bases to be close, i.e., for perm-corr to be near 1. As in prior works, we do find that the perm-LMC barrier goes down with width. But combined, these results suggest that **the perm-LMC barrier goes down with width because linear mode connectivity becomes more feasible at high width, rather than due to a better match between the bases of neurons**. We now explore this further.
The previous experiment already verified that basis correlation (operationalized by perm-corr) does not improve with width. We now verify that linear mode connectivity, even without an increase
in similarity of basis, improves with width. To do so, we consider the LMC barrier3 between two networks without permuting the neurons.
Footnote 3: Jordan et al. [15] did not reset batch norm statistics for this experiment and hence did not observe this improvement with width.
In Figure 1 (middle) we find that this quantity (green) improves significantly with width, adding support to our hypothesis. We can now ask why width has this effect. Intuitively, the LMC barrier for two uncorrelated networks (or even networks with some fixed amount of correlation) improves if the networks are robust to noise. This is because being robust to noise means that we can treat the other network as noise and maintain the performance of the first network. In Figure 1 (middle) we also plot (orange) the performance of a network formed by averaging a trained and a random network (with the same weight norms). We find that the accuracy drop (compared to the trained network) of this network also improves with width, adding support to our hypothesis. Together, these experiments suggest that while the perm-LMC barrier going down with width is an interesting empirical finding in its own right, it overestimates the similarity of neurons across different networks and is, to a large extent, driven by other factors such as robustness to noise.
### Change in Permutation-Correlation due to Training
Another way to think about the emergence of a unique basis is to see how the training process affects perm-corr, i.e., does perm-corr improve due to training? In Figure 1 (right) we plot perm-corr for a trained and a random network and find that two _random_ networks have higher perm-corr than two trained networks, for all but the narrowest widths. This suggests that there is a large variance in the bases of neurons found in a trained network. This argues against convergent learning in neural networks from the perspective of the basis of neurons.
## 4 Can Networks Learn Rotationally Invariant Representations?
In the last section we saw that the neuron bases for networks trained from independent initializations are not aligned (compared to the baseline of two randomly initialized networks), i.e. there is no unique
Figure 1: Do randomly initialized neural networks converge to the same basis? (**left**) perm-LMC barrier drops with width but perm-corr is nearly constant. (**middle**) LMC barrier (without permutation) also drops with width. (**right**) perm-corr does not improve through training for most widths.
basis. This raises the following natural question: Do all bases lead to good representations? Specifically, in this section, we ask if the representations learnt by neural networks can be made rotation invariant. We build on the linear example considered in the introduction, where any orthonormal transformation of the representation can be inverted by the following layer successfully.
To test this in non-linear networks trained with gradient descent, we take a Myrtle-CNN trained with SGD on CIFAR-10 to 92% accuracy. Let us say the network is \(f(x)=\sigma(A_{l}(\sigma(A_{l-1}(...))))\), where \(A_{l}\) denotes the pre-activation of layer \(l\) and \(\sigma\) denotes the non-linearity. Then, we perform the following procedure: we sample a random orthonormal matrix \(O_{1}\) and multiply it with the _pre-activations_ of the first layer, giving \(O_{1}A_{1}\). Then, we freeze this rotated layer, and retrain all the remaining layers of the network on the same training dataset. Next, we apply a random orthonormal matrix to the second layer, \(O_{2}A_{2}\), freeze the first two layers, and retrain all layers \(l>2\). We repeat this procedure successively for all the layers in the network. Thus, \(f_{l,\text{rotated}}(x)=\sigma(A_{l}(\sigma(O_{l-1}A_{l-1}(\sigma(O_{l-2}A_{l-2}(...))))))\).
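A sketch of one rotation step is shown below; the QR-based sampler and the einsum applying \(O\) across the channel dimension of convolutional pre-activations are illustrative assumptions.

```python
import torch

def random_orthonormal(n, seed=0):
    g = torch.Generator().manual_seed(seed)
    q, r = torch.linalg.qr(torch.randn(n, n, generator=g))
    # Fixing the signs via the diagonal of R makes Q uniform over rotations.
    return q * torch.sign(torch.diagonal(r))

# e.g. rotate the n-channel pre-activations A1 of shape [batch, n, h, w]:
# O = random_orthonormal(n)
# A1_rotated = torch.einsum("ij,bjhw->bihw", O, A1)
```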
Figure 2 (left) shows the resulting error when we freeze up to \(l\) layers. We find that retraining _cannot_ invert this random rotation and that the network accuracy degrades significantly. We also observe that the error gets worse for later layers in the network. For comparison, we also plot the error of a network with random weights (with the same distribution as the initialization) for the first \(l\) layers and show that the increase in error from performing random rotations is similar to _random features_-not training the layers at all! For reference, we repeat the above procedure but freeze only one layer at a time (Figure 2, right). We again observe a significant increase in the error of the network, which gets worse as we go deeper into the network. Our findings highlight that the bases of the network are not in fact rotation invariant. This suggests that the directions represented by individual neurons are in fact significant, and we cannot use an arbitrary basis for the network.
## 5 The residual stream might enable convergent bases
Finally, we ask if there are other empirically common choices that may enable the basis to be fixed in certain ways. We highlight the candidate of a _residual stream_. In transformer networks, for instance,
Figure 2: _Random feature_ vs _recovering random rotation_ performance for Myrtle CNN models trained on CIFAR-10. **(left)** successive freezing, rotating, and retraining; **(right)** single layer freezing and rotating.
there is a largely linear residual connection that runs from the input (after the embedding layer) to the remainder of the network [9]. If the residual stream is indeed important to the computations performed by the network, it may impose a basis on the network that roughly tries to align with the input embedding. For models that share the same tokenizer or embedding layer, this might suggest that we can combine networks with residual streams _as is_ -- without even having to compute permutations to match the neurons to perform symmetry correction [1].
However, prior work [2] has shown that the early layers of a vision network can be replaced with random features without significant loss in performance. We see in Figure 2 that the first few layers of the network are relatively more resistant to random rotations. This suggests that even if the residual connections were providing a force for the basis to align, the residual connections would align to the basis computed by these random transformations of the early layers.
Thus, we experiment with freezing early blocks of the network. First, we train a model \(A\) with \(n\) layers. Then, we train a new model \(B\) with the first \(l\) layers frozen to those of \(A\), training the top \(n-l\) layers from scratch with a new random initialization. The perm-corr is computed between layers. Figure 3 shows our results for a ConvNeXt [21] model trained on CIFAR-10 (left) and a Vision Transformer (ViT) [8] trained on ImageNet (right). We also measure the identity stitching penalty, where we evaluate the performance of pairs of models stitched with the _identity function_\(\varphi=\mathrm{id}\), i.e., models of the form \(g_{>l}\circ f_{\leq l}\).
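A minimal PyTorch sketch of this freezing procedure, where `blocks` is an assumed attribute holding the ordered layer list:

```python
def freeze_early_blocks(model_a, model_b, l):
    for i in range(l):
        # Copy block i of the trained model A and exclude it from training.
        model_b.blocks[i].load_state_dict(model_a.blocks[i].state_dict())
        for p in model_b.blocks[i].parameters():
            p.requires_grad = False
    # Only the top n - l blocks of B are passed to the optimizer.
    return [p for p in model_b.parameters() if p.requires_grad]

# e.g. optimizer = torch.optim.SGD(freeze_early_blocks(A, B, l=2), lr=0.1)
```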
We find that freezing only 2 blocks is sufficient to obtain a significantly higher perm-corr (likewise, lower error for identity stitching, Figure 3 (middle)) in ConvNeXt models and ViTs. This convergence phenomenon is considered across different model widths and for different residual stream structures in Appendix A.
Together, these results suggest a relatively cheap procedure for _fixing the basis_ when training different neural networks, an (as noted) desirable property for interpretability research and also relevant for modular and distributed neural network training [16, 26]. For example, a related phenomenon for networks fine-tuned from the same base model was used in [27] to create a combined model more accurate than its constituent models.
Figure 3: Enforcing a consistent basis by freezing \(l\) early layers. (**left**) perm-corr for ConvNeXt models trained on CIFAR-10; (**middle**) identity stitching of CIFAR-10 ConvNeXt models; (**right**) perm-corr for ViT models trained on ImageNet.
## 6 Discussion and conclusion
We find that, in some important respects, linear mode connectivity overstates the similarity of neurons across runs. Our results suggest that while the LMC barrier improves with network width, it can in part be explained by factors beyond similarity. On the other hand, we find strong evidence that for neural networks, only certain bases (namely, those that are not rotation invariant) lead to good representations. Finally, we provide a straightforward procedure to enable a convergent basis; this is desirable for both interpretability and modular training.
## Acknowledgements
NV is supported by a Simons Investigator Fellowship, NSF grant DMS-2134157, DARPA grant W911NF2010021, and DOE grant DE-SC0022199.
|
2308.15673 | MDTD: A Multi Domain Trojan Detector for Deep Neural Networks | Machine learning models that use deep neural networks (DNNs) are vulnerable
to backdoor attacks. An adversary carrying out a backdoor attack embeds a
predefined perturbation called a trigger into a small subset of input samples
and trains the DNN such that the presence of the trigger in the input results
in an adversary-desired output class. Such adversarial retraining however needs
to ensure that outputs for inputs without the trigger remain unaffected and
provide high classification accuracy on clean samples. In this paper, we
propose MDTD, a Multi-Domain Trojan Detector for DNNs, which detects inputs
containing a Trojan trigger at testing time. MDTD does not require knowledge of
trigger-embedding strategy of the attacker and can be applied to a pre-trained
DNN model with image, audio, or graph-based inputs. MDTD leverages an insight
that input samples containing a Trojan trigger are located relatively farther
away from a decision boundary than clean samples. MDTD estimates the distance
to a decision boundary using adversarial learning methods and uses this
distance to infer whether a test-time input sample is Trojaned or not. We
evaluate MDTD against state-of-the-art Trojan detection methods across five
widely used image-based datasets: CIFAR100, CIFAR10, GTSRB, SVHN, and
Flowers102; four graph-based datasets: AIDS, WinMal, Toxicant, and COLLAB; and
the SpeechCommand audio dataset. MDTD effectively identifies samples that
contain different types of Trojan triggers. We evaluate MDTD against adaptive
attacks where an adversary trains a robust DNN to increase (decrease) distance
of benign (Trojan) inputs from a decision boundary. | Arezoo Rajabi, Surudhi Asokraj, Fengqing Jiang, Luyao Niu, Bhaskar Ramasubramanian, Jim Ritcey, Radha Poovendran | 2023-08-30T00:03:03Z | http://arxiv.org/abs/2308.15673v2 | # MDTD: A Multi-Domain Trojan Detector for
###### Abstract.
Machine learning models that use deep neural networks (DNNs) are vulnerable to backdoor attacks. An adversary carrying out a backdoor attack embeds a predefined perturbation called a trigger into a small subset of input samples and trains the DNN such that the presence of the trigger in the input results in an adversary-desired output class. Such adversarial retraining however needs to ensure that outputs for inputs without the trigger remain unaffected and provide high classification accuracy on clean samples. Existing defenses against backdoor attacks are computationally expensive, and their success has been demonstrated primarily on image-based inputs. The increasing popularity of deploying pretrained DNNs to reduce costs of re/training large models makes defense mechanisms that aim to detect 'suspicious' input samples preferable.
In this paper, we propose _MDTD_, a Multi-Domain Trojan Detector for DNNs, which detects inputs containing a Trojan trigger at testing time. MDTD does not require knowledge of the trigger-embedding strategy of the attacker and can be applied to a pre-trained DNN model with image, audio, or graph-based inputs. MDTD leverages an insight that input samples containing a Trojan trigger are located relatively farther away from a decision boundary than clean samples. MDTD estimates the distance to a decision boundary using adversarial learning methods and uses this distance to infer whether a test-time input sample is Trojaned or not.
We evaluate MDTD against state-of-the-art Trojan detection methods across _five_ widely used image-based datasets: CIFAR100, CIFAR10, GTSRB, SVHN, and Flowers102; _four_ graph-based datasets: AIDS, WinMal, Toxicant, and COLLAB; and the SpeechCommand audio dataset. Our results show that MDTD effectively identifies samples that contain different types of Trojan triggers. We further evaluate MDTD against adaptive attacks where an adversary trains a robust DNN to increase (decrease) distance of benign (Trojan) inputs from a decision boundary. Although such training by the adversary reduces the detection rate of MDTD, this is accomplished at the expense of reducing classification accuracy or adversary success rate, thus rendering the resulting model unfit for use.
that output labels corresponding to clean inputs (inputs without the trigger) remain unaffected. A DNN model that misclassifies inputs that contain a trigger is termed _Trojaned_. Backdoor attacks are highly effective when only classification output labels of models are available to the users. In (Liu et al., 2018), DNN models used in autonomous driving applications were shown to incorrectly identify a STOP sign with a small sticker on it (the trigger) as a 'speed limit' sign. Backdoor attacks with different types of triggers embedded in inputs from the CIFAR10 dataset are illustrated in Fig. 1.
Defenses against backdoor attacks involve (i) pruning or re-training the DNN (e.g., _Fine-Pruning_ (Zhu et al., 2017)), (ii) developing Trojan model detectors (e.g., _Neural Cleanse_ (Zhu et al., 2017)) to detect an embedded backdoor, or (iii) detecting Trojan samples at inference (Zhu et al., 2017) or training time (Zhu et al., 2017). The performance of these methods was examined in (Zhu et al., 2017) (Sec. VIII), where they were reported to be computationally expensive. These methods also assume that the user of the model has adequate resources to (re)train the DNN model (Zhu et al., 2017; Zhu et al., 2017) or identify and reconstruct an embedded trigger (Zhu et al., 2017). These challenges underpin a need to develop mechanisms to effectively detect input samples that have been embedded with a Trojan trigger and discard such inputs before they can be provided to a DNN model.
In this paper, we propose _MDTD_, a mechanism to detect Trojan samples that uses a distance-based reasoning to overcome the challenges listed above. MDTD is a computationally inexpensive inference-time detection mechanism that can be applied to pre-trained DNN models. MDTD does not aim to inspect and remove an embedded backdoor from the given DNN model; rather, it determines if an input sample to the given DNN contains a trigger with high probability, and discards such inputs. The effectiveness of MDTD is underscored by the fact that it is agnostic to the specific trigger-embedding strategy employed by the adversary.
MDTD uses the insight that samples containing a Trojan trigger will typically be located farther away from the decision boundary compared to a clean sample. Consequently, a larger magnitude of noise will need to be added to a Trojaned sample to move it across the decision boundary so that it will be misclassified by the DNN. Since it makes use of the distance metric from a decision boundary, MDTD is applicable to different input data modalities.
We illustrate the insight and motivation behind MDTD using t-SNE visualization techniques (Zhu et al., 2017) to demonstrate that embedding of a Trojan trigger can be qualitatively examined through the lens of feature values at intermediate DNN layers. To quantitatively verify this insight, we compare distances of clean and Trojan samples to a decision boundary using the notion of a certified radius (Liu et al., 2018). However, computing the certified radius can incur large costs (Liu et al., 2018). To overcome this challenge, MDTD estimates the distance of Trojan and clean samples from a decision boundary using adversarial learning (Liu et al., 2018) in a computationally efficient manner. MDTD then determines a threshold on the computed distance using only a small number of clean samples without making any assumption about the adversary's trigger-embedding strategy. Our contributions are:
* We propose _MDTD_, a practical framework to detect Trojaned inputs in image, graph, and audio-based input domains in a computationally inexpensive manner.
* We demonstrate the effectiveness of _MDTD_ through comprehensive evaluations and comparisons with state-of-the-art (SOTA) Trojan input detection methods for different types of Trojan triggers across _five_ image-based input datasets: **CIFAR100, CIFAR10, GTSRB, SVHN, and Flowers102**.
* We examine the performance of _MDTD_ on _four_ graph-based input domains: **AIDS, WinMal, Toxicant, and COLLAB** and _one_ audio dataset **SpeechCommand**.
* We evaluate MDTD against adaptive Trojan trigger-embedding attacks where the adversary has complete knowledge about the details of MDTD-based detection and aims to construct new Trojan trigger embeddings to bypass detection. We empirically show that although such retraining of the DNN reduces the detection rate of MDTD, it simultaneously lowers classification accuracy of the DNN on clean samples below 50%, thus making the DNN unfit for use.
Sec. 2 provides background on DNNs and backdoor attacks. Sec. 3 describes our threat model, and Sec. 4 specifies assumptions on user capability. We motivate and describe the design of _MDTD_ in Sec. 5. We evaluate _MDTD_ in Sec. 6 and discuss MDTD in Sec. 7. Sec. 8 presents related work and Sec. 9 concludes the paper.
## 2. Preliminaries
This section gives an overview of deep neural networks (DNNs), graph neural networks (GNNs), and long-short-term-memory (LSTM) models. We describe how an adversary can carry out a backdoor attack on these models, and specify metrics to evaluate effectiveness of the attack and our proposed defense _MDTD_.
### DNNs and Backdoor Attacks
Deep neural networks (DNNs) are complex machine learning (ML) models developed for tasks with high-dimensional input spaces (e.g., image classification, text generation) (Krizhevsky et al., 2014). These models take an input \(x\), compose \(x\) through layers \(f(x):=l_{1}\circ l_{2}\circ\cdots\circ l_{k}(x)\), and return output \(y\). For example, in image classification, \(x\in[0,1]^{(W\times H)}\) is an image, and the DNN returns an output \(y\in\{1,\cdots,C\}\), where \(W\times H\) is the resolution of \(x\) and \(C\) is the number of classes.
DNN models for classification tasks are known to be vulnerable to backdoor attacks (Zhu et al., 2017). An adversary carrying out a backdoor attack can retrain a DNN to return a different output label when a trigger is embedded into the input sample, while recognizing clean samples correctly. An adversary can carry out a backdoor attack by embedding triggers into a subset of samples in the training data (Krizhevsky et al., 2014; Liu et al., 2018) or manipulating weights in layers of the DNN (Zhu et al., 2017) to induce erroneous behavior at test time for inputs that contain the trigger. For example, in image classification using the CIFAR10 dataset
Figure 1. (Top Row) Different types of Trojan triggers that we examine for inputs from the CIFAR10 dataset that consists of 10 classes. (Bottom Row) Image of a bird (_Class 2_) embedded with each trigger. A Trojaned DNN identifies the bird embedded with a trigger as a frog (_Class 6_).
(Fig. 1), the adversary manipulates the model \(f\) to return a desired label corresponding to _frog_ (_Class 6_) that is different from the true label of _bird_ (_Class 2_) for inputs that contain a predefined trigger.
### GNNs and Backdoor Attacks
Graph neural networks (GNNs) are a class of deep learning models designed to make inferences on graph-based data (Krizhevsky et al., 2014). The input to a GNN is a graph \(\mathcal{G}=(V,E)\) where \(V\) is the set of individual nodes, and \(E\) is the set of edges between pairs of nodes. Each node \(v\in V\) has an associated set of \(d\) features, denoted \(x_{v}\in\mathbb{R}^{d}\). We let \(X\in\mathbb{R}^{|V|\times d}\) be the feature representation matrix associated with graph \(\mathcal{G}\). In this paper, we focus on a recently proposed backdoor attack on GNNs that use a message passing paradigm (Krizhevsky et al., 2014), and graph classification tasks where the goal is to predict the class that an input graph belongs to. Under the message passing paradigm, at each iteration \(t\), \(x_{v}\) is updated as follows: \(x_{v}^{(t)}=\mathcal{U}\big(x_{v}^{(t-1)},\mathcal{A}(\{x_{u}^{(t-1)}:u\in\mathcal{N}(v)\})\big)\), where \(\mathcal{N}(v)\) is the set of neighbors of \(v\), \(\mathcal{A}\) is an aggregate function that takes the feature representations of each node from \(v\)'s neighbors as input, and \(\mathcal{U}\) is an update function that takes \(x_{v}\) at iteration \((t-1)\) and the output of the aggregate function \(\mathcal{A}\) as inputs. After \(T\) iterations, individual node representations are pooled to generate a graph representation \(x_{\mathcal{G}}=f(\mathcal{G},X)\). The graph classification task can then be expressed as \(h:f(\cdot,\cdot)\rightarrow\{1,2,\ldots,C\}\).
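As a concrete (hypothetical) instance of this scheme, the sketch below uses mean aggregation for \(\mathcal{A}\) and a linear-plus-ReLU update for \(\mathcal{U}\); the weight matrices and function names are our own illustration, not the specific GNN used in the paper.

```python
import numpy as np

def message_passing_step(X, adj, W_self, W_nbr):
    """One message-passing iteration: mean aggregation (A) and a linear + ReLU update (U)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)  # node degrees; avoid division by zero
    agg = (adj @ X) / deg                               # A: mean over neighbor features x_u
    return np.maximum(0.0, X @ W_self + agg @ W_nbr)    # U: combine self and aggregated features

def graph_readout(X):
    """Mean-pool node representations into the graph representation x_G."""
    return X.mean(axis=0)
```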
An attacker carrying out a backdoor attack on GNNs uses a subgraph (a subset of vertices and edges associated to vertices) of \(\mathcal{G}\) as the trigger. We adopt the Graph Trojan Attack (GTA) proposed in (Zhu et al., 2017) to generate Trojaned GNN models. GTA uses trigger-embedded graphs to update GNN parameters. The updated GNN model is passed to a trigger generation network which generates the trigger for the next iteration of the message passing procedure. The trigger generation network consists of a topology generator that updates the subgraph, and a feature generator that updates features associated to nodes in the Trojaned subgraph. The goal of the adversary is to ensure that the Trojaned GNN model returns a desired label \(y^{d}\) that is different from the true label for graph inputs that are embedded with the predefined 'triggered' subgraph.
### Audio Models and Backdoor Attacks
Long-Short-Term-Memory (LSTM) models enable processing and reasoning about sequences of information, e.g., in audio and speech processing (Zhu et al., 2017). We use an LSTM combined with a DNN model to train an audio classifier. The model takes an audio input \(x\in\mathbb{R}^{1\times H}\), and returns an output \(y\in\{1,\cdots,C\}\), where \(1\times H\) is the resolution of \(x\) and \(C\) is the number of classes. An audio input is comprised of a set of frequencies, which could be different for each input.
For a sample audio \(x\), its Trojaned version \(x_{T}\) can be generated using two different backdoor attacks: (i) Modified, where a small part of the audio was replaced by an arbitrary (but fixed) audio noise pattern, and (ii) Blend, where a randomly generated (but fixed) audio noise trigger was mixed into a part of the audio. Specifically, at time-interval \([i,i+w]\), \(x_{T}[i:i+w]=(1-\alpha)\times x[i:i+w]+\alpha\times Trigger\), where \(\alpha=1\) for Modified, and \(\alpha\in[0.05,0.2]\) for Blend.
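The two trigger-embedding rules above translate directly into code; a minimal array-based sketch (the helper name is ours):

```python
import numpy as np

def embed_audio_trigger(x: np.ndarray, trigger: np.ndarray, i: int, alpha: float) -> np.ndarray:
    """Embed a trigger over [i, i+w): alpha = 1 gives 'Modified'; alpha in [0.05, 0.2] gives 'Blend'."""
    w = len(trigger)
    x_t = x.copy()
    x_t[i:i + w] = (1.0 - alpha) * x[i:i + w] + alpha * trigger
    return x_t
```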
### Metrics
We describe metrics used to evaluate backdoor attacks and Trojan sample detection methods. We assume that defense mechanisms return a _positive_ label if they identify a sample as Trojan. In the literature (Krizhevsky et al., 2014), such positive identification of a Trojan is called a true positive. Similarly, when the defense mechanism incorrectly labels a clean sample as Trojan, it is called a false positive or false alarm. We now define suitable metrics below:
_True Positive Rate (TPR)_ is the fraction of Trojan samples that received a positive label.
_False Positive Rate (FPR)_ is the fraction of clean samples that incorrectly received a positive label (raising a _false alarm_).
_\(F_{1}\)-score_: An effective Trojan detection method has a high detection accuracy (TPR) and a low false alarm rate (FPR). Defining \(TNR=1-FPR\), the \(F_{1}\)-score combines \(TPR\) and \(TNR\) as:
\[F_{1}=\frac{2*TPR*TNR}{TPR+TNR}. \tag{1}\]
The \(F_{1}\)-score is widely used to compare Trojan detection methods, and a higher \(F_{1}\)-score indicates a better detector (Cheng et al., 2017).
_Attack Success Rate (ASR)_ is the fraction of Trojan samples that result in the DNN model returning the attacker's desired output (Krizhevsky et al., 2014).
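For reference, the \(F_{1}\)-score of Eqn. (1) amounts to the following small helper (our own illustration, shown only to make the metric concrete):

```python
def f1_from_rates(tpr: float, fpr: float) -> float:
    """F1-score per Eqn. (1): combines detection rate (TPR) and 1 - false alarm rate (TNR)."""
    tnr = 1.0 - fpr
    return 2.0 * tpr * tnr / (tpr + tnr)

# e.g., TPR = 0.96 and FPR = 0.10 give an F1-score of roughly 0.93
```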
## 3. Threat Model
In this section, we introduce the threat model we consider, and describe our assumptions on capabilities of the attacker.
_Adversary Assumptions_: We assume that the adversary has access to a pretrained model and adequate data is available such that a subset of these samples can be used by the adversary to embed a Trojan trigger into them. The adversary is also assumed to have sufficient computational resources required to (re)train the DNN using both, the Trojan trigger-embedded and clean samples.
_Adversary Goals and Actions_: The adversary's aim is to use Trojan-embedded as well as clean samples to train a Trojaned DNN model such that: (i) for Trojan trigger-embedded inputs, the output of the DNN is an adversary-desired target class, which may be different from the true class, and (ii) for clean inputs, the classification accuracy is as close to the accuracy of the un-Trojaned DNN.
_Performance Metrics_: Performance metrics defined in Sec. 2.4, including the true/ false positive rates, and the attack success rate (ASR) can be computed by the adversary after it trains the Trojaned DNN model. We use these metrics to characterize and empirically evaluate the effectiveness of an attack. The objective of the adversary is typically to ensure that the value of ASR is high, while the \(F_{1}\)-score in our context, which quantifies the defender's ability to detect a sample containing a Trojan trigger is as small as possible.
The adversary is assumed to perform trigger embedding in a stealthy manner such that its specific attack strategy and parameters, including the nature and location of the Trojan triggers embedded in the sample, as well as information about data samples that have been embedded with Trojan triggers is not revealed.
## 4. User Capability
The user has limited computational resources, and uses ML platforms to train a model or uses publicly available models which have high accuracy on clean samples. However, the user has sufficient resources to use an adversarial learning method (Krizhevsky et al., 2014) to obtain the magnitude of adversarial noise that will result in misclassification of an input sample. We further assume that the user is aware of the
possibility that the input sample or the model can be Trojaned, and aims to detect and remove malicious inputs. However, the user has no knowledge of the attacker strategy, and identities of the Trojan trigger and target class. The user is assumed to have access to a set of clean samples whose labels are known. The user has either _white-box access_ (access to weights and hyperparameters of the DNN model) or _black-box access_ (access to only outputs of the DNN model). We note that developing solutions for black-box model access is more difficult than for the white-box setting, where information about the DNN model is available. For both settings, our objective is to develop a method that would aid a user in identifying and discarding input samples that contain a Trojan trigger.
## 5. MDTD: Motivation and Design
In this section, we describe the mechanism of _MDTD_. We motivate the development of _MDTD_ by showing that feature values at intermediate layers of the DNN corresponding to clean and Trojaned samples are different. We hypothesize that as a consequence, clean and Trojaned samples will behave differently when perturbed with noise, and use the notion of a certified radius (Krizhevsky et al., 2017) to verify this. We then explain how _MDTD_ computes an estimate of the certified radius to effectively distinguish between Trojan and clean samples.
### Key Intuition behind MDTD
**Feature Value Visualization using t-SNE:** For a DNN model with \(K\) layers, the first (\(K-1\)) layers map an input \(x\) to the _feature space_, which typically has a lower dimension than the high-dimensional input space. The last layer of the DNN then uses the values in the
Figure 2. This figure shows t-SNE visualizations for outputs of the penultimate layer of the DNN for 200 clean samples (blue dots) from the target class (_Class 6_) and 200 Trojaned samples (red dots) misclassified to the target class. We examine the CIFAR10, CIFAR100, GTSRB, SVHN, and Flower102 datasets, and **six Trojan triggers** (Badnets, Blend, Nature, Trojan SQ, Trojan WM and L2 inv) for each dataset. In most cases, we observe that although the DNN classifies both clean and Trojaned samples to the same class (_Class 6_), it generates different values for features in its penultimate layer. Embedding a trigger into samples from CIFAR10 and GTSRB using the L2 inv trigger is relatively harder (compare separability of blue and red dots in Col 6, Row 2 and Col 6, Row 3).
feature space to make a decision about the input. For example, in an image classification task, the decision will be the identity of the class that the input is presumed to belong to. Thus, the output of the penultimate layer (layer \(K-1\)) of the DNN can then be interpreted as an indicator of the _perspective_ of the DNN about the given input sample.
We use a t-distributed stochastic neighbor embedding (t-SNE) (Zhu et al., 2017) to demonstrate that for the same output of the DNN model, values in the feature space for clean and Trojan samples are different. The t-SNE is a technique to visualize high-dimensional data through a two or three dimensional representation. For the CIFAR10, CIFAR100, GTSRB, SVHN, and Flowers102 datasets, we collect 200 samples corresponding to each of six different Trojan trigger types that are classified by the DNN model as belonging to the class \(y^{d}=6\) due to the presence of the trigger. We additionally collect 200 clean input samples that do not contain a Trojan trigger for which the output of the DNN model is \(y=y^{d}\).
Fig. 2 shows t-SNE representations of the feature values of these samples, i.e., the outputs at the penultimate layer of the DNN. We observe that the clean samples (blue dots) can be easily distinguished from Trojan samples (red dots) for each of the Trojan trigger type in all five datasets. While the t-SNE visualization provides qualitative indicators of clean and Trojaned sample behavior, we are also interested in quantitative metrics. We use the certified radius to distinguish between clean and Trojaned input samples.
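A visualization of this kind can be produced with standard tooling; a brief sketch follows (extraction of the penultimate-layer features is assumed to have been done already, and the function name is ours):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_feature_tsne(feats_clean: np.ndarray, feats_trojan: np.ndarray) -> None:
    """2-D t-SNE projection of penultimate-layer features for clean vs. Trojan samples."""
    feats = np.vstack([feats_clean, feats_trojan])
    emb = TSNE(n_components=2).fit_transform(feats)
    n = len(feats_clean)
    plt.scatter(emb[:n, 0], emb[:n, 1], c="blue", s=8, label="clean")
    plt.scatter(emb[n:, 0], emb[n:, 1], c="red", s=8, label="Trojan")
    plt.legend()
    plt.show()
```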
**Certified Radius:** The certified radius is defined (Krause et al., 2017) as the radius of the largest ball centered at each input sample within which the DNN model returns the same output label. The certified radius is computed by estimating the distance to a decision boundary by perturbing samples with Gaussian noise with a predetermined/known mean and variance. However, exactly computing the certified robustness has a high computational cost (Krause et al., 2017). Instead, we evaluate the robustness of Trojan-embedded samples by approximately computing the certified radius using a heuristic that perturbs a small number of clean and Trojaned samples with Gaussian noise centered at the input sample of interest.
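One plausible reading of this perturbation-based heuristic is sketched below; the paper does not spell out the exact procedure, so the noise grid, sample count, and agreement threshold are our assumptions.

```python
import numpy as np

def approx_certified_radius(predict, x, y, sigmas=np.linspace(0.05, 0.9, 18),
                            n_samples=100, keep_rate=0.95):
    """Heuristic radius: the largest noise level at which predictions mostly stay at label y.
    `predict` maps a single input to a class label."""
    radius = 0.0
    for sigma in sigmas:
        noisy = x + np.random.normal(0.0, sigma, size=(n_samples,) + x.shape)
        preds = np.array([predict(np.clip(xi, 0.0, 1.0)) for xi in noisy])
        if (preds == y).mean() < keep_rate:
            break
        radius = sigma
    return radius
```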
Table 1 presents average certified radii for 200 clean and 200 Trojan samples with six different Trojan trigger types (Badnets, Blend, Nature, Trojan SQ, Trojan WM and L2 inv) applied on CIFAR10, CIFAR100, GTSRB, SVHN and Flowers102 datasets. We observe that the certified radius is significantly higher for Trojaned samples than for clean samples, clearly indicating a relatively larger distance to the decision boundary. Consequently, the minimum magnitude of noise required to make the DNN misclassify a Trojan sample will be larger than the noise required to misclassify a clean sample.
Although t-SNE visualizations and certified radius computation provide preliminary insights into differences between clean and Trojan samples, there are two significant challenges which limit their direct use in Trojan trigger detection. First, while t-SNE provides visual insights, it relies on access to (i) an adequate number of clean and Trojan trigger-embedded samples, and (ii) outputs at intermediate layers of the DNN. However, t-SNE visualizations are not useful as a direct computational mechanism for Trojan detection since _the user of a DNN model will typically not have access to an adequate number of Trojan trigger-embedded samples_. Further, access to intermediate layers is not feasible for users who only have black-box model access (i.e., access to only outputs of the model). Second, computing the certified radius is computationally expensive, and is known to be NP-complete (Krause et al., 2017), which limits its use in practice. We next present a two-stage algorithmic design for MDTD that leverages insights from t-SNE visualizations and certified radii.
### Two-Stage Design of MDTD
_MDTD_ makes use of adversarial learning techniques (Krause et al., 2017) to estimate distance of a sample from any decision boundary. It then computes the smallest magnitude of adversarial noise required to misclassify the sample to infer whether the sample is Trojaned or not. _MDTD_ consists of two stages. In the first stage, MDTD estimates the distance of a given input sample to the decision boundary. In the second stage, distances estimated in the previous step and the distance of a small number of clean samples to the decision boundary are used to identify whether the sample contains a Trojan trigger or not. We describe each stage below.
**Stage 1: Estimating distance to decision boundary:**_MDTD_ uses adversarial learning techniques (Krause et al., 2017; Wang et al., 2018) to estimate the minimum magnitude of noise perturbation, denoted \(\delta\), that will cause the DNN model to misclassify a given sample. For an input sample \(x\), the output of the DNN model is denoted \(f(x)\). The value of the smallest perturbation \(\delta\) is obtained by maximizing a loss function \(\mathcal{L}(\cdot,\cdot)\) that quantifies the difference between the output label of the DNN for a perturbed variant of the input, denoted \(f(x+\delta)\), and the true label of the input \(y\). This objective is well-defined and can be expressed as a regularized optimization problem with regularization constant \(\lambda\) as (Krause et al., 2017; Wang et al., 2018):
\[\min_{\delta}-\mathcal{L}(f(x+\delta),y)+\lambda\|\delta\|, \tag{2}\]
In computing the value of \(\delta\), we need to consider two types of user access to the DNN model, namely _white-box_ access and _black-box access_. For a user with white-box access, we can use the Fast Gradient Sign Method (FGSM) and Iterative Fast Gradient Sign Method (IFGSM) to solve Eqn. (2) due to their low computational
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Dataset & Clean & **Badnets** & **Blend** & **Nature** & **Trojan SQ** & **Trojan WM** & **L2 inv** \\ \hline CIFAR100 & 0.005 (0.045) & 0.657 (0.132) & 0.103 (0.131) & 0.880 (0.017) & 0.759 (0.047) & 0.893 (0.0001) & 0.596 (0.036) \\ \hline CIFAR10 & 0.015 (0.046) & 0.875 (0.035) & 0.308 (0.216) & 0.893 (0.000) & 0.893 (0.0001) & 0.893 (0.0001) & 0.648 (0.059) \\ \hline GTSRB & 0.308 (0.326) & 0.663 (0.234) & 0.749 (0.227) & 0.882 (0.068) & 0.857 (0.106) & 0.499 (0.235) & 0.239 (0.148) \\ \hline SVHN & 0.255 (0.268) & 0.727 (0.178) & 0.651 (0.271) & 0.887 (0.035) & 0.879 (0.083) & 0.882 (0.062) & 0.893 (0.0001) \\ \hline Flower102 & 0.038(0.173) & 0.189(0.163) & 0.634(0.302) & 0.875(0.07) & 0.893(0.001) & 0.893(0.001) & 0.269(0.148) \\ \hline \end{tabular}
\end{table}
Table 1. This table reports average (standard deviation) of certified radii for 200 clean and 200 Trojan samples for the CIFAR10, CIFAR100, GTSRB, Flower102 datasets with **six different Trojan triggers** (noted in **bold** column titles). We observe that Trojan samples have a higher average certified radius than clean samples. Thus, the minimum magnitude of noise required to make a DNN model misclassify an input sample containing a Trojan trigger is larger.
cost (Kalalinin et al., 2017; Li et al., 2018). When a user has only black-box access to the DNN model, we apply a SOTA adversarial learning method called HopSkipJump (Hong et al., 2017) which allows us to estimate the minimum magnitude of noise \(\delta\) required for misclassification of the input.
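A white-box sketch of Stage 1 using a single FGSM step followed by a line search over perturbation magnitudes is shown below; the search grid is our assumption (the paper uses FGSM/IFGSM in the white-box case and HopSkipJump in the black-box case).

```python
import torch
import torch.nn.functional as F

def min_fgsm_noise(model, x, y, eps_grid=torch.linspace(0.005, 0.9, 180)):
    """Smallest FGSM perturbation magnitude that flips the prediction of one sample.
    Assumes x has shape (1, ...) and y has shape (1,)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    direction = x.grad.sign()                          # FGSM ascent direction on the loss
    with torch.no_grad():
        for eps in eps_grid:
            x_adv = (x + eps * direction).clamp(0.0, 1.0)
            if model(x_adv).argmax(dim=1).item() != y.item():
                return eps.item()                      # estimated distance delta to the boundary
    return float("inf")                                # no flip within the search grid
```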
**Stage 2: Outlier detection:** Following our assumptions on user capability described in Sec. 4, information about the identity of the Trojan trigger and the target output class for Trojan samples is not available. Hence, the user will not be able to generate Trojan samples and estimate the distance of these samples to the decision boundary. However, due to the widespread availability of datasets, we assume that the user has access to a limited number of clean samples. We denote this set as \(D_{user}=\{(x_{1},y_{1}),(x_{2},y_{2}),\cdots,(x_{N},y_{N})\}\). The user is assumed to have the ability to use the methods in _Stage 1_ above to estimate the minimum magnitudes of noise \(\{\delta_{1},\delta_{2},\cdots,\delta_{N}\}\) required to misclassify these samples.
In order to determine whether a given input sample contains a Trojan trigger, we use an outlier detection technique first proposed in (Zhou et al., 2017). This method assumes that the distances of clean samples (non-outliers) to the decision boundary follow a Gaussian distribution \(\delta\sim\mathcal{N}(\mu,\sigma^{2})\). _MDTD_ estimates the values of \(\mu\) and \(\sigma\) using the values of \(\{\delta_{1},\delta_{2},\cdots,\delta_{N}\}\) determined from the set \(D_{user}\). Then, for a threshold \(\alpha\) on the maximum tolerable false positive rate, any sample whose distance to the decision boundary satisfies \(|\delta-\mu|>\alpha\sigma\) will be identified as containing a Trojan trigger (outlier). A small value of \(\alpha\) results in a lower rate of detection of Trojan samples; a large value results in more clean samples being incorrectly identified as Trojan.
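Stage 2 then reduces to a few lines (a minimal sketch; the helper names are ours):

```python
import numpy as np

def fit_clean_stats(clean_deltas: np.ndarray):
    """Fit the Gaussian assumed for boundary distances of clean samples."""
    return clean_deltas.mean(), clean_deltas.std()

def is_trojan(delta: float, mu: float, sigma: float, alpha: float) -> bool:
    """Flag a sample whose boundary distance deviates more than alpha*sigma from mu."""
    return abs(delta - mu) > alpha * sigma
```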
**Choice of \(\alpha\):** We show how to choose \(\alpha\), given an upper bound \(\gamma\) on the false positive rate tolerable by a user of the DNN model, depending on the size of the set \(D_{user}\). When the size of \(D_{user}\) is sufficiently large, \(\alpha\) can be expressed using the tail distribution of a standard Gaussian \(Q(\cdot)\) and the complementary error function \(\operatorname{erfc}(\cdot)\) (Brandt, 2010). With \(\mu\) and \(\sigma^{2}\) denoting the sample mean and sample variance of entries in \(D_{user}\), and \(Q(\alpha)=\frac{1}{2}\operatorname{erfc}(\frac{\alpha}{\sqrt{2}})\) and \(\operatorname{erfc}(z)=\frac{2}{\sqrt{\pi}}\int_{z}^{\infty}e^{-t^{2}}\operatorname{d}t\), we choose \(\alpha\) such that \(2Q(\alpha)\leq\gamma\), which gives the minimum value of \(\alpha\) as:
\[\alpha=\sqrt{2}\operatorname{erfc}^{-1}(\gamma). \tag{3}\]
If the size of the data set \(D_{user}\) (denoted \(N\)) is quite small, then the sample mean \(\mu\) can be estimated using a \(t\)-distribution with \(\nu=N-1\) degrees of freedom (Kalinin et al., 2017). For a user-defined \(\gamma\), we denote the _critical \(t\)-value_ as \(T_{(1-\gamma/2),\nu}\). This represents the \((1-\frac{\gamma}{2})\) quantile of the \(t\)-distribution. In order to satisfy the maximum tolerable false positive rate, the parameter \(\alpha\) will need to satisfy (Kalinin et al., 2017):
\[\alpha\sigma\geq T_{(1-\gamma/2),\nu}\frac{\mu}{\sqrt{N}}\Rightarrow\alpha=T_ {(1-\gamma/2),\nu}\frac{\mu}{\sigma\sqrt{N}}. \tag{4}\]
The \(t\)-distribution approximates a Gaussian as \(N\) becomes large (Kalinin et al., 2017). In our experiments, we use the Gaussian in Eqn. (3) when \(N>30\), and the \(t\)-distribution in Eqn. (4) otherwise, as suggested in (Zhou et al., 2017).
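Both branches of this choice can be implemented directly with SciPy; the sketch below follows Eqns. (3) and (4), with the \(N>30\) switch taken from the text.

```python
import numpy as np
from scipy.special import erfcinv
from scipy.stats import t as t_dist

def choose_alpha(gamma: float, deltas: np.ndarray) -> float:
    """Outlier threshold alpha: Gaussian tail for N > 30, t-distribution otherwise."""
    N = len(deltas)
    if N > 30:
        # 2 Q(alpha) <= gamma  =>  alpha = sqrt(2) * erfc^{-1}(gamma), Eqn. (3)
        return float(np.sqrt(2.0) * erfcinv(gamma))
    mu, sigma = deltas.mean(), deltas.std(ddof=1)
    # small-sample case: critical t-value with nu = N - 1 degrees of freedom, Eqn. (4)
    return float(t_dist.ppf(1.0 - gamma / 2.0, df=N - 1) * mu / (sigma * np.sqrt(N)))
```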
While Eqns. (3) and (4) provide a mathematical characterization of the threshold on the maximum false positive rate, this threshold can also be empirically determined using ROC curves, which provide a graphical relationship between true and false positive rates for varying values of \(\alpha\) (Figs. 3 and 4 of our evaluations in Sec. 6). We also provide an upper bound on the worst-case false positive rate (false alarm) of MDTD in Appendix A.
## 6. MDTD: Evaluation Results
In this section we evaluate and compare _MDTD_ against four SOTA Trojan detection methods on image, graph, and audio-based input datasets. Our evaluation of MDTD uses exact network structures, parameters, training algorithms reported in the literature (Kalinin et al., 2017; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018). For each case, we provide a brief overview of the datasets, describe our experimental setups, and present our results. We use metrics introduced in Sec. 2.4 to evaluate the performance of MDTD. Our code is available at [https://github.com/rajabia/MDTD](https://github.com/rajabia/MDTD).
### Image Inputs
**Datasets:** We consider the following five datasets: CIFAR10 (Krizhevsky et al., 2015), CIFAR100 (Krizhevsky et al., 2015), SVHN (Krizhevsky et al., 2015), GTSRB (Krizhevsky et al., 2015), and Flower102 (Krizhevsky et al., 2015). The CIFAR10 and CIFAR100 datasets each consist of 60000 color images that belong to one of 10 or 100 classes respectively. SVHN contains 600000 images of house numbers obtained from Google Street View. GTSRB is a dataset containing 52000 images of traffic signs, and Flower102 contains images of 102 common flowers in the UK. In all our experiments, we use an image resolution of 32\(\times\)32, and partition the dataset into 80% for training and 20% for test. For each dataset, we train one clean and six Trojan models with different trigger types (see Fig. 1). We experimentally verified that classification accuracy and attack success rate were not affected by the adversary's choice of target class. Consequently, we set the adversary-desired target class when carrying out a backdoor attack as \(y^{d}=6\).
**DNN Structure:** For CIFAR100, CIFAR10, and Flowers102, we used WideResnet DNNs (Zhu et al., 2017). For GTSRB and SVHN, we used a DNN with 4 convolutional layers with kernel sizes 32, 64 and 64 respectively followed by a maxpooling layer and a fully connected layer of size 4096. We train models for 100 epochs with batch size of 64 using a stochastic gradient descent (SGD) optimizer. We tuned the model and set the learning rate to 0.001 and momentum to 0.9.
**Trojan triggers:** We consider six different Trojan triggers that an adversary can embed into image inputs provided to the DNN: white colored square (BadNets) (Kalinin et al., 2017), image of a coffee mug (Nature) (Kalinin et al., 2018), 'Hello Kitty' image blended into the background (Blend) (Kalinin et al., 2018), multi-colored square (Trojan SQ) (Krizhevsky et al., 2015), colored circular watermark (Trojan WM) (Krizhevsky et al., 2015), and an 'invisible' trigger based on an \(L2\)-regularization of the input image (L2 inv) (Kalinin et al., 2018). Our choice of triggers and training methods for Trojan-embedded models follow the SOTA (Kalinin et al., 2017; Li et al., 2018; Li et al., 2018).
Table 2 compares the classification accuracy (Acc.) at test time on clean samples and the attack success rate (ASR) without any defense for samples embedded with six different Trojan triggers. We observe that the Acc. values of Trojaned models are comparable to those of a clean model, while simultaneously achieving high ASR.
**Setup:** Defenses against Trojans can be broadly categorized into solutions that (a) modify the supervised training pipeline of DNNs with secure training algorithms, (b) detect backdoors in DNN models, or (c) detect and eliminate input samples containing any Trojan trigger. The works in (Kalinin et al., 2017; Li et al., 2018; Li et al., 2018) belong to (a), (Li et al., 2018; Li et al., 2018) belong to (b), and (Li et al., 2018; Li et al., 2018; Li et al., 2018) belong to (c). Our MDTD also belongs to category (c). Hence, we evaluate MDTD against four similar SOTA Trojan detection methods that also aim to detect and eliminate input samples containing a trigger: (i) DCT-based (Li et al., 2018), (ii) STRIP (Kalinin et al., 2018), (iii) spectral signature (Li et al., 2018), and (iv) activation clustering (Hong et al., 2017). We describe each detection method below:
_DCT-based detector:_ This method uses the discrete cosine transform (DCT) to analyze the different frequencies present in an image. The authors of [64] showed that clean and Trojan samples consist of signals of different frequencies, which can be used to effectively distinguish between them. We follow the experiment settings suggested in [64]: we use the complete set of training samples for each dataset and perturb clean samples by adding the Trojan trigger and Gaussian random noise. However, DCT-based detection requires using the entire training set [64], which is computationally expensive.
_STRIP:_ The authors of [15] demonstrated that inputs containing a Trojan trigger are more robust to noise than clean inputs. Therefore, DNN classifiers will be less likely to change their decisions when these inputs are 'mixed' with other clean samples. We follow the setup from [15] in our experiments. We select 20 clean images at random, and 'mix' these with each input sample before providing it to the DNN classifier. An input is considered Trojan if the classifier returns the same output for at least 10 of the 'mixed' variants of the input, and is considered clean otherwise.
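Our reading of this setup, as a sketch: the counts of 20 mixes and 10 agreements follow the description above, while the 50/50 blend weight and helper names are our assumptions.

```python
import numpy as np

def strip_flag(predict, x, clean_pool, n_mix=20, agree_thresh=10, blend=0.5):
    """STRIP-style check: a triggered input tends to keep its label when blended with clean images."""
    idx = np.random.choice(len(clean_pool), size=n_mix, replace=False)
    preds = np.array([predict(blend * x + (1.0 - blend) * clean_pool[j]) for j in idx])
    largest_group = np.bincount(preds).max()   # size of the largest set of agreeing predictions
    return largest_group >= agree_thresh       # True => flagged as Trojan
```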
_Spectral Signature [52], Activation Clustering [8]:_ Spectral signature methods [52] use an insight that clean and Trojan samples have different values of covariances of features learned by the DNN model. Activation clustering leverages the insight that differences between clean and Trojaned samples can be characterized in terms of the values of DNN 'activations', which represent how the model made a classification decision for a given input sample. Both methods take a set of samples (Trojan and clean) as input and partition the set into clean and suspicious clusters using clustering and feature reduction techniques (e.g., PCA, FastICA). The goal of these methods is to detect and eliminate Trojan samples from the training set in order to prevent embedding of a trigger during model training.
_MDTD: (**OURS**)_ MDTD uses estimates of distances to the decision boundary in order to distinguish between clean and Trojan samples. We consider cases when the user has _white-box_ and _black-box_ access to the DNN model. In the white-box setting, MDTD uses FGSM and IFGSM [17] adversarial learning methods to compute the minimum magnitude of noise \(\delta\) required to misclassify a sample. In the black-box setting, MDTD uses the HopSkipJump adversarial learning method [9] to estimate distances (\(\delta\)) to the decision boundary only based on outputs of the DNN model. Since the user does not have information about the Trojan trigger or target output class, MDTD uses a set of 500 clean samples randomly selected from the training set to determine a threshold distance to the decision boundary. A sample is identified as Trojan if \(\delta\) is beyond this threshold.
**Evaluating MDTD:** Table 3 shows the \(F_{1}\)-scores obtained for images embedded with different types of Trojan triggers when using spectral signature [52], activation clustering [8], a DCT-based detector [64], STRIP [15], and _MDTD_. Recall from Eqn. (1) that the \(F_{1}\)-score is defined as \(F_{1}=\frac{2*TPR*TNR}{TPR+TNR}\), with \(TNR=1-FPR\), where \(TPR\) and \(FPR\) denote true and false positive rates.
From Table 3, we observe that MDTD achieves the highest \(F_{1}\)-scores in almost all cases. This indicates that MDTD is simultaneously able to achieve high true positive rates (detection) and small false positive rates (false alarm). MDTD obtains a lower \(F_{1}\)-score for only 2 pairs of cases (Badnets and L2 inv Trojan triggers for Flower102 dataset); we elaborate on these cases in Sec. 7.
Compared to MDTD, SOTA methods that analyze input samples, namely spectral signature [52] and activation clustering [8], have low \(F_{1}\)-scores due to high false positive rates. DCT-based detectors, on
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Clean} & \multicolumn{3}{c|}{**BadNets**} & \multicolumn{3}{c|}{**Blend**} & \multicolumn{3}{c|}{**Nature**} & \multicolumn{3}{c|}{**Trojan SQ**} & \multicolumn{3}{c|}{**Trojan WM**} & \multicolumn{3}{c|}{**L2 inv**} \\ \cline{2-13} & Acc. & ASR & Acc. & ASR & Acc. & ASR & Acc. & ASR & Acc. & ASR & Acc. & ASR \\ \hline CIFAR100 & 55.69\% & NA & 53.01\% & 95.51\% & 52.30\% & 99.99\% & 53.88\% & 100\% & 53.71\% & 100\% & 53.85\% & 100\% & 51.96\% & 99.98\% \\ \hline CIFAR10 & 82.57\% & NA & 81.18\% & 97.4\% & 81.11\% & 99.95\% & 81.52\% & 99.99\% & 81.69\% & 100\% & 81.63\% & 100\% & 81.46\% & 99.95\% \\ \hline GTSRB & 88.57\% & NA & 84.19\% & 91.91\% & 88.55\% & 95.45\% & 87.41\% & 99.98\% & 88.31\% & 99.03\% & 85.24\% & 99.76\% & 87.89\% & 99.06\% \\ \hline SVHN & 89.63\% & NA & 89.46\% & 95.95\% & 99.79\% & 99.28\% & 90.44\% & 99.92\% & 90.26\% & 99.72\% & 90.48\% & 99.84\% & 91.96\% & 99.69\% \\ \hline Flower102 & 50.59\% & NA & 47.25\% & 89.61\% & 46.18\% & 99.12\% & 46.37\% & 100\% & 46.67\% & 100\% & 48.14\% & 100\% & 44.71\% & 97.16\% \\ \hline \end{tabular}
\end{table}
Table 2: This table shows the classification accuracy (Acc.) for clean samples and attack success rate (ASR) of Trojan samples for **six different Trojan triggers** (Badnets, Blend, Nature, Trojan SQ, Trojan WM and L2 inv, noted by **bold** column titles) on five image-based datasets: CIFAR100, CIFAR10, GTSRB, SVHN, and Flower102, in the absence of any Trojan detection mechanism. We observe that the classification accuracy of Trojaned models and clean models is comparable; however, Trojaned models have high values of attack success rate. Note that the ASR value for the clean model is not defined (NA).
Table 3. This table reports \(F_{1}\)-scores of spectral signature (Spec), activation clustering (AC), DCT-based detection (DCT), STRIP, and MDTD (using the FGSM, IFGSM, and HopSkipJump adversarial learning methods) for the five image-based datasets with six different Trojan triggers; the highest \(F_{1}\)-score in each setting is shown in **bold**.
Figure 3: This figure plots ROC curves showing the change in accuracy of Trojan sample detection (true positive rate) with the change in the maximum tolerable false positive rate \(\alpha\) for _MDTD_ using FGSM, IFGSM, and HopSkipJump adversarial learning methods for **six different types of Trojan triggers** (Badnets, Blend, Nature, Trojan SQ, Trojan WM, and L2 inv) and **five image-based datasets** (CIFAR100, CIFAR10, GTSRB, SVHN, and Flower102). In each case, we observe that the threshold \(\alpha\) for the false positive rate plays a critical role in determining values of the true positive rate. Low values of true positive rates despite higher thresholds \(\alpha\) when samples in the Flowers102 dataset are embedded with the BadNets (white square) or L2 inv Trojan triggers could be because the (uniformly) selected clean input samples in this dataset include white-colored flowers.
the other hand, show large variations in \(F_{1}\)-scores across different image-based datasets. Unsurprisingly, DCT-based detectors have very low false positive rates when the number of frequency components in input images is limited (e.g., input samples from SVHN), but they also have very high false positive rates (\(\sim 100\%\)) when input image samples contain a large number of frequency components (e.g., samples from Flower102). In Appendix B, we report true and false positive rates for each case.
**Effect of \(\alpha\)**: Fig. 3 plots ROC curves showing the change in the true positive rate with varying values of the maximum tolerable false positive rate threshold \(\alpha\) for _MDTD_ using FGSM, IFGSM, and HopSkipJump adversarial learning methods for six different Trojan triggers for all five image-based input datasets. For most datasets, and across Trojan trigger types, MDTD consistently accomplishes high true positive rates for smaller values of \(\alpha\).
### Graph Inputs
**Datasets**: We consider four graph datasets (Zhou et al., 2017; Zhang et al., 2018):
_AIDS_: This dataset consists of 2000 graphs representing molecular compounds which are constructed from the AIDS Antiviral Screen Database of Active Compounds. The chemical structure of compounds is used to identify whether a compound belongs to one of the following three categories: confirmed active (CA), confirmed moderately active (CM), and confirmed inactive (CI).
_WinMal_: This dataset consists of 1361 call graphs of Windows Portable Executable (PE) files. Each file belongs to one of two categories:'malware' or 'goodware'. Individual nodes of a call graph represent a function and edges between nodes are function calls.
_Toxicant_: This dataset captures molecular structures (as graphs) of 10000 compounds studied for the effects of chemical interference in biological pathways. Effects are classified as 'toxic' or 'non-toxic'. _COLLAB_: This is a scientific collaboration dataset of 5000 graphs of ego networks of researchers in 3 fields: High Energy Physics, Condensed Matter Physics, and Astro Physics. The graph classification task is to identify which field an ego network belongs to.
**Graph Network Structure:** We use identical network structures, parameters, and setups from (Krishnan et al., 2017) for our experiments, and adopt the Graph Trojan Attack from (Zhou et al., 2017) to generate Trojaned GNNs.
**Evaluating MDTD:** Table 4 shows true positive rate, false positive rate, and \(F_{1}\)-score for the AIDS, COLLAB, WinMal, and Toxicant datasets. We use MDTD with the FGSM and IFGSM adversarial learning methods to estimate distances of samples to a decision boundary using \(100-500\) clean samples (depending on the size of the dataset), and a threshold of \(\alpha=0.15\) on the false positive rate. We compute the smallest value \(\delta^{*}\in\mathbb{R}^{|V|\times d}\) such that \(h(f_{t}(\mathcal{G},X))\neq h(f_{t}(\mathcal{G},X+\delta^{*}))\). As expected, the \(F_{1}\)-score when using IFGSM is typically higher than when using FGSM due to the iterative nature of the IFGSM adversarial learning technique (Krishnan et al., 2017). For the COLLAB dataset, the \(F_{1}\)-score is 1 since MDTD identifies all input samples containing a Trojan trigger correctly (\(TPR=1\)) and does not raise a false alarm for clean samples (\(FPR=0\)).
Fig. 4 presents ROC curves showing change in the true positive rate for different values of the maximum tolerable false positive rate threshold \(\alpha\) for _MDTD_ using the FGSM and IFGSM adversarial learning methods for the AIDS, COLLAB, WinMal, and Toxicant datasets. Fig. 5 plots representations of feature values of outputs at the penultimate layer of the GNN. We collect 200 clean graph inputs and 200 graph inputs embedded with a Trojan trigger for the four graph-based input datasets. Our experiments reveal that clean samples (blue dots) can be easily distinguished from Trojan samples (red dots) in all four graph datasets.
### Audio Inputs
**Datasets**: We use the SpeechCommand (SC) dataset v.0.02 (Zhou et al., 2017), which contains \(65,000\) audio files. Each file is a one-second audio of one of 35 commands. Since some commands are sparse, similar to (Zhou et al., 2017) and (Zhou et al., 2017), we select files belonging to ten classes ("yes", "no", "up", "down", "left", "right", "on", "off", "stop", "go"). The dataset for our experiments then has 30,769 training and 4,074 test samples.
**Network Structure**: We trained an LSTM for audio classification on the extracted mel-spectrogram of each file, which contains information about frequency components in the file (Zhou et al., 2017; Zhou et al., 2017).
**Evaluating MDTD**: Table 5 presents the classification accuracy (Acc.) for clean samples and attack success rate (ASR) for Trojan samples on the SpeechCommand dataset when no defense is used for two types of Trojan triggers- Modified (Mod) and Blend (Bld). We assume that the user has a small set of clean samples (500) and estimates a threshold on the distance to a decision boundary with \(FPR=15\%\) on this set. We also report false positive rates (FPR) and \(F_{1}\)-scores for MDTD (FGSM) and MDTD (IFGSM) for 500 unseen and 500 Trojan samples. We observe that for both triggers, MDTD is able to simultaneously achieve a low FPR and high \(F_{1}\)-score.
Fig. 6 shows the t-SNE representations of feature values at the penultimate layer of the LSTM-based DNN. We use 1000 clean and 1000 audio samples that have been embedded with a Trojan trigger. We consider two different types of triggers-Modified and Blend- for the SpeechCommand dataset. Our experiments reveal that clean
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Task} & \multicolumn{3}{c|}{MDTD (FGSM)} & \multicolumn{3}{c|}{MDTD (IFGSM)} \\ \cline{2-7} & TPR & FPR & \(F_{1}\) & TPR & FPR & \(F_{1}\) \\ \hline AIDS & 84.07\% & 14.49\% & 0.85 & 96.29\% & 12.88\% & **0.91** \\ \hline WinMal & 96\% & 10.53\% & 0.93 & 100\% & 7.89\% & **0.96** \\ \hline Toxicant & 25.10\% & 16.17\% & 0.39 & 100\% & 12.77\% & **0.93** \\ \hline COLLAB & 100\% & 0\% & **1** & 100\% & 0\% & **1** \\ \hline \end{tabular}
\end{table}
Table 4. This table presents the true positive rate (TPR), false positive rate (FPR), and \(F_{1}\)-score of MDTD for **four graph datasets**: AIDS, WinMal, Toxicant, and COLLAB. MDTD (IFGSM) typically achieves a higher \(F_{1}\)-score (in **bold**) due to the iterative nature of the IFGSM adversarial learning technique. The \(F_{1}\)-score for COLLAB is 1 since MDTD identifies all input samples containing a Trojan trigger correctly (\(TPR=1\)) and does not raise any false alarm when inspecting clean samples (\(FPR=0\)).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Trojan} & \multirow{2}{*}{Acc.} & \multicolumn{3}{c|}{MDTD (FGSM)} & \multicolumn{3}{c|}{MDTD (IFGSM)} \\ \cline{3-7} & & ASR & FPR & \(F_{1}\) & FPR & \(F_{1}\) \\ \hline Mod & 94.43\% & 99.95\% & 20.4\% & 0.81 & 22.4\% & 0.80 \\ \hline Bld & 93.31\% & 99.93\% & 20.4\% & 0.81 & 20.4\% & 0.81 \\ \hline \end{tabular}
\end{table}
Table 5. This table shows the classification accuracy (Acc.) for clean samples and attack success rate (ASR) for Trojan samples with **two different Trojan triggers**, Modified (Mod) and Blend (Bld), on the SpeechCommand dataset. We also report false positive rates (FPR) and \(F_{1}\)-scores for MDTD (FGSM) and MDTD (IFGSM). We observe that for both triggers, MDTD is able to simultaneously achieve a low FPR and high \(F_{1}\)-score.
samples (blue dots) can be easily distinguished from Trojan samples (red dots) for both types of triggers.
### MDTD vs. Adaptive Adversary
While the threat model presented in Sec. 3 enables consideration of the performance of MDTD with respect to multiple SOTA methods to detect input samples with Trojan-embedded triggers, we now consider a variant of the adversary that is capable of adapting to the two-stage approach of MDTD (Sec. 5.2). Adaptive adversaries have been studied in other contexts, as in (Stein
model has been updated, then the user would run only **Stage 2** of MDTD (Sec. 5.2) when input samples are provided to the model. This would result in a scenario where a false alarm is raised for clean sample inputs (mislabeled as Trojaned). We carry out experiments to verify this hypothesis and present our results in Table 6 above.
We use the robust learning technique proposed in (Zhu et al., 2019). We set the step size to 0.00784 and examine two noise levels (\(\epsilon=0.01,0.1\)). Table 6 shows the classification accuracy, ASR, and \(F_{1}\)-scores for DNN models trained with two levels of adversarial noise perturbations \(\epsilon\) for different types of triggers on the CIFAR10 dataset. Increasing \(\epsilon\) from 0.01 to 0.1 causes a significant drop in the values of \(F_{1}\)-scores of _MDTD_. However, this comes at the cost of reducing classification accuracy to below 50%. Such low classification accuracy reveals to the user that the DNN model is possibly compromised. The user could then choose to discard the model, rendering the adversary's efforts futile. Experiments on a non-image graph dataset (AIDS) presented in Table 7 yield similar results.
The second type of adaptive attack uses the insight that DNN models are known to classify Trojan trigger-embedded input samples more confidently (Hou et al., 2019; Li et al., 2019), which indicates these samples are likely to be further away from a decision boundary (Kumar et al., 2019). The adversary can thus attempt to disrupt MDTD **Stage 1** by moving Trojan samples closer to a decision boundary (Li et al., 2019). The DNN model is then retrained using the modified Trojan samples and a set of clean samples. We use the technique proposed in (Li et al., 2019) to reduce the confidence of the DNN in predicting output labels of Trojan samples. An adversary carrying out the adaptive attack in this setting gives the true (correct) label to a Trojan sample with probability \(p\) and assigns the (adversary-desired) target label otherwise. We examine two values of \(p\) (\(=0.5,0.7\)), and report our results on the CIFAR10 dataset for six different types of triggers. Table 8 shows that increasing the value of \(p\) marginally affects \(F_{1}\)-scores of MDTD. However, the _ASR_ value drops significantly, rendering such an attack impractical.
On the other hand, the user could decide to run both **Stage 1** and **Stage 2** of MDTD using a subset of clean samples to recalibrate parameters of MDTD. Once parameters of MDTD are calibrated, as shown in our previous results in this section, MDTD effectively identifies and discards input samples that contain a Trojan trigger. Repeated retraining by the adversary does not help to improve its performance either. Thus, we conclude that MDTD is agnostic to iterative actions of an adaptive adversary.
## 7. Discussion
**Choice of adversarial learning methods:** As detailed in Sec. 5.2, in the first stage, MDTD estimates distances of samples to a decision boundary; in the second stage, MDTD applies the described outlier detection procedure to identify input samples that contain a Trojan trigger. The choice of adversarial learning method used to estimate the minimum noise required to change the output of the model could impact the performance of MDTD. For example, in Table 3, in the black-box setting, when using a SOTA adversarial learning method HopSkipJump (Hong et al., 2019), an estimate of such noise is expected to be difficult to obtain. However, MDTD performs quite well even in such a scenario, and obtains high \(F_{1}\)-scores. Unsurprisingly, in the white-box setting, when MDTD uses computationally inexpensive adversarial learning techniques such as FGSM and IFGSM (Fidler et al., 2019), access to model parameters results in even higher \(F_{1}\)-scores.
\(F_{1}\)**-score of MDTD and robustness of Trojan samples:** MDTD obtains a lower \(F_{1}\)-score in two cases: observe the \(F_{1}\)-scores in Table 3 for the Badnets (white square) and L2 inv Trojan triggers on the Flower102 dataset. ROC curves indicate that true positive rates are lower even when selecting a large threshold \(\alpha\) (bottom row of Fig. 3). The Flower102 dataset contains white-colored flowers that are part of clean input samples. On the other hand, as shown in Fig. 1, the Trojan trigger for Badnets is a white square. In this case, overlaying the Badnets trigger on top of a white flower makes it difficult to distinguish between clean samples and samples that contain the trigger. This observation is further reinforced in the last row of Table 1, where clean samples from Flower102 and Trojan samples embedded with the BadNets and L2 inv triggers have comparable values of the average certified radius, but with high variance.
**Natural Backdoors:** Our focus so far has been on embedded backdoor attacks in which an adversary embeds a predefined trigger
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(p\) & Attack & Acc. & ASR & \(F_{1}\): MDTD (IFGSM) \\ \hline \multirow{8}{*}{0.5} & BadNets & 80.14\% & 71.03\% & 0.16 \\ \cline{2-5} & Blend & 80.08\% & 75.42\% & 0.80 \\ \cline{2-5} & Nature & 79.27\% & 81.01\% & 0.88 \\ \cline{2-5} & Trojan SQ & 80.48\% & 78.48\% & 0.89 \\ \cline{2-5} & Trojan WM & 79.61\% & 78.21\% & 0.93 \\ \cline{2-5} & L2 inv & 78.61\% & 84.63\% & 0.69 \\ \hline \multirow{8}{*}{0.7} & BadNets & 80.03\% & 57.95\% & 0.12 \\ \cline{2-5} & Blend & 79.82\% & 65.45\% & 0.82 \\ \cline{2-5} & Nature & 80.43\% & 58.28\% & 0.92 \\ \cline{2-5} & Trojan SQ & 75.75\% & 55.45\% & 0.91 \\ \cline{2-5} & Trojan WM & 79.61\% & 58.40\% & 0.91 \\ \cline{2-5} & L2 inv & 79.84\% & 57.67\% & 0.64 \\ \hline \end{tabular}
\end{table}
Table 8. This table reports \(F_{1}\)-scores of MDTD (IFGSM) for DNN models trained in a way that Trojan samples are assigned their true (correct) label with probability \(p\) and assigned the adversary-desired target label otherwise. We report results for \(p=0.5,0.7\) for the CIFAR10 dataset with **six different Trojan triggers**. Increasing the value of \(p\) from 0.5 to 0.7 results in a marginal change of \(F_{1}\)-score. However, the _ASR_ value drops significantly, rendering the attack impractical for the adversary.
Figure 7. t-SNE visualizations for outputs of the penultimate layer of the DNN for 200 clean samples (blue dots) from the target class (_Class 6_) and 200 Trojaned samples embedded with a _natural backdoor_ (red dots) misclassified to the target class for the CIFAR100 (_left_) and GTSRB (_right_) datasets. Embedding a Trojan trigger into samples from CIFAR100 is easier than into samples from GTSRB (compare the separability of blue and red dots). As a result, input samples from the GTSRB dataset that contain a Trojan trigger will not be effectively classified by the DNN model to the adversary-desired target class. Any detection mechanism, therefore, can be expected to have a high false positive rate, where clean samples are (mistakenly) identified as Trojaned.
into inputs given to DNN models. However, an attacker who has knowledge of parameters of the user model, including all decision boundaries, can learn an adversarial noise which can then be used as a _natural backdoor_[50] to ensure that the DNN model output is the adversary-desired label. Such an adversarial noise is also termed a _universal perturbation_, and it was shown in [38] that it was possible for an adversary to learn a universal perturbation and use it to simulate behavior similar to a backdoor attack.
Fig. 7 depicts values in feature space at intermediate layers of DNNs containing natural backdoors for clean input samples and samples containing a Trojan trigger. The t-SNE visualizations for samples from the CIFAR100 (Fig. 7, _left_) and GTSRB (Fig. 7, _right_) datasets reveal that embedding a Trojan trigger into samples from CIFAR100 is easier than into samples from GTSRB (compare the separability of blue and red dots in Fig. 7). The case of GTSRB shows that the use of natural backdoors may not always be an effective Trojan trigger embedding strategy. Consequently, if clean samples and trigger-embedded samples from GTSRB were to mix, then any detection mechanism can be expected to have a high false positive rate.
**MDTD for Text Models:** Sequence-to-sequence (seq2seq) text models [49] map an input sequence \(x=\{x_{1},\dots,x_{k}\}\) to an output sequence \(y=\{y_{1},\dots,y_{n}\}\), possibly of different length. A backdoor attack on a text model consists of using a specific term or word in the input sequence as the trigger. Similar to Trojaned models for images and graphs, the adversary's objective is to ensure that the Trojaned text model behaves normally for text inputs that do not contain the trigger word or phrase, and produces an adversary-desired output for inputs that contain it. The authors of [6] showed that Trojan triggers could be embedded into seq2seq text models that perform text summarization tasks. The presence of a trigger in the input produced a summarized output that had different properties than desired. For example, a trigger in the input text caused the output text to have a different 'sentiment' than in the case when the trigger was absent (clean input). A metric that is widely used to determine the quality of text summarization is the _ROUGE_ score [33], which compares the output of a text model with a human-written summary. Although the trigger word in a text summary can be identified by brute force, such a procedure will be computationally expensive. Further, the state of the art in Trojan detection for text models focuses on text classification tasks [47].
We believe that MDTD can be used together with _ROUGE_ scores to efficiently detect a trigger word in inputs to text models when a user has access only to a small number of clean inputs. This can be accomplished by using adversarial learning methods to estimate a threshold on the number of words removed that results in the maximum change in _ROUGE_ scores. This remains an open problem.
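As a rough illustration of this idea, the following sketch implements a leave-one-word-out probe with a simple unigram ROUGE-1 score; the `summarize` callable and the `margin` threshold (which would be calibrated on clean inputs) are hypothetical stand-ins, since the actual procedure remains an open problem.

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram ROUGE-1 F1 between two whitespace-tokenized strings."""
    c, r = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def suspect_trigger_words(summarize, text, reference, margin):
    """Flag words whose removal shifts the summary's ROUGE-1 score by more
    than `margin`. `summarize` is a hypothetical seq2seq summarizer callable."""
    words = text.split()
    base = rouge1_f(summarize(text), reference)
    flagged = []
    for i, w in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])
        if abs(rouge1_f(summarize(ablated), reference) - base) > margin:
            flagged.append(w)
    return flagged
```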
**Black-box Adversarial Learning for DNNs with Graph and Text Inputs:** Multiple black-box adversarial learning techniques have been proposed for DNNs with image inputs [9; 7; 39]. However, black-box adversarial methods for _classification_ tasks on DNN models with text inputs such as those in [14; 41] are not directly applicable to the other tasks including _text summarization_. For DNNs with graph inputs, to the best of our knowledge, the only black-box adversarial learning method was presented in [40]. This approach considered the removal or addition of nodes and edges as a 'noise' term, which does not apply to our framework, since we assume that triggers are embedded into feature values of nodes. Suitably modifying SOTA black-box adversarial learning methods such as HopSkipJump [9] beyond image-based inputs to use MDTD for Trojan input detection remains a nontrivial open problem.
## 8. Related Work
We give an overview of defenses against backdoor attacks on DNN models for image and graph inputs.
**Images:** Defenses against backdoor attacks on DNN models for image-based inputs fall into one of three categories: (i) eliminating the backdoor from the model, (ii) detection mechanisms to identify Trojaned models, and (iii) detecting inputs into which a Trojan has been embedded. Eliminating the backdoor from the DNN model is typically accomplished by pruning the model [34; 5] to remove a Trojan trigger or using a small number of samples to retrain the model [54]. Detection mechanisms to identify Trojaned models involve exhaustively examining a set of models using adversarial learning [17] to reverse engineer a trigger, e.g., Neural Cleanse [55]. The authors of [46] proposed an optimization strategy to overcome the challenge of exhaustively examining the set of DNN models. A GAN-based method to synthesize Trojan triggers was proposed in [66], which reduced the number of samples required to detect the trigger. Methods to detect input samples into which a Trojan trigger has been embedded filter out suspicious samples at training or inference time. The authors of [52] proposed the spectral signature method, which uses the singular value decomposition of a covariance matrix associated with sample representations to compute an outlier score. Similar to the spectral signature, activation clustering [8] aims to detect Trojan samples by analyzing neural network activations. Unlike MDTD, the spectral signature and activation clustering methods are designed for the training phase and their goal is to eliminate Trojan samples from the training set. However, they may mistakenly eliminate clean samples, which limits their usability as a Trojan sample detection mechanism in the inference phase. A technique called STRIP was proposed in [15], where DNN outputs were used to distinguish clean from Trojan samples. A DCT-based detector in [64] used frequency analysis to distinguish between clean and Trojan samples. The above methods are either computationally expensive, as shown in [32], or are restricted to image-based inputs. In comparison, MDTD requires limited computational resources and is applicable to a wide variety of input domains.
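For reference, the spectral signature score described above can be sketched in a few lines; this is a minimal illustration of the cited method, assuming sample representations are available as a matrix with one row per sample.

```python
import numpy as np

def spectral_signature_scores(reps):
    """Outlier score per sample: squared projection of the centered
    representations onto their top right singular vector."""
    centered = reps - reps.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (centered @ vt[0]) ** 2  # largest scores are pruned as suspected Trojans
```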
**Graphs:** Defenses against backdoor attacks on GNNs have been less explored. A smoothed classifier was used in [65] to generate multiple subgraphs by sampling a subset of vertices and edges from the original GNN. A majority-based rule was then used to determine the label associated with each subgraph. A preprocessing step was used to identify nodes of the graph into which an adversary had embedded a Trojan trigger in [58]. This work used the insight that the presence of an edge between two 'dissimilar' nodes was an indication that one of the nodes was Trojaned. Different from these works, MDTD updates features associated with nodes in a GNN whenever nodes and edges of a Trojaned subgraph are altered. One other approach to determine whether a GNN has been Trojaned or not is through the use of an explainability score [25]. This method
uses a small set of clean samples to determine a threshold explainability score; an input to the GNN is then classified as Trojan if its explainability score is greater than this threshold.
## 9. Conclusion
In this paper, we developed a multi-domain Trojan detector (MDTD), which was designed to detect Trojan trigger-embedded input samples in image, graph, and audio-based datasets at testing time. MDTD leveraged the insight that samples containing a trigger are typically located farther away from a decision boundary than clean samples to determine whether a given sample contains a trigger. Qualitatively, we demonstrated this insight by showing that t-SNE visualizations revealed different values of features corresponding to clean and Trojaned samples. Quantitatively, we used adversarial learning techniques to estimate the distance of samples to a decision boundary to infer whether a sample was Trojaned or not. We evaluated MDTD against four state-of-the-art Trojan detection methods on five widely used image datasets: CIFAR100, CIFAR10, GTSRB, SVHN, and Flower102. We also evaluated MDTD on four graph-based datasets (AIDS, WinMal, Toxicant, and COLLAB) and one audio dataset (SpeechCommand). In all cases, MDTD effectively identified input samples containing different types of Trojan triggers. We further evaluated MDTD against an adaptive adversary that trains a robust model to increase the distance of samples to a decision boundary. In this case, we showed that a reduction in the detection rate of MDTD below 60% is accompanied by a severe reduction in the clean sample classification accuracy of the Trojaned DNN (to \(<50\%\)), making the model unfit for use.
###### Acknowledgements.
This work was supported by the Office of Naval Research via grant N0001-23-1-2386, Air Force Office of Scientific Research via grants FA9550-20-1-0074 and FA9550-23-1-0208, and US National Science Foundation via grant CNS-2153136. We thank the anonymous shepherd for their constructive feedback. We thank Aiswarya Janandmann, Reeya Pimple, and Dinuka Sahabandu from the University of Washington for their help and discussions. We also acknowledge Prof. Andrew Clark from Washington University in St. Louis, Prof. Sukarno Hertoguno from Georgia Tech, and Prof. Sreeram Kannan from University of Washington for insightful discussions.
|
2308.15840 | MSGNN: Multi-scale Spatio-temporal Graph Neural Network for Epidemic Forecasting | Infectious disease forecasting has been a key focus and proved to be crucial in controlling epidemics. A recent trend is to develop forecasting models based on graph neural networks (GNNs). However, existing GNN-based methods suffer from two key limitations: (1) Current models broaden receptive fields by scaling the depth of GNNs, which is insufficient to preserve the semantics of long-range connectivity between distant but epidemic-related areas. (2) Previous approaches model epidemics within a single spatial scale, while ignoring the multi-scale epidemic patterns derived from different scales. To address these deficiencies, we devise the Multi-scale Spatio-temporal Graph Neural Network (MSGNN) based on an innovative multi-scale view. To be specific, in the proposed MSGNN model, we first devise a novel graph learning module, which directly captures long-range connectivity from trans-regional epidemic signals and integrates them into a multi-scale graph. Based on the learned multi-scale graph, we utilize a newly designed graph convolution module to exploit multi-scale epidemic patterns. This module allows us to facilitate multi-scale epidemic modeling by mining both scale-shared and scale-specific patterns. Experimental results on forecasting new cases of COVID-19 in the United States demonstrate the superiority of our method over the state of the art. Further analyses and visualization also show that MSGNN offers not only accurate, but also robust and interpretable forecasting results. | Mingjie Qiu, Zhiyi Tan, Bing-kun Bao | 2023-08-30T08:21:56Z | http://arxiv.org/abs/2308.15840v1 |

# MSGNN: Multi-scale Spatio-temporal Graph Neural Network for Epidemic Forecasting
###### Abstract
Infectious disease forecasting has been a key focus and proved to be crucial in controlling epidemics. A recent trend is to develop forecasting models based on graph neural networks (GNNs). However, existing GNN-based methods suffer from two key limitations: (1) Current models broaden receptive fields by scaling the depth of GNNs, which is insufficient to preserve the semantics of long-range connectivity between distant but epidemic-related areas. (2) Previous approaches model epidemics within a single spatial scale, while ignoring the multi-scale epidemic patterns derived from different scales.
To address these deficiencies, we devise the Multi-scale Spatio-temporal Graph Neural Network (MSGNN) based on an innovative multi-scale view. To be specific, in the proposed MSGNN model, we first devise a novel graph learning module, which directly captures long-range connectivity from trans-regional epidemic signals and integrates them into a multi-scale graph. Based on the learned multi-scale graph, we utilize a newly designed graph convolution module to exploit multi-scale epidemic patterns. This module allows us to facilitate multi-scale epidemic modeling by mining both scale-shared and scale-specific patterns. Experimental results on forecasting new cases of COVID-19 in the United States demonstrate the superiority of our method over the state of the art. Further analyses and visualization also show that MSGNN offers not only accurate, but also robust and interpretable forecasting results.
**Keywords:** Epidemic Forecasting, Graph Neural Networks, Multi-scale Modeling, Graph Structure Learning, Spatio-temporal Forecasting.
## 1 Introduction
Infectious diseases pose a serious hazard to global public health. For decades, epidemic modelers have struggled to forecast the spread of emerging infectious diseases, such as the Zika virus, the Ebola virus, and most recently, the COVID-19 virus. To bring a virus under control, accurate forecasting of the epidemic is of great significance for both individuals and administrators.
Recently, graph neural networks (GNNs) based methods have emerged as a promising way of combating epidemics. The core idea of these approaches is modeling the epidemic signals between different areas to learn the underlying patterns in historical data. For example, earlier works (Rodriguez et al, 2021; Kapoor et al, 2020; Gao et al, 2021) view geographic adjacency as the key factor of trans-area epidemic signals, and directly apply graph convolution on a static spatial graph. Some follow-on studies (Ye et al, 2021; Zheng et al, 2021; Panagopoulos et al, 2021; Deng et al, 2020) extend the spatial graphs to spatio-temporal graphs, where the trans-area signals are dynamically measured. Most recently, a technical trend is to incorporate more epidemic-related factors (e.g. social network data) into trans-area epidemic signals. These factors enrich the existing spatio-temporal graph representations (Fritz et al, 2022; Wang et al, 2022; Xie et al, 2023). Benefiting from the learned epidemic evolution patterns, existing GNN-based methods achieve promising performance in the epidemic forecasting task.
Despite their effectiveness, it is noticeable that most of the previous GNN-based methods model epidemics at a single spatial scale. We suggest these single-scale methods fall short in two respects: (1) **Failing to preserve long-range connectivity**. Existing works stack multiple convolution layers to aggregate information step by step, which dilutes the long-range connectivity between distant areas. As shown in the left side of Figure 1, in the single-scale view, the epidemic transmission path from city A to city B goes through five hops, which means the trans-area epidemic signal may mix with noise from four other cities. More importantly, the receptive fields of existing GNN-based methods are significantly limited due to _oversmoothing_ issues (Chen et al, 2020). These studies fail to preserve the semantics of long-range connectivity, i.e., the high-order relation between distant but epidemic-related areas, leading to limited performance. (2) **Ignoring the multi-scale epidemic patterns**. Previous GNN-based works only exploit epidemic patterns within a single spatial scale. However, an important fact has been ignored: the epidemic evolves simultaneously at different scales and reflects multiple epidemic evolution patterns. For example, the fine-grained epidemic evolution depicts local epidemic evolution patterns, while the coarse-grained evolution contains broader regional epidemic patterns. Such multi-scale patterns provide useful
information that can aid accurate epidemic forecasting. Therefore, it is important to extract epidemic patterns from different scales and conduct multi-scale modeling. Unfortunately, none of these works take these multi-scale epidemic patterns into consideration.
To this end, we propose the Multi-scale Spatio-temporal Graph Neural Network (MSGNN) for infectious disease forecasting. The model contains two components to address the aforementioned challenges correspondingly: (1) **Graph Structure Learning Module**. Since the semantics of long-range connectivity are hard to preserve through single-scale modeling, in this module, we construct a multi-scale spatio-temporal graph to deal with different epidemic relations. As shown on the right side of Figure 1, the proposed multi-scale spatio-temporal graph contains two scales. At the micro scale, we define the trans-area signal as a short-range dependency, which is utilized to express fine-resolution spatial topology. Furthermore, considering the broader spatial effect of the macro scale, we view trans-regional signals between states as long-range connectivity, which significantly reduces the number of epidemic transmission hops between distant areas. All signals are dynamically self-adjusted along with epidemic evolution, forming a multi-scale spatio-temporal graph as output. (2) **Multi-scale Graph Convolution Module**. Considering the discrepancy between different spatial scales, directly conducting inter-scale aggregation is not a viable option. Therefore, we devise a multi-scale information aggregation scheme to distill scale-specific and scale-shared parts from multi-scale patterns. This scheme first applies scale-specific message passing to obtain aggregated features at each scale. Then, the scheme distills the scale-shared epidemic patterns from the multi-scale spatio-temporal graph. Finally, a multi-scale fusion block is introduced to integrate multi-scale features based on scale-shared patterns. These fused representations serve as the final output for forecasting.
Figure 1: A comparison between single-scale and multi-scale epidemic spatio-temporal graphs. The additional macro scale helps model long-range connectivity, which significantly reduces the number of hops on the epidemic transmission path between distant areas.
To verify the effectiveness of the proposed model, we choose the latest and one of the most prevalent infectious diseases, the COVID-19 virus, as the validation epidemic. We conduct a comprehensive result evaluation on the COVID-19 dataset of the US, which contains two administrative levels, i.e. the county level for the micro scale and the state level for the macro scale. The result shows that our model outperforms state-of-the-art methods in terms of both accuracy and robustness. We also organize a separate discussion part to explore the effect of three spatial dependencies. Our contributions can be summarized as follows:
* We propose an innovative multi-scale forecasting framework called MSGNN, which provides a novel epidemic modeling view that takes multiple administrative levels into account.
* We design a multi-scale graph structure learning module to dynamically model epidemic relations at different scales, then further form a multi-scale spatio-temporal graph for epidemic forecasting.
* We design a multi-scale graph convolution network powered by a novel multi-scale information fusion scheme. This scheme distinguishes the scale-shared and scale-specific patterns, then further encodes them into representations for forecasting.
## 2 Related Work
Many works have put their efforts into epidemic forecasting; they can be generally categorized into three paradigms: compartmental models, time series forecasting methods, and spatio-temporal forecasting methods.
The compartmental model is a classical modeling framework in epidemiology and is widely adopted in the infectious disease forecasting task (Chang et al, 2021; Shuvo et al, 2020; Kargas et al, 2021; Qian et al, 2020). The main idea of the compartmental model is to divide the population into different compartments such as S (susceptible), I (infectious), and R (recovered), and then model the transitions among these compartments with differential equations (He et al, 2020; DUBEY et al, 2013). In response to the COVID-19 outbreak, many compartmental models have emerged that are specially designed according to the characteristics of the COVID-19 virus. The main advantage of these models lies in their outstanding interpretability (Arik et al, 2020; Lopez and Rodo, 2021), as they are able to produce interpretable coefficients such as the infection rate, death rate, etc. However, existing compartmental models are limited to a pre-fixed basic reproduction rate, failing to learn the epidemic evolution patterns at different stages.
Time series forecasting methods reshape epidemic data into time series, turning the epidemic forecasting task into an auto-regressive problem. Common time series forecasting methods include classical statistical methods, hybrid methods and deep learning methods. The widely adopted classical statistical methods include Auto Regressive Integrated Moving Average (ARIMA) (Maleki et al, 2020; Ceylan, 2020), Multi-Linear Regression (MLR)
(To et al, 2021), etc. The hybrid methods are combinations of classical methods and other methods such as machine learning (Smyl, 2020; Montero-Mango et al, 2020) and boosting trees (Chen and Guestrin, 2016). With the advancement of deep learning, some studies started to introduce neural networks such as the Long Short-Term Memory (LSTM) (Mussumeci and Codego Coelho, 2020; Kara, 2021; Wang et al, 2020) network to model epidemic evolution. Time series forecasting methods are capable of handling complicated temporal dependencies. However, the future epidemic trend of a specific region is not only conditioned on the local spreading, but also influenced by transmissions from other epidemic-related areas. Existing time series forecasting methods ignore the potential spatial dependencies between epidemic-related areas, leading to limited performance.
Recently, graph neural networks have attracted attention because of their strong capability in dealing with non-Euclidean data (Derr et al, 2020; Jin et al, 2021). With the assistance of GNNs, some studies extend time series forecasting models with spatial interactions, forming a new paradigm called spatio-temporal forecasting (Cao et al, 2020; Wu et al, 2020; Lin et al, 2020; Fang et al, 2020). This paradigm is also applied in traffic forecasting (Guo et al, 2021; Zhao et al, 2020; Yu et al, 2018) and weather forecasting (SHI et al, 2015). Existing studies also try to leverage spatio-temporal forecasting in epidemic forecasting (Rodriguez et al, 2021; Kim et al, 2020; Jin et al, 2021). Built on time series, these methods are capable of modeling spatial dependencies. However, due to the oversmoothing issue, their aggregation fields are limited to within two hops, which means long-range connectivity is ignored. Moreover, the influence of multi-scale epidemic patterns is also neglected in existing epidemic modeling works.
To the best of our knowledge, our work is the first spatio-temporal forecasting study founded on multi-scale epidemic modeling. Based on the multi-scale modeling view, we are capable of handling volatile long-range connectivity and multi-scale epidemic evolution patterns.
## 3 Preliminary
**Epidemic signals.** The epidemic development of a specific region is not only determined by local disease spreading, but also influenced by other related areas. We summarize two kinds of epidemic signals: first, the short-range dependency between nearby areas, denoted by the trans-area epidemic signal matrix \(A_{c}\); second, the underlying long-range connectivity between distant regions, denoted by the trans-regional epidemic signal matrix \(A_{s}\).
**Multi-scale spatio-temporal graph.** Multi-scale spatio-temporal graph is a hierarchical graph containing multiple scales, denoted by \(G=(V,E)\). The node set \(V\) can be further decomposed into \(V=V_{c}\cup V_{s}\), where \(V_{c}\) and \(V_{s}\) respectively represent node set of micro and macro scale. Similarly, the edge set \(E\) can also be divided into \(E=E_{s}\cup E_{c}\) to represent different edges at multiple
scales. Notice both \(V\) and \(E\) are time varying, which is also an important feature of spatio-temporal graph.
**Infectious Disease Forecasting.** Given a region \(i\) and time step \(t\), we define the input series \(X_{i}(t)\in\mathbb{R}^{L_{b}\times C}\) as \(X_{i}(t)=(x_{i,t-L_{b}+1},x_{i,t-L_{b}+2},...,x_{i,t})\), where \(L_{b}\) is the number of look-back time steps, \(C\) represents the feature dimension of the input series, and \(x_{i,t}\in\mathbb{R}^{C}\) is the daily feature at time step \(t\). The output series is defined similarly; we denote it as \(\hat{Y}_{i}(t)\in\mathbb{R}^{L_{a}\times D}\), where \(\hat{Y}_{i}(t)=(\hat{y}_{i,t+1},\hat{y}_{i,t+2},...,\hat{y}_{i,t+L_{a}})\), \(L_{a}\) is the number of look-ahead time steps, \(D\) is the output feature dimension, and \(\hat{y}_{i,t+1}\) is the output value of new cases at time step \(t+1\). For simplicity, we omit the time step identifier \(t\) in the rest of the paper.
## 4 Methods
As demonstrated in Figure 2, the proposed method is a multi-scale epidemic modeling process, which contains two main components. The pipeline starts with the graph learning module, which takes the state-level and county-level epidemic data as input. The module first conducts temporal convolution to obtain node representations, then the long- and short-range modeling blocks are applied to produce the epidemic signals for the macro and micro scales, respectively. Based on the epidemic signals and node representations, we construct a multi-scale spatio-temporal graph. The next part is the multi-scale graph convolution module, which adopts a new multi-scale aggregation scheme. It first applies message passing to embed scale-specific patterns, forming region and local features. Then, a learner is developed to distill scale-shared patterns through graph node representations. Afterwards, the region and local features are further integrated by a newly designed multi-scale fusion block, which takes these
Figure 2: Pipeline of the proposed MSGNN. The pipeline consists of two main components, i.e. the graph learning module and the multi-scale graph convolution module.
patterns into consideration. Finally, a forecast phase is applied to generate forecasting results.
### Graph Learning Module
The graph learning module is implemented by three components: the temporal convolution blocks, the long-range modeling block, and the short-range modeling block.
#### 4.1.1 Temporal Convolution Block
The temporal convolution module is designed to extract local temporal features from historical epidemic time series.
For each scale, our goal is to encode the input time series \(X\) into an encoded vector \(H\). Theoretically, the temporal convolution block can be replaced by any recurrent structure such as an RNN, GRU, or LSTM. However, limited by fixed kernel sizes, these naive approaches fail to dynamically encode volatile epidemic dynamics. To this end, we customize a more flexible temporal convolution block based on N-Beats (Oreshkin et al, 2020) to reach our best practice. Unlike the naive N-Beats, which only allows a single time series as input, we additionally introduce more epidemic-related series and external features, as shown in Equation 1.
\[\begin{split} X_{s}&=\text{FC}\left(x_{t-l_{b}:t}^{s},d_{t-l_{b}:t}\right)\oplus\mathcal{I}_{s}\in\mathbb{R}^{N\times L_{b}\times C},\\ X_{c}&=\text{FC}\left(x_{t-l_{b}:t}^{c},d_{t-l_{b}:t }\right)\oplus\mathcal{I}_{c}\in\mathbb{R}^{M\times L_{b}\times C};\end{split} \tag{1}\]
where \(x_{t-l_{b}:t}\) is the input epidemic time series, \(d_{t-l_{b}:t}\) is another time series including weekday, holiday and date information, and \(\mathcal{I}\) is the location identity embedding for each county or state. We first utilize a fully connected layer to integrate the date features into the original time series, then an augmentation operation is applied to introduce the location identity embedding. This customized procedure allows the block to learn epidemic patterns produced by specific times and locations, which helps boost the performance. Next, we apply the temporal convolution scheme as follows.
\[\begin{split} H_{s}&=\text{MaxPool}\left[\text{TC} (X_{s})\right]\in\mathbb{R}^{N\times C^{\prime}},\\ H_{c}&=\text{MaxPool}\left[\text{TC}(X_{c})\right] \in\mathbb{R}^{M\times C^{\prime}};\end{split} \tag{2}\]
where \(TC\) is a temporal encoder based on N-Beats. We further conduct a max-pooling operation on the backcast output of the temporal encoder to produce the encoded features \(H_{s}\) and \(H_{c}\).
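The input assembly and pooling of Equations 1-2 can be sketched as follows; toy dimensions are assumed, and a per-step linear map stands in for the N-Beats encoder \(TC\).

```python
import numpy as np

rng = np.random.default_rng(0)
N, L_b = 50, 14                      # states, look-back window (toy sizes)
F_x, F_d, C, C_out = 2, 3, 8, 16     # series, date, fused, and output dims

x = rng.normal(size=(N, L_b, F_x))   # epidemic series (e.g. cases, deaths)
d = rng.normal(size=(N, L_b, F_d))   # date features (weekday, holiday, ...)

W_fc = rng.normal(size=(F_x + F_d, C))
fused = np.concatenate([x, d], axis=-1) @ W_fc              # FC(x, d): (N, L_b, C)
ident = np.repeat(rng.normal(size=(N, 1, 4)), L_b, axis=1)  # location identity, dim 4
X_s = np.concatenate([fused, ident], axis=-1)               # Eq. (1): (N, L_b, C+4)

# Stand-in temporal encoder (a per-step linear map) in place of N-Beats,
# followed by max-pooling over the look-back window as in Eq. (2).
W_tc = rng.normal(size=(X_s.shape[-1], C_out))
H_s = np.maximum(X_s @ W_tc, 0).max(axis=1)                 # (N, C_out)
print(H_s.shape)
```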
#### 4.1.2 Long-Range Modeling Block
The long-range modeling block is developed to directly capture the long-range connectivity between distant areas. This block takes the output of macro scale temporal convolution block \(H_{s}\) as input, and produces an adjacency matrix \(A_{s}\) representing the connectivity between different states.
For specified states \(l,k\in V_{s}\), we denote the pairwise connectivity as \([A]_{s}^{l,k}\). To dynamically measure the volatile long-range connectivity, we choose the real-time node representation \(H_{l},H_{k}\) as the key factor, which can be expressed as Equation 3.
\[[A]_{s}^{l,k}=f\left[\theta_{s}^{T}\cdot\text{Concat}(H_{l},H_{k})\right]; \tag{3}\]
where \(\theta_{s}\in\mathbb{R}^{2C^{\prime}}\) is a trainable mapping vector and \(f\) is the activation function. The idea of long-range connectivity modeling is to map the concatenated features into a scalar edge weight. Due to the evolving node representations, the connectivities are also self-adjusted according to the real-time epidemic situation. In this block, the obtained long-range connectivity is determined by the epidemic relations among different states. For those distant but epidemic-related areas, these newly introduced long-range connectivities create directly connected graph edges, thus breaking the spatial limitations.
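A minimal sketch of Equation 3, assuming \(H\) holds one row per state and using \(\tanh\) as a placeholder for the generic activation \(f\):

```python
import numpy as np

def long_range_adjacency(H, theta, act=np.tanh):
    """Eq. (3): edge weight for every state pair (l, k) from the
    concatenation of their representations. H: (N, C'); theta: (2C',)."""
    N = H.shape[0]
    left = np.repeat(H, N, axis=0)    # H_l repeated for each partner k
    right = np.tile(H, (N, 1))        # H_k cycled for each l
    return act(np.concatenate([left, right], axis=1) @ theta).reshape(N, N)

# Toy usage
rng = np.random.default_rng(0)
A_s = long_range_adjacency(rng.normal(size=(5, 8)), rng.normal(size=16))
```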
#### 4.1.3 Short-Range Modeling Block
Aside from the long-range connectivity, we also elaborate on another trans-regional signal called short-range dependency. The short-range dependency are estimated at micro scale, which depicts the fine-resolution geographic topology for county level. Moreover, because of finer resolution, the short-range dependencies may be affected by other factors, such as geographic distance, administration boundary, etc.
For counties \(i,j\) at the micro scale, we denote the short-range dependency as \([A]_{c}^{i,j}\). Unlike the long-range connectivity, we introduce extra signals to assist in constructing the adjacency matrix, which can be expressed as Equation 4.
\[[A]_{c}^{i,j}=f\left[\theta_{c}^{T}\cdot\text{Concat}(H_{i},H_{j},\mathcal{I}_{ i},\mathcal{I}_{j})\right]+\frac{1}{\sqrt{\delta_{i,j}}}; \tag{4}\]
where \(\theta_{c}\in\mathbb{R}^{4C^{\prime}}\) is a trainable mapping vector, \(f\) is the activation function, \(\mathcal{I}\) is the identity embedding for each location, and \(\delta_{i,j}\) is the geographic distance between counties \(i\) and \(j\). Apart from measuring the epidemic relation, we additionally add the identity information to introduce the administrative boundary information, which means counties in the same state will create stronger short-range dependencies. Moreover, since the micro scale features narrower spatial effects, we add the geographic distance term \(1/\sqrt{\delta_{i,j}}\) to express the spatial limitations. Finally, to avoid a noisy and redundant graph structure, we utilize \(ReLU\) as the activation function to completely shut down the adjacency edge between irrelevant counties.
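Equation 4 can be sketched analogously; the small epsilon added to the distance is an assumption to keep the diagonal finite and is not part of the original formulation:

```python
import numpy as np

def short_range_adjacency(H, I, dist, theta):
    """Eq. (4): ReLU-gated edge weights plus the inverse-sqrt distance term.
    H: (M, C') county features; I: (M, d) identity embeddings;
    dist: (M, M) geographic distances; theta: (2*(C'+d),)."""
    M = H.shape[0]
    feats = np.concatenate([H, I], axis=1)
    left = np.repeat(feats, M, axis=0)
    right = np.tile(feats, (M, 1))
    scores = np.concatenate([left, right], axis=1) @ theta
    A = np.maximum(scores, 0.0).reshape(M, M)  # ReLU shuts off irrelevant pairs
    return A + 1.0 / np.sqrt(dist + 1e-6)      # eps (assumed) keeps diagonal finite
```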
### Multi-scale Graph Convolution Module
After the graph learning module, we obtain a new multi-scale spatio-temporal graph where each node represents a region and each edge represents an epidemic
signal. To model the epidemic evolution, a natural idea is to use a graph convolution network to aggregate information. To this end, we propose the multi-scale graph convolution module. This module is composed of the scale-specific message passing blocks, a scale-shared pattern learner and a multi-scale fusion block, which together form a novel multi-scale information fusion scheme. In this module, the learned multi-scale spatio-temporal graph serves as the input, and the module outputs an epidemic representation for final forecasting.
#### 4.2.1 Scale-specific Message Passing Block
To deal with the scale-specific epidemic patterns, we implement the message passing block to handle the epidemic information within a single scale. The block takes the node representations \(H\) and adjacency matrix \(A\) as input. However, it is worth noting that \(A\) is a diagonally dominant matrix, which means the self-loops are much stronger than other connections. Since self-loops may harm the information aggregation between adjacent nodes (Kipf and Welling, 2017), we obtain new adjacency matrices \(\tilde{A}_{s}\in\mathbb{R}^{N\times N}\) and \(\tilde{A}_{c}\in\mathbb{R}^{M\times M}\) by zeroing the diagonal and performing normalization, as shown in Equation 5:
\[\hat{A}=A-\text{diag}\left(A\right),\quad\tilde{A}=\hat{D}^{-\frac{1}{2}}\hat{A }\hat{D}^{-\frac{1}{2}}; \tag{5}\]
where \(\hat{D}\) is the degree matrix of adjacency matrix \(\hat{A}\). Then, we apply graph convolution at micro scale and macro scale respectively, so as to generate the local feature \(H_{c}^{{}^{\prime}}\in\mathbb{R}^{M\times C^{{}^{\prime}}}\) and region feature \(H_{s}^{{}^{\prime}}\in\mathbb{R}^{N\times C^{{}^{\prime}}}\) as Equation 6:
\[\begin{split} H_{c}^{\prime}&=\text{GCN}(\tilde{A}_ {c},H_{c})=\tilde{A}_{c}f\left(\tilde{A}_{c}H_{c}U_{1}\right)U_{2}\in\mathbb{R }^{M\times C^{{}^{\prime}}};\\ H_{s}^{\prime}&=\text{GCN}(\tilde{A}_{s},H_{s})= \tilde{A}_{s}f\left(\tilde{A}_{s}H_{s}U_{3}\right)U_{4}\in\mathbb{R}^{N\times C ^{{}^{\prime}}};\end{split} \tag{6}\]
where \(U_{1},U_{2},U_{3},U_{4}\in\mathbb{R}^{C^{\prime}\times C^{\prime}}\) are trainable weight matrices and \(f\) is the activation function. Here we apply a two-layer GCN for both the micro scale and the macro scale. Note that the aggregation weight matrices for the micro and macro scales are different, allowing the model to learn different epidemic patterns specific to each scale. In this way, the epidemic information is passed through the multi-scale graph, generating local features at the micro scale and region features at the macro scale.
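A compact sketch of Equations 5-6, with \(\tanh\) as a placeholder activation:

```python
import numpy as np

def normalize(A):
    """Eq. (5): remove self-loops, then symmetric degree normalization."""
    A_hat = A - np.diag(np.diag(A))
    deg = A_hat.sum(axis=1)
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

def gcn_two_layer(A_tilde, H, U1, U2, act=np.tanh):
    """Eq. (6): two graph convolution layers with scale-specific weights."""
    return A_tilde @ act(A_tilde @ H @ U1) @ U2
```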
#### 4.2.2 Scale-shared Pattern Learner
In this block, we utilize the obtained node representations to measure the scale-shared epidemic evolution pattern between the state and county levels. We first construct a transfer matrix to identify the administrative affiliation between counties and the states they belong to, denoted as \(Tran\in\mathbb{R}^{M\times N}\) as Equation 7:
\[\left[Tran\right]_{i,l}=\left\{\begin{array}{l}\frac{1}{\mathcal{N}(l)},\text { if county }i\text{ affiliated to state }l,\\ 0,\text{ else};\end{array}\right. \tag{7}\]
where \(\mathcal{N}(l)\) is the number of counties that belong to state \(l\). Next, we obtain the multi-scale epidemic patterns from different scales. For the macro scale, we directly use the node representations as the epidemic patterns. For the micro scale, we aggregate the node representations of counties that belong to the same state, as shown in Equation 8:
\[H_{c}^{Tran}=(Tran)^{T}(H_{c})\in\mathbb{R}^{N\times C^{\prime}} \tag{8}\]
According to the transfer matrix \(Tran\), the produced \(H_{c}^{Tran}\) is an averaged representation from all counties, e.g. the \(l\)-th row of \(H_{c}^{Tran}\) refers to the averaged representation of all counties from state \(l\).
Since transfer representation \(H_{c}^{Tran}\) and state representations \(H_{s}\) are collected at different scales, they have different epidemic evolution patterns. To mine the latent scale-shared patterns, we use the Temporal Attention (Feng et al, 2017) to capture the cross-scale temporal correlations, as shown in Equation 9:
\[E=\left(H_{s}U_{5}\right)U_{6}\left(H_{c}^{Tran}U_{7}+\beta\right)^{T}\in \mathbb{R}^{N\times N} \tag{9}\]
where \(U_{5},U_{6},U_{7}\in\mathbb{R}^{C^{\prime}\times C^{\prime}}\) are trainable weight matrices and \(\beta\) is a bias term. Because of the dynamic node representations in the spatio-temporal graph, the correlation matrix \(E\) also changes along with the epidemic evolution.
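Equations 7-9 can be sketched as below, assuming the county-to-state affiliation is given as an integer array `county_state` of length \(M\):

```python
import numpy as np

def transfer_matrix(county_state, M, N):
    """Eq. (7): [Tran]_{i,l} = 1/N(l) if county i belongs to state l, else 0."""
    T = np.zeros((M, N))
    T[np.arange(M), county_state] = 1.0
    T /= np.maximum(T.sum(axis=0, keepdims=True), 1.0)  # divide by county count
    return T

def scale_shared_correlation(H_s, H_c, T, U5, U6, U7, beta):
    """Eqs. (8)-(9): aggregate county features per state, then compute the
    cross-scale correlation matrix E of shape (N, N)."""
    H_c_tran = T.T @ H_c                               # Eq. (8): (N, C')
    return (H_s @ U5) @ U6 @ (H_c_tran @ U7 + beta).T  # Eq. (9)
```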
#### 4.2.3 Multi-scale Fusion Block
To incorporate local and region representations for final forecasting, we develop the multi-scale fusion block to comprehensively consider scale-shared and scale-specific patterns. We first devise a scheme to extract scale-shared epidemic representations from macro scale:
\[\left[E^{\prime}\right]_{i,j}=\frac{\exp\left(\left[E\right]_{i,j}\right)}{ \sum_{j^{\prime}=1}^{N}\exp\left(\left[E\right]_{i,j^{\prime}}\right)},\qquad Att (H_{s}^{\prime})=E^{\prime}H_{s}^{\prime} \tag{10}\]
The scale-shared pattern is extracted by correlation matrix \(E\). Moreover, we further connect the scale-specific pattern \(H_{c}^{\prime}\) to the scale-shared part by Equation 11:
\[H_{out}=\text{Concat}\left[Att(H_{s}^{\prime}),H_{c}^{\prime}\right]\in \mathbb{R}^{M\times 2C^{\prime}} \tag{11}\]

where the attended region feature of each state is broadcast to its affiliated counties before concatenation, so that each county obtains a fused representation of dimension \(2C^{\prime}\).
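A minimal sketch of Equations 10-11, where the broadcast of attended state features to their counties is again handled through `county_state`:

```python
import numpy as np

def multi_scale_fusion(E, H_s_prime, H_c_prime, county_state):
    """Eqs. (10)-(11): row-softmax the correlation matrix, attend over region
    features, then concatenate each county's local feature with the attended
    feature of its own state."""
    E_exp = np.exp(E - E.max(axis=1, keepdims=True))  # numerically stable softmax
    E_soft = E_exp / E_exp.sum(axis=1, keepdims=True)
    att = E_soft @ H_s_prime                          # (N, C')
    return np.concatenate([att[county_state], H_c_prime], axis=1)  # (M, 2C')
```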
### Forecasting Module
The forecasting module is implemented to forecast based on the output of multi-scale graph convolution network.
To generate final forecasting results, we use a weight matrix \(\theta_{f}\in\mathbb{R}^{L_{a}\times 2C^{\prime}}\) to produce the final forecast \(\hat{Y}_{i}\) of county \(i\) with \(L_{a}\) time steps ahead. During
the training, we use the mean absolute error between \(\hat{Y}\) and \(Y\) to form the loss function. For ground truth values \(Y=\left\{y_{(t+1)},\ldots,y_{(t+L_{a})}\right\}\):
\[\hat{Y} =\theta_{f}\cdot H_{out}, \tag{12}\] \[loss =MAE(\hat{Y},Y)\] \[=\frac{\sum_{i=1}^{L_{a}}\sum_{j=1}^{N}\lvert\left(\hat{y}_{(t+i)} ^{j}-y_{(t+i)}^{j}\right)\rvert}{L_{a}*N};\]
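The readout and loss of Equation 12 amount to a single linear map and an absolute-error average:

```python
import numpy as np

def forecast_and_loss(H_out, theta_f, Y):
    """Eq. (12): linear readout over fused features and the MAE training loss.
    H_out: (M, 2C'); theta_f: (L_a, 2C'); Y: (M, L_a) ground truth."""
    Y_hat = H_out @ theta_f.T          # per-county forecasts, L_a steps ahead
    return Y_hat, np.abs(Y_hat - Y).mean()
```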
At the end of the method section, we summarize the algorithm of proposed MSGNN as shown in Algorithm 1.
```
0: The historical epidemic time series for micro scale (e.g. the county level in the US): \(X_{c}=\left\{x_{t-L_{b}+1}^{c},x_{t-L_{b}+2}^{c},\ldots,x_{t}^{c}\right\}\in \mathbb{R}^{M\times L_{b}\times C}\);
0: The historical epidemic time series for macro scale (e.g. the state level in the US): \(X_{s}=\left\{x_{t-L_{b}+1}^{s},\ldots,x_{t}^{s}\right\}\in\mathbb{R}^{N\times L _{b}\times C}\);
1: Get the node representations of micro scale \(H_{c}\in\mathbb{R}^{M\times C^{\prime}}\) from \(X_{c}\) by temporal convolution blocks;
2: Get the node representations of macro scale \(H_{s}\in\mathbb{R}^{N\times C^{\prime}}\) from \(X_{s}\) by temporal convolution blocks;
3: Get trans-area epidemic signal matrix \(A_{c}\in\mathbb{R}^{M\times M}\) from temporal feature \(H_{c}\);
4: Get trans-regional epidemic signal matrix \(A_{s}\in\mathbb{R}^{N\times N}\) from temporal feature \(H_{s}\);
5: Get region feature \(H_{s}^{\prime}\) by applying message passing on graph \((A_{s},H_{s})\);
6: Get local feature \(H_{c}^{\prime}\) by applying message passing on graph \((A_{c},H_{c})\);
7: Get the correlation matrix \(E\) from \(H_{s}\) and \(H_{c}^{Tran}\) by the scale-shared pattern learner;
8: Get the combined feature \(H_{out}\) from \(H_{c}^{\prime},H_{s}^{\prime},E\) by the multi-scale fusion block;
9: Get the forecasting output \(\hat{Y}\) of MSGNN based on \(H_{out}\);
10: Calculate the loss between \(\hat{Y}\) and \(Y\);
11: return \(\hat{Y}\).
```
**Algorithm 1** The MSGNN model for epidemic forecasting.
## 5 Experiment
To validate the infectious disease modeling ability, we evaluate the proposed model by forecasting the COVID-19 epidemic, the latest and one of the most prevalent infectious diseases in decades. We elaborate the experiments mainly on the COVID-19 Forecast Hub 1 (hereon referred to as _The Hub_), an official global challenge raised by the United States Centers for Disease Control and Prevention (CDC) to accommodate weekly epidemic forecasting results from international groups. Since _The Hub_ releases the forecasting results of all submitted models, we can easily access the performance of competitors and make comparisons.
### Experimental Settings
#### 5.1.1 Data
For COVID-19 epidemic data, we leverage daily reported cases and deaths from the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE) as the gold-standard data 2. The dataset collects incident cases and deaths at both the state and county levels in the United States. Following the location set defined by _The Hub_, our experiment is conducted on 50 states and 3142 subordinate counties. The dataset date range is set from March 1, 2020 to July 1, 2021. The detailed statistics are shown in Table 1.
Footnote 2: [https://github.com/CSSEGISandData/COVID-19](https://github.com/CSSEGISandData/COVID-19)
For geographical data, we directly use the geographic information in the JHU CSSE dataset, which contains the latitude and longitude of all regions in the US. The geographic data provides the state adjacency and county adjacency relationship information in the United States.
For population data, we use the state and county population information (2019) of the United States to normalize both the confirmed case and death data.
#### 5.1.2 Evaluation Metrics
As to evaluation metrics, we keep in accordance with the requirements on _The Hub_. Since the daily reported cases may be rather volatile and unstable, we leverage weekly reported confirmed cases as our primary forecasting target. Our evaluation range is from February 2021 to July 2021. On each Sunday, we generate forecasting results one week, two weeks and three weeks ahead. In the five-month evaluation period, we provide tens of forecasting results and calculate the performance indicators using three metrics, i.e. mean absolute error (MAE), mean absolute percentage error (MAPE) and root mean square error (RMSE), which can be calculated as Equation 13:
\begin{table}
\begin{tabular}{l|c|c c c c c} \hline \hline & \multicolumn{2}{c}{\# location} & \multicolumn{1}{c}{\# dates} & \multicolumn{1}{c}{Min} & \multicolumn{1}{c}{Max} & \multicolumn{1}{c}{Ave} \\ \hline \multirow{2}{*}{US-State} & confirmed & 50 & 487 & 0 & 73854 & 1660 \\ & deaths & 50 & 487 & 0 & 4417 & 450 \\ \hline \multirow{2}{*}{US-County} & confirmed & 3142 & 487 & 0 & 34497 & 62 \\ & deaths & 3142 & 487 & 0 & 761 & 15 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Detailed statistics of the JHU CSSE dataset.
\[MAE =\frac{1}{n}\sum_{i=1}^{n}\lvert\hat{y}_{i}-y_{i}\rvert,\] \[MAPE =\frac{100\%}{n}\sum_{i=1}^{n}\lvert\frac{\hat{y}_{i}-y_{i}}{y_{i }}\rvert, \tag{13}\] \[RMSE =\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_{i}-y_{i}\right)^{2}};\]
where \(\hat{y}\) is the forecasting output, \(y\) is the ground truth value, and \(n\) is the number of locations. As shown in Equation 13, the MAE measures the absolute difference between the two values, averaged over the locations. The MAPE is a similar error measure expressed in percentage terms. The RMSE is another frequently used metric; it represents the square root of the mean squared difference between predicted and observed values. All three metrics range in \([0,+\infty)\), and smaller values are better.
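For completeness, Equation 13 corresponds to the following few lines; the MAPE assumes nonzero ground-truth values:

```python
import numpy as np

def evaluate(y_hat, y):
    """Eq. (13): MAE, MAPE (in %), and RMSE over the n evaluated locations."""
    err = y_hat - y
    mae = np.abs(err).mean()
    mape = 100.0 * np.abs(err / y).mean()  # assumes nonzero ground truth
    rmse = np.sqrt((err ** 2).mean())
    return mae, mape, rmse
```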
For the whole evaluation period of over ten weeks, we conduct two experiments. We first average these per-week metrics to produce the main evaluation results containing MAE, MAPE, and RMSE. Moreover, we design another experiment called per-week evaluation to help understand the statistical distribution of forecasting results, so as to test the robustness of the models. Besides, to keep consistency with the evaluation standard on _The Hub3_, we calculate the three metrics on the 500 most populous counties in the US (@500), i.e. \(n=500\) in Equation 13. Furthermore, we utilize another relatively small dataset consisting of the 100 most populous counties (@100), i.e. \(n=100\) in Equation 13, to comprehensively evaluate the performance of all models.
Footnote 3: [https://covid19forecasthub.org/eval-reports/#Incident_Case_Forecasts_](https://covid19forecasthub.org/eval-reports/#Incident_Case_Forecasts_)(county)
### Baselines
To serve as baselines, we choose a broad range of competitive COVID-19 forecasting models reported on _The Hub_. It is worth noting that most of the models on _The Hub_ do not have their code released, so instead of reproducing their forecasting results, we directly take their submitted results from the website. Furthermore, considering that the frequency of submission and the target forecasting location sets vary, we set the filtering criteria as follows: candidate models should produce complete weekly outputs ranging from May 2020 to July 2021; also, the model must provide epidemic forecasting outputs for more than 3000 counties for full evaluation. We select 10 models satisfying the requirements as follows:
* **Microsoft-DeepSTIA**(Zheng et al, 2021): A deep spatio-temporal network, proposed by Microsoft Research.
* **USC-SI_kJalpha**(Srivastava et al, 2020): A SIR model with vaccines and multiple variants, proposed by University of South California.
* **UVA-Ensemble**(Adiga et al, 2021): An ensemble model containing auto-regressive approach, SEIR approach and machine learning approach, proposed by University of Virginia.
* **Google_Harvard-CPF**(Arik et al, 2020): A SEIR model fitted by machine learning approach, proposed by Google Cloud AI.
* **CEID_Walk4**: A random walk model without drift, proposed by University of Georgia. Footnote 4: [https://github.com/e3bo/random-walks](https://github.com/e3bo/random-walks)
* **IowaStateLW-STEM**(Wang et al, 2020): A quasi-likelihood approach via the penalized spline approximation, proposed by Iowa State University.
* **JHUAPL-Bucky**(Panaggio et al, 2022): A spatial compartment model using public mobility data, proposed by Johns Hopkins University.
* **CU-nochange**(Pei and Shaman, 2020): A metapopulation county-level SEIR model, proposed by Columbia University.
* **COVIDhub-Ensemble**(Ray et al, 2020): An ensemble model containing the best performed models on _The Hub_, proposed by the CDC.
Due to the limit of space, more detailed information about the compared models can be found at the COVID-19 Forecast Hub official website5.
Footnote 5: [https://zolardata.com/project/44](https://zolardata.com/project/44)
### Implementation Details
All of our implementations run on a server with an Intel Xeon Platinum 8369B @2.9GHz CPU and a single NVIDIA RTX 2070 Super GPU. To eliminate the data bias originating from demographic differences, we normalize all the data by the population factor, encouraging the model to learn the common epidemic patterns shared across different administrative scales. We set our model to look back 14 days, i.e. we begin training at the beginning of each week with the features of the previous week, producing the inferences for the next week. Moreover, to allow early stopping, we take the latest data of all locations as the validation set. Finally, for the robustness of our proposed model, we use different random seeds for training, then average the final forecasting results across seeds. For the N-Beats in the temporal convolution module, we set its dimension to 32. As for the graph convolution module, we set the network dimension to 64 with 2 layers, the aggregation type is max-pooling, and the node and edge dimensions are 4. The batch size is set to 4 and the learning rate is set to 1e-3. For graph optimization, we utilize mini-batch training techniques: batches are sampled from the whole graph within a certain time period, forming a new sub-graph. In practice, we utilize random-walk sampling to form the new sub-graphs. Moreover, all the sampled dates are randomly shuffled to avoid information leakage.
### Main Evaluation Results
The main evaluation result is reported in Table 2, with best results highlighted in bold and second best highlighted in underline. The evaluation is divided into
two datasets, i.e. @500 and @100. For each dataset, we set three forecasting horizons of one week ahead, two weeks ahead and three weeks ahead, and then calculate the evaluation metrics MAE, MAPE and RMSE. All metrics are computed at the US county level, then further averaged to obtain the main evaluation results. The same result processing strategies are also applied to the other baselines.
As shown in Table 2, our model achieves superior performance over all other baselines. In most cases, our proposed model produces the best results. For
\begin{table}
\begin{tabular}{l|c|c c c|c c c} \hline Dataset & & \multicolumn{3}{c|}{@500} & \multicolumn{3}{c}{@100} \\ \hline Method & Metric & 1wk & 2wk & 3wk & 1wk & 2wk & 3wk \\ \hline \multirow{3}{*}{Microsoft-DeepSTIA} & MAE & 139.0 & 506.7 & 988.4 & 374.5 & 1414.4 & 2628.7 \\ & MAPE & 0.425 & 0.455 & 0.607 & 0.392 & 0.461 & 0.604 \\ & RMSE & 341.3 & 980.0 & 1808.5 & 677.2 & 1992.7 & 3911.9 \\ \hline \multirow{3}{*}{USC-SI\_kJalpha} & MAE & 141.1 & 541.6 & 1045.5 & 343.1 & 1440.3 & 2781.6 \\ & MAPE & 0.430 & 0.546 & 0.733 & 0.369 & 0.535 & 0.732 \\ & RMSE & 366.0 & 1072.7 & 2015.6 & 650.1 & 2081.0 & 3935.7 \\ \hline \multirow{3}{*}{UVA-Ensemble} & MAE & 159.7 & 587.6 & 1071.6 & 411.3 & 1570.6 & 2866.9 \\ & MAPE & 0.451 & 0.591 & 0.730 & 0.443 & 0.598 & 0.739 \\ & RMSE & 418.1 & 1073.6 & 1873.3 & 822.1 & 2179.2 & 3842.6 \\ \hline \multirow{3}{*}{Google\_Harvard-CPF} & MAE & 178.2 & 685.7 & 1211.8 & 465.6 & 1917.1 & 3346.8 \\ & MAPE & 0.456 & 0.627 & 0.738 & 0.368 & 0.623 & 0.767 \\ & RMSE & 408.1 & 1352.7 & 2283.2 & 777.9 & 2794.2 & 4718.0 \\ \hline \multirow{3}{*}{CEID\_Walk} & MAE & 155.4 & **485.1** & 969.2 & 409.3 & 1390.2 & 2594.6 \\ & MAPE & 0.492 & 0.443 & 0.593 & 0.438 & 0.447 & 0.573 \\ & RMSE & 371.8 & 1021.5 & **1724.7** & 742.6 & 2016.2 & 3931.2 \\ \hline \multirow{3}{*}{IowaStateLW-STEM} & MAE & 321.5 & 694.4 & 1240.5 & 929.0 & 1870.2 & 3316.7 \\ & MAPE & 0.729 & 0.550 & 0.644 & 0.654 & 0.554 & 0.669 \\ & RMSE & 1091.4 & 1481.9 & 2284.9 & 2312.3 & 3054.7 & 4684.4 \\ \hline \multirow{3}{*}{JHUAPL-Bucky} & MAE & 210.5 & 582.1 & 1067.3 & 543.1 & 1501.9 & 2813.2 \\ & MAPE & 0.550 & 0.614 & 0.746 & 0.458 & 0.542 & 0.715 \\ & RMSE & 530.8 & 1220.1 & 2012.1 & 972.7 & 2154.2 & 3844.2 \\ \hline \multirow{3}{*}{CU-nochange} & MAE & 135.1 & 577.3 & 1080.3 & 352.5 & 1585.7 & 2590.7 \\ & MAPE & 0.347 & 0.544 & 0.710 & 0.285 & 0.543 & 0.585 \\ & RMSE & 327.7 & 1119.2 & 2006.1 & 643.4 & 2304.4 & 4144.3 \\ \hline \multirow{3}{*}{COVIDhub-ensemble} & MAE & 132.2 & 550.2 & 1040.5 & 343.8 & 1482.2 & 2808.6 \\ & MAPE & 0.400 & 0.491 & 0.650 & 0.339 & 0.481 & 0.645 \\ & RMSE & 312.3 & 1035.9 & 1878.5 & 599.5 & 2108.6 & **3822.6** \\ \hline \multirow{3}{*}{MSGNN (Ours)} & MAE & **121.3** & 502.2 & **959.6** & **321.5** & **1360.4** & **2588.5** \\ & MAPE & **0.340** & **0.439** & **0.584** & **0.283** & **0.432** & **0.571** \\ \cline{1-1} & RMSE & **302.2** & **977.8** & 1867.8 & **594.8** & **1990.5** & 3840.9 \\ \hline \end{tabular}
\end{table}
Table 2: A summary of main evaluation results. Best results are highlighted in bold and second best results are highlighted by underlines.
example, when we set the forecasting horizon to one week ahead, our proposed model achieves the lowest MAE, MAPE and RMSE on both the @500 and @100 datasets: the best-performing baseline has a forecasting MAE of 132.2 while our proposed model reaches 121.3, bringing a performance gain of up to 10%. Moreover, when the forecasting horizon is set to two weeks or more, the proposed model still manages to outperform the state-of-the-art methods in terms of MAPE and MAE. Despite the fact that in some cases other baselines achieve better results, our model still manages to achieve the second-best score.
It is also worth noting that for all models, the forecasting accuracy drops when the forecasting horizon becomes longer. For example, the CU-nochange model reaches 0.347 and 135.1 for MAPE and MAE, which is a rather competitive result in the 1-week forecasting scenario. However, when broadening the forecasting horizon, its performance drops sharply to 0.710 and 1080.3. Similarly, the USC-SI_kJalpha model outperforms most of the baselines in short-term forecasting, while producing much poorer results in long-term forecasting. Taking all forecasting horizons into consideration, we observe that the proposed MSGNN still has strong performance. Only for the forecasting in the next three weeks is the RMSE of MSGNN slightly higher than that of some baselines, such as CEID_Walk and COVIDhub-ensemble. In all other cases, the proposed model is the best-performing model in terms of averaged accuracy.
### Per-week Evaluation Results
Aside from inspecting the main evaluation result, we also notice significant variations in performance across different weeks. Some baseline models may suffer a great performance degradation when facing challenging epidemic patterns, such as sharp increments in confirmed cases, unstable mortality curves, etc. Since these challenging epidemic patterns are widespread in epidemic outbreaks, the sensitivity to these patterns may exert a dramatic negative impact on model robustness.
To this end, we design another experiment called the per-week evaluation. In this experiment, we collect forecasting error metrics for each model every week from February 2021 to July 2021, and further visualize their distributions to help validate the robustness of both the proposed model and its competitors.
As illustrated in Figure 3, the per-week evaluation results of the proposed model are still competitive in terms of the MAE, MAPE and RMSE metrics. Our proposed model outperforms the other baselines in three respects. First, the median error of the proposed model is low. Compared with CEID_Walk, one of the best-performing baselines in the averaged evaluation experiments, our proposed model reaches significantly lower error medians in terms of MAE, MAPE and RMSE. Second, the proposed model produces much smaller performance variations. Other competitive models such as Microsoft-DeepSTIA and COVIDhub-ensemble feature larger variations in performance, which means their error distributions are scattered. On the contrary, the proposed model
Figure 3: The per-week evaluation results for model robustness experiments, all the results are presented as box plots.
has smaller boxes and produces more stable forecasting results than its competitors. Finally, MSGNN produces fewer outliers. The hollow diamonds represent outliers, whose values are extremely high or extremely low; more outliers harm a model's forecasting robustness and lower its reliability. Combining the box plots of MAE, MAPE and RMSE, we observe that the total number of outliers produced by MSGNN is the smallest among all the compared baseline models.
All these observations show that, when facing challenging epidemic patterns, the proposed model performs more robustly than all the baseline models. In summary, the proposed model is the most competitive model, producing not only accurate but also robust epidemic forecasting results.
### Ablation Study
To verify the effectiveness of each component in the proposed model, we conduct a comprehensive ablation study, whose results are shown in Table 3. All the experiments are conducted on both the @500 and @100 datasets, and the results are obtained by averaging the outputs of 1-week-, 2-weeks- and 3-weeks-ahead forecasting. The original version of our proposed model is MSGNN. To further demonstrate the effectiveness of our model, we add the two best-performing baselines, i.e., CEID_Walk and COVIDhub-ensemble, for comparison.
#### 5.6.1 Effect of the multi-scale modeling
To evaluate the contribution of the multi-scale structure, we create the first variant:
**MSGNN-w/o-ms.** MSGNN-w/o-ms is a variant that models the epidemic only at the county level. To achieve this, we completely remove the additional state level from the proposed model, downgrading it to a single-scale spatio-temporal model. Under this circumstance, the model only has access to county-level epidemic patterns, which inevitably leads to worse performance. As Table 3 shows, the MSGNN-w/o-ms variant produces the worst results among all baselines and variants on both the @500 and @100 datasets, which indicates the importance of multi-scale modeling in epidemic forecasting. A likely reason is that, without information from the upper administrative level, the model can only attend to its closest neighbours for information aggregation, ignoring the latent long-range connectivity.
#### 5.6.2 Effect of graph learning module
To understand the role that graph learning module plays in epidemic modeling, we design two variants excluding the graph learning procedures.
**MSGNN-w-GCN.** In this variant, we replace the graph learning module with a graph convolutional network, i.e., we exclude the dynamic multi-scale spatio-temporal graph and directly use the static geographical distance graph to obtain the forecasting result, thus forming a naive graph convolutional network
(Kipf and Welling, 2017b). According to the experimental results, this variant performs clearly worse than the original model, which indicates that the dynamic spatio-temporal graph contributes positively to epidemic modeling by enabling the model to dynamically attend to the most relevant regions. Moreover, with the naive graph convolutional network the model fails to capture time-varying dependencies, because static geographic information, rather than dynamic epidemic interactions, dominates the spatio-temporal modeling.
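For reference, the propagation rule that this variant falls back to is the standard GCN layer of Kipf and Welling; a minimal sketch over a static geographical distance graph is shown below. The matrix names and the ReLU activation are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A_geo, H, W):
    """One propagation step H' = relu(D^{-1/2} (A + I) D^{-1/2} H W)
    over a static geographical distance adjacency A_geo."""
    A_hat = A_geo + np.eye(A_geo.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))       # D^{-1/2}
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)              # ReLU

# Illustrative call: 5 regions, 8 input features, 4 output features.
rng = np.random.default_rng(0)
A = rng.random((5, 5)); A = (A + A.T) / 2               # symmetric distance weights
H_next = gcn_layer(A, rng.standard_normal((5, 8)), rng.standard_normal((8, 4)))
```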
**MSGNN-w-GAT.** Aside from utilizing a static graph, we also develop a variant that uses a graph attention mechanism to construct the spatio-temporal graphs. In this variant, we replace the adaptive generation module entirely with a graph attention network (Velickovic et al, 2018). As the experimental results show, our model outperforms this variant, which demonstrates the effectiveness of the self-adaptive adjustment mechanism. Although the graph attention mechanism is capable of capturing part of the dynamic spatial dependencies, it fails to take the latent long-range spatial dependencies into consideration, leading to limited performance.
The two variants above demonstrate the effectiveness of the proposed adaptive graph generation module. With the specifically designed graph learning module, MSGNN can balance temporal and spatial signals. Moreover, the self-adaptive adjustment mechanism helps the model distinguish the most epidemic-relevant features while avoiding the introduction of irrelevant noise.
#### 5.6.3 Effect of multi-scale graph convolution module
**MSGNN-w/o-fusion.** In this variant, we remove the multi-scale fusion blocks and directly concatenate local and regional features to form the final representations for forecasting. This variant therefore has no access to scale-specific and scale-shared epidemic patterns. Due to the missing multi-scale epidemic patterns, the performance of this variant is limited compared to the original version. It is nevertheless worth noting that, although the variant performs worse than the proposed model, it still outperforms the
\begin{table}
\begin{tabular}{l|c c|c c} \hline Dataset & \multicolumn{2}{c|}{@100} & \multicolumn{2}{c}{@500} \\ \hline Metric & MAPE & Relative increment & MAPE & Relative increment \\ \hline CEID_Walk & 0.479 & +5.1\% & 0.509 & +4.9\% \\ COVIDhub-ensemble & 0.489 & +6.1\% & 0.514 & +6.0\% \\ \hline **MSGNN** & **0.428** & **-** & **0.454** & **-** \\ MSGNN-w/o-ms & 0.509 & +8.1\% & 0.515 & +6.1\% \\ MSGNN-w-GCN & 0.470 & +4.2\% & 0.491 & +3.7\% \\ MSGNN-w-GAT & 0.479 & +5.1\% & 0.489 & +3.5\% \\ MSGNN-w/o-fusion & 0.467 & +3.9\% & 0.475 & +2.1\% \\ \hline \end{tabular}
\end{table}
Table 3: A summary of ablation study results on the @100 and @500 datasets; MAPE averaged over 1-, 2- and 3-weeks-ahead forecasting. Relative increments are with respect to MSGNN.
state-of-the-art baselines, including CEID_Walk and COVIDhub-ensemble, indicating that the multi-scale spatio-temporal graph helps model the epidemic better.
### Case Study
To intuitively illustrate the evaluation experiments above, we select the four US counties with the largest populations for the case study, i.e., Los Angeles County of California, Cook County of Illinois, Harris County of Texas and Maricopa County of Arizona, which are distributed across different parts of the United States. We compare their true incident cases with the forecasting results produced by MSGNN and by one of the best-performing baselines, COVIDhub-ensemble. Notably, COVIDhub-ensemble is an ensemble model officially released by the United States Centers for Disease Control and Prevention (CDC), which comprehensively takes the forecasting results of all other teams into consideration.
The results are shown in Figure 4. We can observe that our model outperforms COVIDhub-ensemble in all four counties, producing more accurate and stable forecasting results. In Los Angeles County and Cook County, the ground-truth trend is simple and obvious, and the forecasting curve produced by MSGNN fits the ground-truth curves better, which means the proposed model achieves better accuracy than the COVIDhub-ensemble model. In Harris County and Maricopa County, the epidemic trend features more complicated patterns, e.g., larger curve slopes and more slope-
Figure 4: Case studies for the four counties with the largest populations, over a fifteen-week evaluation period.
turning corners. These complex patterns challenge the models' forecasting robustness. The COVIDhub-ensemble model clearly performs poorly under these patterns: its curves are unstable and keep oscillating. Although the proposed MSGNN is also affected by these patterns, it still manages to fit the ground truth on most of the evaluation dates, showing stronger robustness than its competitors. Overall, MSGNN wins more cases than the COVIDhub-ensemble model does.
## 6 Discussion: Interpretability Study
To fully understand the interpretability of our proposed model, in this section we take a deeper look at the trans-area and trans-regional epidemic signals produced by the graph learning module. Due to their coupled model structures, previous GNN-based methods fail to offer model interpretability, which harms the credibility of their forecasting results. In contrast, the proposed MSGNN is decoupled into two modules, so we can easily study its interpretability by inspecting the intermediate output. Since the epidemic signals fully control all message passing in the subsequent module, we select the trans-regional and trans-area epidemic signals (i.e., the output of the graph learning module) and study the relationship between epidemic evolution and these signals.
We visualize the evolution of the epidemic signals alongside the national incident numbers. Since our customized graph learning module dynamically adjusts the graph edges based on the current epidemic situation, we directly measure the strength of the dependencies through the edge weights, where larger edge weights correspond to more active signals. We record all the edge weights produced by a fully trained MSGNN for each forecasting date and present the results in two line charts, as shown in Figure 5.
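A sketch of how such per-week signal strengths could be extracted is given below; it assumes, purely for illustration, that the trained graph learning module exposes one learned adjacency matrix per forecasting week and that state membership determines which edges count as long-range.

```python
import numpy as np

def signal_strength(weekly_adjacency, long_range_mask):
    """Average learned edge weights per forecasting week, split into
    long-range (trans-regional) and short-range (trans-area) signals.

    weekly_adjacency: array (num_weeks, n, n) of learned edge weights.
    long_range_mask: boolean (n, n); True where the two areas lie in different states.
    """
    long_range = np.array([A[long_range_mask].mean() for A in weekly_adjacency])
    short_range = np.array([A[~long_range_mask].mean() for A in weekly_adjacency])
    return long_range, short_range  # two curves, plotted against the incident numbers
```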
Firstly, for the long-range connectivity, we observe that the signals tend to become activated before the next wave of sharply increasing new cases. For example, the first peak of activated signals appears in March 2020 and is followed by a rapidly increasing wave of incident cases in April 2020. This pattern is not unique; it recurs in May 2020, June 2020, October 2020 and February 2021. By aligning the evolution of the long-range connectivity with the national incident curve, we suggest that this interesting pattern reflects the potential transmission process between distant regions. More active trans-regional epidemic signals indicate greater benefits of spatial signals in epidemic forecasting, which also implies more long-distance transmissions in the physical world.
Secondly, the active periods of the short-range dependency almost cover the peaks of the long-range connectivity. In particular, since January 2021 the trans-area signals have had much more distinct peak values than the long-range connectivity. We conjecture that as the pandemic was gradually brought under control in 2021, large-scale nation-wide transmissions were successfully suppressed, leading to inactive long-range connectivity signals. However,
local epidemic transmissions still existed and formed active short-range epidemic signals. Hence, we suggest that the short-range dependency reflects the local transmission process among neighbouring areas.
In summary, the proposed MSGNN offers some interpretability, as it explicitly shows how the epidemic transmits at different spatial scales. On the one hand, the learned epidemic signals help model the epidemic better and ensure the credibility of the forecasting results. On the other hand, they also instill confidence in end-users such as policy makers and citizens.
## 7 Conclusions
In this paper, we propose the Multi-scale Spatio-temporal Graph Neural Network (MSGNN) for epidemic forecasting. Specifically, we design a novel multi-scale epidemic modeling method. On the one hand, we devise a graph learning module to directly capture long-range connectivity and integrate it into a multi-scale spatio-temporal graph. On the other hand, we customize a multi-scale graph convolution module, which adopts a novel information aggregation scheme. The scheme allows us to distinguish both scale-specific and scale-shared epidemic patterns, which gives rise to multi-scale modeling. Extensive experiments are conducted, and the results demonstrate the capacity of MSGNN
Figure 5: The evolution of the spatial dependencies produced by MSGNN (plotted as colored, filled lines) alongside the weekly national incident numbers (plotted as a dark line).
across multiple dimensions, including forecasting accuracy, robustness and interpretability.
## Declarations
Funding. This work was supported by the National Key Research and Development Project (No. 2020AAA0106200), the National Natural Science Foundation of China under Grants No. 61936005 and 61872424, and the Natural Science Foundation of Jiangsu Province (Grants No. BK20200037 and BK20210595).
Conflict of interests. The authors have no conflicts of interest to declare that are relevant to the content of this article.
|
2308.06182 | Noise-Resilient Designs for Optical Neural Networks | All analog signal processing is fundamentally subject to noise, and this is
also the case in modern implementations of Optical Neural Networks (ONNs).
Therefore, to mitigate noise in ONNs, we propose two designs that are
constructed from a given, possibly trained, Neural Network (NN) that one wishes
to implement. Both designs have the capability that the resulting ONNs gives
outputs close to the desired NN.
To establish the latter, we analyze the designs mathematically. Specifically,
we investigate a probabilistic framework for the first design that establishes
that the design is correct, i.e., for any feed-forward NN with Lipschitz
continuous activation functions, an ONN can be constructed that produces output
arbitrarily close to the original. ONNs constructed with the first design thus
also inherit the universal approximation property of NNs. For the second
design, we restrict the analysis to NNs with linear activation functions and
characterize the ONNs' output distribution using exact formulas.
Finally, we report on numerical experiments with LeNet ONNs that give insight
into the number of components required in these designs for certain accuracy
gains. We specifically study the effect of noise as a function of the depth of
an ONN. The results indicate that in practice, adding just a few components in
the manner of the first or the second design can already be expected to
increase the accuracy of ONNs considerably. | Gianluca Kosmella, Ripalta Stabile, Jaron Sanders | 2023-08-11T15:11:57Z | http://arxiv.org/abs/2308.06182v1 | # Noise-Resilient Designs for Optical Neural Networks
###### Abstract
All analog signal processing is fundamentally subject to noise, and this is also the case in modern implementations of Optical Neural Networks (ONNs). Therefore, to mitigate noise in ONNs, we propose two designs that are constructed from a given, possibly trained, Neural Network (NN) that one wishes to implement. Both designs have the capability that the resulting ONNs give outputs close to the desired NN.
To establish the latter, we analyze the designs mathematically. Specifically, we investigate a probabilistic framework for the first design that establishes that the design is correct, i.e., for any feed-forward NN with Lipschitz continuous activation functions, an ONN can be constructed that produces output arbitrarily close to the original. ONNs constructed with the first design thus also inherit the universal approximation property of NNs. For the second design, we restrict the analysis to NNs with linear activation functions and characterize the ONNs' output distribution using exact formulas.
Finally, we report on numerical experiments with LeNet ONNs that give insight into the number of components required in these designs for certain accuracy gains. We specifically study the effect of noise as a function of the depth of an ONN. The results indicate that in practice, adding just a few components in the manner of the first or the second design can already be expected to increase the accuracy of ONNs considerably.
keywords: Optical Neural Networks, Law of Large Numbers, Universal Approximation
## 1 Introduction
Machine Learning (ML) is a computing paradigm in which problems that are traditionally challenging for programmers to explicitly write algorithms for, are solved by learning algorithms that improve automatically through experience. That is, they "learn" structure in data. Prominent examples include image recognition [1], semantic segmentation [2], human-level control in video games [3], visual tracking [4], and language translation [5].
Classical computers are designed and best suited for serialized operations (they have a central processing unit and separated memory), while the data-driven ML approach requires decentralized and parallel calculations at high bandwidth as well as continuous processing of parallel data. To illustrate how ML can benefit from a different architecture, we can consider performance relative to the number of executed operations, also indicated as Multiply--Accumulate Operation (MAC) rates, and the energy efficiency, i.e., the amount of energy spent to execute one single operation. Computational efficiency in classical computers levels off below \(10\) GMAC/s/W [6].
An alternative computing architecture with a more distributed interconnectivity and memory would allow for greater energy efficiency and computational speed. An inspiring example would be an architecture such as the brain. The brain is able to perform about \(10^{18}\) MAC/s using only \(20\,\mathrm{W}\) of power [6], and operates approximately \(10^{11}\) neurons with an average number of inputs for each of about \(10^{4}\) synapses. This leads to an estimated total of \(10^{15}\) synaptic connections, all conveying signals up to \(1\,\mathrm{kHz}\) bandwidth. The brain's computational efficiency (being less than \(1\,\mathrm{aJ}\) per MAC) is then about \(8\) orders of magnitude higher than the one of current supercomputers, which operate instead at \(100\,\mathrm{pJ}\) per MAC [6].
Connecting software to hardware through computing architecture tailored to ML tasks is the endeavor of research within the field of neuromorphic computing. The electronics community is now busy developing non-von Neumann computing architectures to enable information processing with an energy efficiency down to a few pJ per operation. Aiming to replicate fundamentals of biological neural circuits in dedicated hardware, important advances have been made in neuromorphic accelerators [7]. These advances are based on the spiking architectural models, which are still not fully understood. Deep Learning (DL)-focused approaches, on the other hand, aim to construct hardware that efficiently realizes DL architectures, while eliminating as much of the complexity of biological neural networks as possible. Among the most powerful DL hardware we can name the GPU-based DL accelerators hardware [8; 9; 10; 11; 12], as well as emerging analogue electronic Artificial Intelligence chipsets that tend to collocate processing and memory to minimize the memory-processor communication energy costs (e.g. the analogue
crossbar approaches [13]). Mythic's architecture, for example, can yield high accuracy in inference applications with a remarkable energy efficiency of just half a pJ per MAC. Even though the implementation of neuromorphic approaches is visibly bringing outstanding record energy efficiencies and computation speeds, neuromorphic electronics already struggles to offer the desired data throughput at the neuron level. Neuromorphic processing for high-bandwidth applications requires GHz operation per neuron, which calls for a fundamentally different technology approach.
### Optical Neural Networks
A major concern with neuromorphic electronics is that the distributed hardware needed for parallel interconnections is impractical to realize with classical metal wiring: a trade-off applies between interconnectivity and bandwidth, limiting these engine's utilization to applications in the kHz and sub-GHz regime. When sending information not through electrical signals but via optical signals, the optical interconnections do not undergo interference and the optical bandwidth is virtually unlimited. This can for example be achieved when exploiting the color and/or the space and/or the polarization and/or the time domain, thus allowing for applications in the GHz regime. It has been theorized that photonic neuromorphic processors could operate ten thousand times faster while using less energy per computation [14; 15; 16; 17]. Photonics therefore seems to be a promising platform for advances in neuromorphic computing.
Implementations of weighted addition for Optical Neural Networks (ONNs) include Mach-Zehnder Interferometer-based Optical Interference Units [18], time multiplexing and coherent detection [19], free-space systems using spatial light modulators [20], and Micro-Ring-Resonator-based weight banks on silicon [21]. Furthermore, Indium-phosphide-integrated optical cross-connects using Semiconductor Optical Amplifiers as single-stage weight elements, as well as Semiconductor Optical Amplifier-based wavelength converters [22; 23; 24], have been demonstrated to enable All-Optical (AO) Neural Networks (NNs). A comprehensive review of all the approaches used in integrated photonics can be found in [25].
Next to these promises, aspects like the implementation of nonlinearities, access to and storage of weights in on-chip memory, and noise sources in analog photonic implementations all pose challenges in devising scalable photonic neuromorphic processors and accelerators. These challenges also occur when they are embedded within end-to-end systems. Fortunately, arbitrary scalability of these networks has been demonstrated, at a certain noise level and accuracy. It would nevertheless be useful to envision new architectures that reduce noise even further.
### Noise in ONNs
The types of noise in ONNs include thermal crosstalk [26], cumulative noise in optical communication links [27; 28] and noise deriving from applying an activation function [29].
In all these studies, the noise is considered to be approximated well by Additive White Gaussian Noise (AWGN).
For example, taking the studies [26; 28; 27; 29; 30] as starting point, the authors of [31] model an ONN as a communication channel with AWGN. We follow this assumption and will model an ONN as having been built up from interconnected nodes with noise in between them. This generic approach does not restrict us to any specific device that may be used in practice.
The model also applies to the two alternative designs of an AO implementation of a NN (see for example [32]) and the case of an optical/electrical/optical (O/E/O) NN [22]. In an AO NN, the activation function is applied by manipulating an incoming electromagnetic wave. Modulation (and the AWGN it causes) only occurs prior to entering an AO NN (or equivalently, in the first layer). For the remainder of the network the signal remains in the optical domain. Here, when applying the optical activation function a new source of noise is introduced as AWGN at the end of each layer. Using the O/E/O network architecture, the weighted addition is performed in the optical realm, but the light is captured soon after each layer, where it is converted into an electrical and digital signal and the activation function is applied via software on a computer. The operation on the computer can be assumed to be noiseless. However, since the result again needs to be modulated (to be able to act as input to the next layer), modulation noise is added. We can further abstract from the specifics of the AO and O/E/O design and see that in either implementation noise occurs at the same locations within the mathematical modeling, namely AWGN for weighted addition and afterwards AWGN from an optical activation function or from modulation, respectively. This means that we do not need to distinguish between the two design choices in our modeling; we only need to choose the corresponding AWGN term after activation.
The operation of a layer of a feed-forward NN can be modeled by multiplying a matrix \(W\) with an input vector \(x\) (a bias term \(b\) can be absorbed into the matrix-vector product and will therefore be suppressed in notation here) and then applying an activation function \(f:\mathbb{R}\rightarrow\mathbb{R}\) element-wise to the result. Symbolically,
\[x\mapsto f(Wx).\]
Now, concretely, the noise model that we study is described by
\[x\mapsto f(Wx+\mathrm{Normal}(0,\Sigma_{\mathrm{w}}))+\mathrm{Normal}(0,\Sigma _{\mathrm{a}}), \tag{1}\]
for each hidden layer of the ONN. Here \(\mathrm{Normal}(0,\Sigma)\) denotes the multivariate normal distribution with mean vector \(0\) and covariance matrix \(\Sigma\). More specifically, \(\Sigma_{\mathrm{w}}\), \(\Sigma_{\mathrm{a}}\) and \(\Sigma_{\mathrm{m}}\) are the covariance matrices associated with weighted addition, application of the activation function, and modulation, respectively. Figure 1 gives a schematic representation of the noise model under study. As we have seen above, in the O/E/O case we have \(\Sigma_{\mathrm{a}}=\Sigma_{\mathrm{m}}\), otherwise \(\Sigma_{\mathrm{a}}\) is due to the specific structure of the photonic activation function. The first layer, regardless of an AO or O/E/O network, sees a modulated input \(x\), i.e., \(x+\mathrm{Normal}(0,\Sigma_{\mathrm{m}})\), and afterwards the same
steps of weighing and applying an activation function, that is (1). Arguably the hidden layers and their noise structure are the most important parts, especially in deep NNs. Therefore, the main equation governing the behavior of the noise propagation in an ONN will remain (1).
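For concreteness, a single noisy hidden layer following (1) can be simulated as in the sketch below; the \(\tanh\) activation, the scalar noise levels (standing in for diagonal covariance matrices), and all names are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_layer(x, W, b, sigma_w, sigma_a, f=np.tanh):
    """One hidden ONN layer per (1): AWGN on the weighted addition,
    then AWGN from the activation mechanism (or from re-modulation)."""
    u = W @ x + b + rng.normal(0.0, sigma_w, size=W.shape[0])  # Normal(0, Sigma_w)
    return f(u) + rng.normal(0.0, sigma_a, size=W.shape[0])    # Normal(0, Sigma_a)

# First layer input: the modulated data x + Normal(0, Sigma_m).
x_mod = np.ones(4) + rng.normal(0.0, 0.05, size=4)
W, b = rng.normal(0.0, 0.5, size=(3, 4)), np.zeros(3)
h = noisy_layer(x_mod, W, b, sigma_w=0.05, sigma_a=0.02)
```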
### Noise-resistant designs for ONNs
The main contribution of this paper lies in analyzing two noise reduction mechanisms for feed-forward ONNs. The mechanisms are derived from the insight that noise can be mitigated through averaging because of the law of large numbers, and they are aimed at using the enormous bandwidth that photonics offer. The first design (Design A) and its analysis are inspired by recent advancements for NNs with random edges in [33]; the second design (Design B) is new and simpler to implement, but comes without a theoretical guarantee of correctness for nonlinear ONNs, specifically.
Both designs--illustrated in Figure 2--are built from a given NN for which an optical implementation is desired. Each design proposes a larger ONN by taking parts of the original NN, and duplicating and arranging them in a certain way. If noise is absent, then this larger ONN produces the same output as the original NN; and, if noise is present, then this ONN produces an output closer to the desired NN than the direct implementation of the NN as an ONN without modifications would give.
The first mechanism to construct a larger ONN suppressing inherent noise of analog systems starts with a certain number of copies \(N\) of the input data. The copies are all processed independently by (in parallel arranged copies of) the layers. Each copy of a layer takes in multiple input copies to produce the result of weighted addition, to which the activation mechanism is applied. The copies that are transmitted to each layer (or set of parallel arrayed layers) are independent of each other. The independent outputs function as inputs to the upcoming (copies of the) layers, and so on and so forth.
The idea of the second design is to use multiple copies of the input, on which the weighted addition is performed. The noisy products of the weighted addition are averaged to a single number/light beam. This average is then copied and the multiple copies are fed through the activation function, creating multiple noisy activations to be used as the next layer's input, and so on.
### Summary of results
Using Design A, we are able to establish that ONNs possess the same theoretical properties as NNs. Specifically, we can prove that _any_ NN can be approximated arbitrarily well by an ONN built using Design A (Theorem 1). Similar considerations for NNs with random edges can be found in [33], but the noise model and proof method are different. Here, we first bound the deviation between an ONN and a noiseless NN; Hoeffding's inequality is then applied to this bound.
Establishing this theoretical guarantee, however, is done by increasing the number of components exponentially as the depth of the network increases. The current proof shows that for an ONN with Design A meant to approximate a NN with \(L\) layers arbitrarily well (and thus reduce the noise to negligible levels), a sufficient number of components is \(\omega(K^{L(L+1)}L^{L})\) for some constant \(K>0\). This is however not to say that such a large number is necessary: it is merely sufficient.
From a practical viewpoint, however, having to use as few components as possible would be more attractive. We therefore also investigate Design B, in which the number of components increases only linearly with the depth of the network. Because Design A already allows us to establish the approximation property of ONNs, we limit our analysis of Design B to _linear_ NNs for simplicity. We specifically establish in Theorem 2 the exact output distribution of an ONN built using Design B for any linear NN. Similar to the guarantee for Design A in Theorem 1, but more restrictively, this implies that any linear NN can be approximated arbitrarily well by some ONN built using Design B. Strictly speaking, Design B thus has no guarantee of correctness for nonlinear NNs, but this should not hold us back in practice (especially when activations, for instance, are close to linear).
We conduct numerical experiments with Designs A and B by constructing LeNet ONNs. The numerical results indicate that in practice, adding some components for noise negation is already sufficient to increase the accuracy of an
Figure 1: Schematic depiction of the noise model of ONNs that we study. First, data \(x\) is modulated onto light. This step adds an AWGN term \(N_{\text{m}}\). This light enters the Photonic Layer, in which a weighted addition takes place, adding AWGN \(N_{\text{w}}\). The activation function is then applied, adding AWGN \(N_{\text{a}}\). The activation function may be applied by photo-detecting the signal of the weighted addition, turning it into a digital signal and applying the activation function on a computer. The result of that action would then be modulated again, to produce the optical output of the photonic neuron. The modulator is thus only required in the first layer, as each photonic neuron takes in light and outputs light.
ONN; an exponential number does not appear to be necessary (see Figures 3 to 4).
Finally, we want to remark that the high bandwidth of photonic circuits can be exploited to implement the designs as efficiently as possible.
### Outline of the paper
We introduce the AWGN model formally in Section 2. This model is the basis for the analysis of the proposed noise reduction schemes that are next discussed in Sections 3 and 4. There, we specifically define Designs A and B, and each design is followed by a mathematical analysis. The main results are Theorems 1 and 2. Section 5 contains numerical simulations on LeNet ONNs to which we apply Designs A and B. Section 6 concludes; technical details are deferred to the Appendix.
## 2 Model
We consider general feed-forward NNs implemented on analog optical devices. Noise occurs due to various reasons in those optical devices. Reasons include quantum noise in modulation, chip imperfections, and crosstalk [26; 28; 27; 29; 30].
The noise profiles and levels of different devices differ, but we can, to good approximation, expect AWGN to occur at three separate instances [31]: when modulating, when weighting, and when applying an activation function. The thus proposed AWGN model is formalized next in Section 2.1.
### Feed-forward nonlinear ONNs
We assume that our aim is to implement a feed-forward nonlinear NN with domain \(\mathbb{R}^{d_{0}}\) and range \(\mathbb{R}^{d_{L}}\), that can be represented by a parameterized function \(\Psi^{\text{NN}}:\mathbb{R}^{d_{0}}\times\mathbb{R}^{n}\to\mathbb{R}^{d_{L}}\) as follows. For \(\ell=1,\ldots,L\in\mathbb{N}_{+}\), \(\Psi^{\text{NN}}\) must be the composition of the functions
\[\Psi^{\text{NN}}_{\ell}:\mathbb{R}^{d_{\ell-1}}\to\mathbb{R}^{d_{\ell}},\quad x \mapsto\sigma^{(\ell)}\big{(}W^{(\ell)}x+b^{(\ell)}\big{)}.\]
Here, \(W^{(\ell)}\in\mathbb{R}^{d_{\ell}\times d_{\ell-1}}\) denotes the weight matrix in the \(\ell\)-th layer, \(b^{(\ell)}\in\mathbb{R}^{d_{\ell}\times 1}\) the bias vector in the \(\ell\)-th layer, and \(\sigma^{(\ell)}:\mathbb{R}^{d_{\ell}\times 1}\to\mathbb{R}^{d_{\ell}\times 1}\) the activation function in the \(\ell\)-th layer. Specifically, the NN satisfies
\[\Psi^{\text{NN}}(\cdot\,,w)=\Psi^{\text{NN}}_{L}(\cdot\,,w^{(L)})\circ\cdots \circ\Psi^{\text{NN}}_{1}(\cdot\,,w^{(1)}), \tag{2}\]
where \(w^{(\ell)}=(W^{(\ell)},b^{(\ell)})\) represents the parameters in the \(\ell\)-th layer. Note that we do not necessarily assume that the activation function is applied component-wise (it could be any high-dimensional function). Such cases are simply contained within the model.
Suppose now that the NN in (2) is implemented as an ONN, but _without_ amending its design. AWGN will then disrupt the output of each layer. Specifically, for depths \(L\in\mathbb{N}_{+}\), the ONN will be representable by a function \(\Psi^{\text{ONN}}\) that is the composition of the noisy functions
\[\Psi^{\text{ONN}}_{\ell}:\mathbb{R}^{d_{\ell-1}}\to\mathbb{R}^{d_{\ell}}, \quad x\mapsto\sigma^{(\ell)}\big{(}W^{(\ell)}x+b^{(\ell)}+N^{(\ell)}_{\text{ w}}\big{)}+N^{(\ell)}_{\text{a}} \tag{3}\]
Figure 2: (a) Base \(4-3-2\) network, light circles indicate activations, boxes indicate post-activations. (b) Example for Design A with \(2\) layers as input copies to each subsequent layer. The light circles indicate the linear operations/matrix-vector products. The results of the linear operation is averaged (single solid-blue circle) and fed through the activation function, producing the multiple version of the layers output (boxes). (c) Example of Design B.
for \(\ell=1,\ldots,L\in\mathbb{N}_{+}\). Here,
\[N_{\mathsf{w}}^{(\ell)}\stackrel{{\mathsf{(d)}}}{{=}}\mathrm{Normal}( 0,\Sigma_{\mathsf{w}}^{(\ell)})\text{ and }N_{\mathsf{a}}^{(\ell)}\stackrel{{ \mathsf{(d)}}}{{=}}\mathrm{Normal}(0,\Sigma_{\mathsf{act}}^{(\ell)})\]
denote multivariate normal distributions that describe the AWGN within the ONN. In other words, the ONN will satisfy
\[\Psi^{\mathrm{ONN}}(\cdot\,,w)=\Psi^{\mathrm{ONN}}_{L}(\cdot\,,w^{(L)})\circ \cdots\circ\Psi^{\mathrm{ONN}}_{1}(\cdot\,,w^{(1)}) \tag{4}\]
instead of (2). Observe that (4) is a random NN; its outcome is uncertain, but hopefully close to that of (2).
### Feed-forward linear ONNs
Let us briefly examine the special case of a feed-forward _linear_ ONN in more detail. That is, we now assume additionally that for \(\ell=1,\ldots,L\), there exist \(e^{(\ell)}\in\mathbb{R}^{d_{\ell}}\) such that \(\sigma^{(\ell)}(y)=D^{(\ell)}y\) where \(D^{(\ell)}=\mathrm{diag}(e^{(\ell)})\). In other words, each activation function \(\sigma^{(\ell)}\) does element-wise multiplications by constants.
If each activation function is linear, then the output distribution of each layer will remain multivariate normal distributed due to the so-called linear transformation theorem (34, Theorem 1.2.6). The mean and covariance matrix of the underlying multivariate normal distribution will however be transformed in each layer.
Let us illustrate how the covariance matrix transforms by discussing the first layer in detail. Each layer in (3) can be interpreted as a random function that takes the noisy vector \(\mathbf{A}^{(\ell-1)}=(\mathbf{A}_{1}^{(\ell-1)},\ldots,\mathbf{A}_{d_{\ell-1}} ^{(\ell-1)})\) say as input, and produces the even noisier vector \(\mathbf{A}^{(\ell)}=(\mathbf{A}_{1}^{(\ell)},\ldots,\mathbf{A}_{d_{\ell}}^{( \ell)})\) say as output. Specifically, the noisy input to the first layer is modeled by
\[\mathbf{A}^{(0)}\mid x\stackrel{{\mathsf{(d)}}}{{=}}x+\mathscr{N} \left(0,\Sigma_{\mathrm{m}}\right) \tag{5}\]
because of the modulation error within the first layer. Here \(\cdot\mid\cdot\) indicates a conditional random variable. This input next experiences weighted addition and more noise is introduced: the noisy preactivation of the first layer satisfies
\[\mathbf{U}^{(1)}\mid\mathbf{A}^{(0)}\stackrel{{ \mathsf{(d)}}}{{:=}}W^{(1)}\mathbf{A}^{(0)}+b^{(1)}+\mathscr{N}(0,\Sigma_{ \mathsf{w}}^{(1)}). \tag{6}\]
Combining (5) and (6) with the linear transformation theorem for the multivariate normal distribution as well as the fact that sums of independent multivariate normal random variables are again multivariate normally distributed (34, Theorem 1.2.14), we find that
\[\mathbf{U}^{(1)}\mid x \stackrel{{\mathsf{(d)}}}{{=}}W^{(1)}x+b^{(1)}+W^{( 1)}\mathscr{N}\big{(}0,\Sigma_{\mathsf{m}}\big{)}+\mathscr{N}\big{(}0,\Sigma_ {\mathsf{w}}^{(1)}\big{)}\] \[\stackrel{{\mathsf{(d)}}}{{=}}W^{(1)}x+b^{(1)}+ \mathscr{N}\big{(}0,W^{(1)}\Sigma_{\mathrm{m}}(W^{(1)})^{\intercal}+\Sigma_{ \mathsf{w}}^{(1)}\big{)}.\]
After applying the linear activation function, we obtain
\[\mathbf{A}^{(1)}\mid x\stackrel{{\mathsf{(d)}}}{{=}} \sigma_{1}(\mathbf{U}^{(1)})+\mathscr{N}(0,\Sigma_{\mathsf{a}}^{(1)})\mid x\] \[\stackrel{{\mathsf{(d)}}}{{=}}D^{(1)}(W^{(1)}x+b^{(1)})\] \[\quad+\mathscr{N}\Big{(}0,\Sigma_{\mathsf{a}}^{(1)}+D^{(1)}\big{(} W^{(1)}\Sigma_{\mathrm{m}}(W^{(1)})^{\intercal}+\Sigma_{\mathsf{w}}^{(1)} \big{)}(D^{(1)})^{\intercal}\Big{)}\] \[\quad=\mathscr{N}(\Psi^{\mathrm{NN}}_{1}(x,w),\Sigma_{\mathrm{ONN }}^{(1)})\]
say. Observe that the unperturbed network's output remains intact, and is accompanied by a centered normal distribution with an increasingly involved covariance matrix:
\[\Sigma_{\mathrm{ONN}}^{(1)}\] \[=D^{(1)}\big{(}W^{(1)}\Sigma_{\mathrm{m}}(W^{(1)})^{\intercal}+ \Sigma_{\mathsf{w}}^{(1)}\big{)}(D^{(1)})^{\intercal}+\Sigma_{\mathsf{a}}^{(1)}\] \[=D^{(1)}W^{(1)}\Sigma_{\mathrm{m}}(W^{(1)})^{\intercal}(D^{(1)})^ {\intercal}+D^{(1)}\Sigma_{\mathsf{w}}^{(1)}(D^{(1)})^{\intercal}\] \[\quad+\Sigma_{\mathsf{a}}^{(1)}. \tag{7}\]
Observe furthermore that the covariance matrix in (7) is independent of the bias \(b^{(1)}\).
The calculations in eqs. (5) to (7) can readily be extended into a recursive proof that establishes the covariance matrix of the entire linear ONN. Specifically, for \(\ell=1,\ldots,L\), define the maps
\[T^{(\ell)}(\Sigma) :=D^{(\ell)}W^{(\ell)}\Sigma(W^{(\ell)})^{\intercal}(D^{(\ell)})^{\intercal}\] \[\quad+D^{(\ell)}\Sigma_{\mathsf{w}}^{(\ell)}(D^{(\ell)})^{ \intercal}+\Sigma_{a}^{(\ell)}. \tag{8}\]
We then have the following:
**Proposition 1** (Distribution of linear ONNs): _Assume that there exist vectors \(e^{(\ell)}\in\mathbb{R}^{d_{\ell}}\) such that \(\sigma^{(\ell)}(y)=\text{diag}(e^{(\ell)})y\). The feed-forward linear ONN in (4) then satisfies_
\[\Psi^{\mathrm{ONN}}(\cdot,w)\stackrel{{\mathsf{(d)}}}{{=}} \mathscr{N}\big{(}\Psi^{\mathrm{NN}}(\cdot,w),\Sigma_{\mathrm{ONN}}^{(L)} \big{)},\]
_where for \(\ell=L,L-1,\ldots,1\),_
\[\Sigma_{\mathrm{ONN}}^{(\ell)}=T^{(\ell)}(\Sigma_{\mathrm{ONN}}^{(\ell-1)}); \quad\text{and}\quad\Sigma_{\mathrm{ONN}}^{(0)}=\Sigma_{\mathrm{m}}.\]
In linear ONNs with symmetric noise (that is, the AWGN of each layer's noise sources has the same covariance matrix), Proposition 1's recursion simplifies. Introduce \(P^{(\ell)}:=\prod_{i=\ell+1}^{L}D^{(i)}W^{(i)}\) for notational convenience. The following is proved in Appendix A.1.1:
**Corollary 1** (Symmetric noise case): _Within the setting of Proposition 1, assume additionally that for all \(\ell\in\mathbb{N}_{+}\), \(\Sigma_{\mathsf{a}}^{(\ell)}=\Sigma_{\mathsf{a}}\) and \(\Sigma_{\mathsf{w}}^{(\ell)}=\Sigma_{\mathsf{w}}\). Then,_
\[\Sigma_{\mathrm{ONN}}^{(L)} =P^{(0)}\Sigma_{\mathrm{m}}(P^{(0)})^{\intercal}+\sum_{\ell=1}^{L}P ^{(\ell)}\Sigma_{\mathsf{a}}(P^{(\ell)})^{\intercal}\] \[\quad+\sum_{\ell=1}^{L}P^{(\ell)}D^{(\ell)}\Sigma_{\mathsf{w}}(D^ {(\ell)})^{\intercal}\big{(}P^{(\ell)}\big{)}^{\intercal}.\]
_If moreover for all \(\ell\in\mathbb{N}_{+}\), \(W^{(\ell)}=W\), \(D^{(\ell)}=D\), and \(\|D\|_{\mathrm{F}}\|W\|_{\mathrm{F}}<1\), then_
\[\lim_{L\to\infty}\Sigma_{\mathrm{ONN}}^{(L)}=\sum_{n=0}^{\infty}(DW)^{n}\left(D \Sigma_{\mathsf{w}}D^{\intercal}+\Sigma_{\mathsf{a}}\right)\left((DW)^{n} \right)^{\intercal}.\]
Proposition 1 and Corollary 1 describe the output distribution of linear ONNs completely.
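The recursion of Proposition 1 is straightforward to evaluate numerically. The following sketch assumes, as in Corollary 1, the same \(\Sigma_{\mathsf{w}}\) and \(\Sigma_{\mathsf{a}}\) in every layer; per-layer covariances would simply be passed as lists.

```python
import numpy as np

def onn_covariance(Ws, es, Sigma_m, Sigma_w, Sigma_a):
    """Evaluate Sigma_ONN^(L) through the recursion Sigma^(l) = T^(l)(Sigma^(l-1)),
    cf. (8). Ws: weight matrices W^(l); es: slopes e^(l) of the linear activations."""
    Sigma = Sigma_m                        # Sigma_ONN^(0) = Sigma_m
    for W, e in zip(Ws, es):
        D = np.diag(e)
        M = D @ W
        Sigma = M @ Sigma @ M.T + D @ Sigma_w @ D.T + Sigma_a
    return Sigma

# Example: two 3x3 layers with slope-0.5 linear activations and isotropic noise.
rng = np.random.default_rng(2)
Ws = [rng.normal(0, 0.4, (3, 3)) for _ in range(2)]
es = [0.5 * np.ones(3)] * 2
I = np.eye(3)
print(onn_covariance(Ws, es, 0.01 * I, 0.01 * I, 0.01 * I))
```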
### Discussion
One way to think of the AWGN model in Section 2.1 is to take a step back from the microscopic analysis of individual devices and consider an ONN as a series of black-box devices (recall also Figure 1). Each black-box device performs its designated task and acts as a communication channel with AWGN. This way of modeling in order to analyze the impact of noise can also be seen in [31]; other papers modeling optical channels include [28; 27]. Further papers considering noise in optical systems with similar noise assumptions are [35; 36], where moreover multiplicative noise is considered when an amplifier is present within the circuit [35]. Qualitatively, the results for Design A also apply to multiplicative noise; the scaling, however, may differ.
Limitations of the model. We note firstly that modeling the noise in ONNs as AWGN is warranted only in an operating regime with many photons, and is thus unlikely to be a good model for ONNs that operate in a regime with just a few photons.
Secondly, due to physical device features and operation conditions, weights, activations, and outputs can only be realized in ONNs if their values lie in certain ranges. Such constraints are not part of the model in Section 2. Fortunately, however, the implied range restrictions are usually not a problem in practice. For example, activation functions like sigmoid and \(\tanh\) map into \([0,1]\) and \([-1,1]\), respectively. Additional regularization rules like weight decay also move the entries of weight matrices in NNs towards smaller values. In case the physical constraints are violated, one can increase the weight decay parameter to further penalize large weights during training, leading to smaller weights so that the ONN is again applicable.
## 3 Results--Design A
### Reducing the noise in feed-forward ONNs (Design A)
Recall that an example of Design A is presented in Figure 2(b). Algorithm 1 constructs this tree-like network, given the desired number of copies \(n_{0},\ldots,n_{L}\) per layer.
```
Require: Input \(\mathbf{n}=(n_{\ell})_{\ell=0,\ldots,L}\)
Require: \(\prod_{i=0}^{L}n_{i}\) copies of input \(x^{(0)}\), named \({}^{1}x^{(0)},\ldots,{}^{\left(\prod_{i=0}^{L}n_{i}\right)}x^{(0)}\)
for \(\ell=0,\ldots,L-1\) do
  for \(\alpha=1,\ldots,\prod_{i=\ell}^{L-1}n_{i}\) do
    \({}^{\alpha}\xi^{(\ell)}\gets W^{(\ell+1)}\,{}^{\alpha}x^{(\ell)}+b^{(\ell+1)}+\mathrm{Normal}(0,\Sigma_{\mathbf{w}})\)
  end for
  for \(\alpha=0,\ldots,\big(\prod_{i=\ell+1}^{L-1}n_{i}\big)-1\) do
    \({}^{\alpha}y^{(\ell)}\xrightarrow{\mathrm{averaging}}n_{\ell}^{-1}\left({}^{\alpha n_{\ell}+1}\xi^{(\ell)}+\cdots+{}^{\alpha n_{\ell}+n_{\ell}}\,\xi^{(\ell)}\right)\)
    \({}^{\alpha}x^{(\ell+1)}\leftarrow\sigma^{(\ell)}({}^{\alpha}y^{(\ell)})+\mathrm{Normal}(0,\Sigma_{\mathbf{a}})\)
  end for
end for
return \({}^{1}x^{(L)}\)
```
**Algorithm 1** Algorithm to construct a noise reducing network
Observe that in Design A, the numbers of copies utilized in each layer, the \(n_{\ell}\), are fixed. There is however only a single copy in the last layer; its output is the unique output of the ONN. Each other layer receives multiple independent inputs. With each of the independent copies, weighted addition is performed, and the results are averaged to produce the layer's single output. Having independent incoming copies is achieved by having multiple independent branches of the prior partial networks feeding into a given layer. This means that the single layer \(L\) receives \(n_{L-1}\) independent inputs from \(n_{L-1}\) independent copies of layer \(L-1\). Each of the \(n_{L-1}\) copies of layer \(L-1\) receives \(n_{L-2}\) inputs from independent copies of layer \(L-2\). Generally, let \(n_{\ell-1}\) be the number of copies of layer \(\ell-1\) that act as inputs to layer \(\ell\).
Observe that all copies are created upfront. That means there are \(\prod_{\ell=0}^{L-1}n_{\ell}\) copies of the data. By Algorithm 1, \(\prod_{\ell=1}^{L-1}n_{\ell}\) copies of the first layer are arrayed in parallel to each other, and each of them processes \(n_{0}\) copies of the data. The outputs of the \(\prod_{\ell=1}^{L-1}n_{\ell}\) arrayed copies of the first layer are the input to the \(\prod_{\ell=2}^{L-1}n_{\ell}\) arrayed copies of the second layer, and so on.
Notice that noise stemming from applying the activation function is subject to a linear transformation in the next layer. The activation function noise can therefore be considered as weight-noise by inserting an identity layer with \(\sigma=\mathrm{id}\), \(W=I\) and \(b=0\).
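A recursive sketch of the tree that Algorithm 1 builds is given below; scalar noise levels stand in for the diagonal covariance matrices, and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def design_a(x, Ws, bs, acts, n, sigma_m, sigma_w, sigma_a):
    """Evaluate a Design A ONN on input x (cf. Algorithm 1).

    Ws, bs, acts: per-layer weight matrices, biases and activation functions.
    n[l]: number of independent copies of layer l feeding each copy of layer l+1.
    """
    L = len(Ws)

    def subtree(level):
        # One independent noisy copy of the output of layer `level`.
        if level == 0:
            return x + rng.normal(0, sigma_m, size=x.shape)  # modulation noise
        W, b, f = Ws[level - 1], bs[level - 1], acts[level - 1]
        # Average n[level-1] independent noisy weighted additions, then activate.
        pre = np.mean([W @ subtree(level - 1) + b
                       + rng.normal(0, sigma_w, size=W.shape[0])
                       for _ in range(n[level - 1])], axis=0)
        return f(pre) + rng.normal(0, sigma_a, size=W.shape[0])

    return subtree(L)
```

For instance, with \(L=2\) layers and \(n=[3,2]\), the call processes \(3\cdot 2=6\) independently modulated copies of \(x\), mirroring the count \(\prod_{\ell=0}^{L-1}n_{\ell}\) above.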
We want to verify that a Design A ONN yields outputs that are, with high probability, close to those of the original noiseless NN. Let \(\tilde{\Psi}^{\mathrm{ONN}}(x,w)\) denote the Design A ONN, and let
\[\mathbb{P}\Big{[}\sup_{x\in\mathbb{R}^{d}}\big{\|}\Psi^{\mathrm{ NN}}(x,w)-\tilde{\Psi}^{\mathrm{ONN}}(x,w)\big{\|}_{2}<D_{L}\Big{]}>1-C_{L}, \tag{9}\]
be the desired property. The main result of this section is the following:
**Theorem 1**: _For any \(C_{L}\in(0,1)\), any \(D_{L}\in(0,\infty)\), and any nonlinear NN \(\Psi^{\text{NN}}\) with Lipschitz-continuous activation functions with Lipschitz constants \(a^{(i)}\) and weight matrices \(W^{(i)}\), Algorithm 1 is able to construct an ONN \(\tilde{\Psi}^{\text{ONN}}\) that satisfies (9)._
_Let the covariance matrices of the occurring AWGN be diagonal matrices and let each of the values of the covariance matrices be upper bounded by \(\sigma^{2}\geq 0\). For any set of \((\kappa_{i})_{i=1,\ldots,L}\), \((\delta_{i})_{i=1,\ldots,L}\) such that \(\prod(1-\kappa_{\ell})>1-C_{L}\) and \(\sum\delta_{\ell}\leq D_{L}\), a sufficient number of copies to construct an ONN \(\tilde{\Psi}^{\mathrm{ONN}}\) that satisfies (9) is given by_
\[n_{L}=1,\]
\[n_{\ell}\geq\frac{\sigma^{2}\Big{(}\prod_{i=\ell+1}^{L}a^{(i)}\prod_{i=\ell+2}^{L}\|W^{(i)}\|_{\mathrm{op}}\Big{)}^{2}}{\delta_{\ell+1}^{2}}\times\left(\sqrt{2}\frac{\Gamma((d_{\ell+1}+1)/2)}{\Gamma(d_{\ell+1}/2)}+\sqrt{C^{2}\frac{4\sqrt[4]{4}}{2\sqrt[4]{4}-2}(-\ln(\kappa_{\ell+1}/2))\frac{1}{cm_{\ell+1}}}\right)^{2},\]
_where \(m_{\ell+1}=\prod_{k=\ell+1}^{L}n_{k}\), \(\Gamma\) denotes the gamma function, and \(c,C>0\) are the absolute constants appearing in Hoeffding's inequality (16); compare (18) in the proof below._
To prove Theorem 1, we bound the deviation between the Design A ONN and the noiseless NN, starting at the output layer. Writing \(\tilde{x}^{i}\) for the independent noisy copies entering the final layer and using the Lipschitz continuity of \(\sigma^{(L)}\) (with Lipschitz constant \(a^{(L)}\)),

\[\big{\|}\tilde{\Psi}^{\mathrm{ONN}}(x,w)-\Psi^{\mathrm{NN}}(x,w)\big{\|}_{2}\leq a^{(L)}\Big{\|}\frac{1}{n_{L-1}}\sum_{i=1}^{n_{L-1}}\Big{(}W^{(L)}\tilde{x}^{i}+b^{(L)}+N^{(i)}\Big{)}-\Big{(}W^{(L)}\sigma^{(L-1)}\big{(}W^{(L-1)}\big{(}\dots\big{)}+b^{(L-1)}\big{)}+b^{(L)}\Big{)}\Big{\|}_{2}\]
\[\leq\frac{a^{(L)}\|W^{(L)}\|_{\mathrm{op}}}{n_{L-1}}\Big{\|}\sum_{i=1}^{n_{L-1}}\Big{(}\tilde{x}^{i}-\sigma^{(L-1)}\big{(}W^{(L-1)}\big{(}\dots\big{)}+b^{(L-1)}\big{)}\Big{)}\Big{\|}_{2}+a^{(L)}\Big{\|}\frac{1}{n_{L-1}}\sum_{i=1}^{n_{L-1}}N^{(i)}\Big{\|}_{2}.\]
In the next iteration step the term
\[\Big{\|}\sum_{i=1}^{n_{L-1}}\Big{(}\tilde{x}^{i}-\sigma^{(L-1)}\big{(}W^{(L-1)} \big{(}\dots\big{)}+b^{(L-1)}\big{)}\Big{)}\Big{\|}_{2}\]
is further bounded by first using the triangle inequality and thereafter bounding in the same way as we did in the first layer:
\[\Big{\|}\sum_{i=1}^{n_{L-1}}\Big{(}\sigma^{(L-1)}\Big{(}\frac{1}{n _{L-2}}\sum_{j_{i}=1}^{n_{L-2}}\Big{(}W^{(L-1)}\tilde{x}^{j_{i}}+b^{(L-1)}+N^{ (j_{i})}\Big{)}\Big{)}\] \[-\sigma^{(L-1)}\big{(}W^{(L-1)}\big{(}\sigma^{(L-2)}\big{(}W^{(L- 2)}\big{(}\dots\big{)}+b^{(L-2)}\big{)}\big{)}\] \[\qquad\qquad+b^{(L-1)}\big{)}\Big{)}\Big{\|}_{2}\] \[\leq\frac{a^{(L-1)}\|W^{(L-1)}\|_{\text{op}}}{n_{L-2}}\] \[\quad\times\sum_{i=1}^{n_{L-1}}\Big{\|}\sum_{j_{i}=1}^{n_{L-2}} \Big{(}\tilde{x}^{j_{i}}-\sigma^{(L-2)}\big{(}W^{(L-2)}\big{(}\dots\big{)}+b^ {(L-2)}\big{)}\Big{)}\Big{\|}_{2}\] \[\quad+a^{(L-1)}\sum_{i=1}^{n_{L-1}}\Big{\|}\frac{1}{n_{L-2}}\sum_ {j_{i}=1}^{n_{L-2}}N^{(j_{i})}\Big{\|}_{2}.\]
Here,
\[\sum_{i=1}^{n_{L-1}}\Big{\|}\sum_{j_{i}=1}^{n_{L-2}}\Big{(}\tilde{x}^{j_{i}}- \sigma^{(L-2)}\big{(}W^{(L-2)}\big{(}\dots\big{)}+b^{(L-2)}\big{)}\Big{)} \Big{\|}_{2}\]
may again be bounded in the same fashion. This leads to the following recursive argument.
Let \(\mathscr{F}^{(\ell)}\) be the sum of the differences between--loosely speaking--the ends of the remaining Design A "subtrees" and the noiseless NN's "subtrees" at layer \(\ell\). More specifically, let

\[\mathscr{F}^{(L)}:=\Big{\|}\tilde{x}-\sigma^{(L)}\big{(}W^{(L)}\big{(}\dots\big{)}+b^{(L)}\big{)}\Big{\|}_{2};\]
\[\mathscr{F}^{(\ell)}:=\sum_{i_{L}=1}^{n_{L-1}}\sum_{i_{L}i_{L-1}=1}^{n_{L-2}}\dots\sum_{i_{\ell+2}\dots i_{L-1}=1}^{n_{\ell+1}}\Big{\|}\sum_{i_{\ell+1}i_{\ell+2}\dots i_{L-1}=1}^{n_{\ell}}\Big{(}\tilde{x}^{i_{\ell+1}\dots i_{L-1}}-\sigma^{(\ell)}\big{(}W^{(\ell)}\big{(}\dots\big{)}+b^{(\ell)}\big{)}\Big{)}\Big{\|}_{2}.\]

Iterating the bound from the last layer downwards shows that the total deviation is at most \(\sum_{\ell}\mathscr{S}_{\ell}\), where

\[\mathscr{S}_{\ell}\stackrel{{\mathsf{(d)}}}{{=}}\frac{\prod_{i=\ell}^{L}a^{(i)}\prod_{i=\ell+1}^{L}\|W^{(i)}\|_{\mathrm{op}}}{m_{\ell}\sqrt{n_{\ell-1}}}\,\sigma\sum_{i=1}^{m_{\ell}}\|N^{(i)}\|_{2},\qquad N^{(i)}\stackrel{{\mathsf{(d)}}}{{=}}\mathrm{Normal}(0,I_{d_{\ell}}), \tag{14}\]

denotes the noise contribution of layer \(\ell\) with \(m_{\ell}=\prod_{k=\ell}^{L}n_{k}\); note that this bound does not depend on \(x\). It therefore suffices to guarantee, for each \(\ell\),

\[\mathbb{P}\big{[}\mathscr{S}_{\ell}<\delta_{\ell}\big{]}>1-\kappa_{\ell}, \tag{15}\]

because then
\[\mathbb{P}\left[\sum_{\ell}\mathscr{S}_{\ell}<D_{L}\right]\geq \mathbb{P}\left[\bigcap_{\ell}\{\mathscr{S}_{\ell}<\delta_{\ell}\}\right]\] \[=\prod_{\ell}\mathbb{P}\Big{[}\mathscr{S}_{\ell}<\delta_{\ell} \Big{]}>\prod_{\ell}(1-\kappa_{\ell})>1-C_{L}.\]
Here, in the first inequality the dependence on \(x\) disappears due to (14).
#### 3.3.2 Bound for deviations
We next consider the \(\mathscr{S}_{\ell}\) for which we want to guarantee that
\[\mathbb{P}\big{[}\mathscr{S}_{\ell}<\delta_{\ell}\big{]}>1- \kappa_{\ell}.\]
Let \(m_{\ell}=\prod_{k=\ell}^{L}n_{k}\). By assumption the \(N_{k}^{(j_{i})}\) are independent and identically \(\mathrm{Normal}(0,\sigma_{k}^{2})\) distributed, where \(\sigma_{k}^{2}\leq\sigma^{2}\) for some common \(\sigma^{2}\). Since we are lower bounding the number of copies required, using AWGN with higher variance only increases the lower bound, as the calculations below show. We therefore calculate the bound for \(N^{(j_{i})}\) distributed according to \(\mathrm{Normal}(0,\sigma^{2})\); re-substituting \(\sigma_{k}^{2}\) below in (18) (which is the bound given in Theorem 1) then covers the case \(N_{k}^{(j_{i})}\stackrel{{\text{\sf(d)}}}{{=}}\mathrm{Normal}(0,\sigma_{k}^{2})\).
Each component of the vector
\[\sum_{j_{i}=1}^{n_{\ell-1}}N^{(j_{i})}=\Big{(}\sum_{j_{i}=1}^{n_{ \ell-1}}N_{1}^{(j_{i})},\ldots,\sum_{j_{i}=1}^{n_{\ell-1}}N_{d_{\ell}}^{(j_{i} )}\Big{)}^{\intercal}\]
is assumed to be \(\mathrm{Normal}(0,n_{\ell-1}\sigma^{2})=\sqrt{n_{\ell-1}}\sigma\mathrm{Normal }(0,1)\) distributed. It then holds that
\[\sum_{i=1}^{m_{\ell}}\Bigl{\|}\sum_{j_{i}=1}^{n_{\ell-1}}N^{(j_{i })}\Bigr{\|}_{2}\stackrel{{\text{\sf(d)}}}{{=}}\sum_{i=1}^{m_{ \ell}}\sqrt{n_{\ell-1}}\sigma\|\mathrm{Normal}(0,I_{d})\|_{2}.\]
This is a sum of independent chi-distributed random variables, which means they are sub-gaussian (see below that we can calculate the sub-gaussian norm and it is indeed finite). Thus Hoeffding's inequality applies, according to which, for \(X_{1},\ldots,X_{n}\) independent, mean zero, sub-gaussian random variables, for every \(t\geq 0\)
\[\mathbb{P}\Big{[}\Bigl{|}\sum_{i=1}^{N}X_{i}\Bigr{|}<t\Bigr{]}>1- 2\exp\Bigl{(}-\frac{ct^{2}}{\sum_{i=1}^{N}\|X_{i}\|_{\psi_{2}}^{2}}\Bigr{)} \tag{16}\]
holds; see e.g. (37, Theorem 2.6.2). Here \(c>0\) is an absolute constant (see (37, Theorem 2.6.2)) and
\[\|X\|_{\psi_{2}}:=\inf\bigl{\{}t>0:\mathbb{E}[\exp(X^{2}/t^{2})] \leq 2\bigr{\}}.\]
To apply Hoeffding's inequality in our setting, we need to center the occurring random variables. For \(N^{(i)}\sim\mathrm{Normal}(0,I_{d})\), the term \(\|N^{(i)}\|_{2}\) is chi distributed with mean
\[\mu_{d}=\sqrt{2}\frac{\Gamma((d+1)/2)}{\Gamma(d/2)}, \tag{17}\]
where \(\Gamma\) is the gamma function, see e.g. (38, p.238).
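The mean (17) is easy to sanity-check numerically; a small sketch (using SciPy's log-gamma for numerical stability) is:

```python
import numpy as np
from scipy.special import gammaln

def chi_mean(d):
    """mu_d = sqrt(2) * Gamma((d+1)/2) / Gamma(d/2), cf. (17)."""
    return np.sqrt(2.0) * np.exp(gammaln((d + 1) / 2) - gammaln(d / 2))

# Monte Carlo check: the norm of a d-dimensional standard normal is chi-distributed.
rng = np.random.default_rng(1)
d = 64
samples = np.linalg.norm(rng.standard_normal((100_000, d)), axis=1)
print(chi_mean(d), samples.mean())  # both values agree closely (about 7.97)
```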
Consider
\[\mathbb{P}\left[\frac{\prod_{i=\ell}^{L}a^{(i)}\prod_{i=\ell+1}^{ L}\|W^{(i)}\|_{\mathrm{op}}}{m_{\ell}\sqrt{n_{\ell-1}}}\sigma\sum_{i=1}^{m_{ \ell}}\|N^{(i)}\|_{2}<\delta_{\ell}\right]\] \[=\mathbb{P}\left[\frac{\prod_{i=\ell}^{L}a^{(i)}\prod_{i=\ell+1}^ {L}\|W^{(i)}\|_{\mathrm{op}}}{m_{\ell}\sqrt{n_{\ell-1}}}\sigma\sum_{i=1}^{m_{ \ell}}\Bigl{(}\|N^{(i)}\|_{2}-\mu_{d_{\ell}}\Bigr{)}\right.\] \[\left.\hskip 56.905512pt<\delta_{\ell}-\frac{\prod_{i=\ell}^{L}a^{( i)}\prod_{i=\ell+1}^{L}\|W^{(i)}\|_{\mathrm{op}}}{m_{\ell}\sqrt{n_{\ell-1}}} \sigma m_{\ell}\mu_{d_{\ell}}\right]\]
which equals
\[\mathbb{P}\Bigg{[}\sum_{i=1}^{m_{\ell}} \Big{(}\|N^{(i)}\|_{2}-\mu\Big{)}\] \[<\frac{m_{\ell}\sqrt{n_{\ell-1}}\delta_{\ell}}{\sigma\prod_{i= \ell}^{L}a^{(i)}\prod_{i=\ell+1}^{L}\|W^{(i)}\|_{\mathrm{op}}}-m_{\ell}\mu \Bigg{]}\]
and is lower bounded (compare to (16)) by
\[1-2\exp\left(\frac{-c\Big{(}\frac{m_{\ell}\sqrt{n_{\ell-1}}\delta_{\ell}}{\sigma\prod_{i=\ell}^{L}a^{(i)}\prod_{i=\ell+1}^{L}\|W^{(i)}\|_{\mathrm{op}}}-m_{\ell}\mu_{d_{\ell}}\Big{)}^{2}}{\sum_{i=1}^{m_{\ell}}C^{2}\Bigl{\|}\|N^{(i)}\|_{2}-\mu\Bigr{\|}_{\psi_{2}}^{2}}\right),\]
which in turn is lower bounded by
\[1-2\exp\left(\frac{-c\Big{(}\frac{m_{\ell}\sqrt{n_{\ell-1}}\delta_{\ell}}{\sigma\prod_{i=\ell}^{L}a^{(i)}\prod_{i=\ell+1}^{L}\|W^{(i)}\|_{\mathrm{op}}}-m_{\ell}\mu_{d_{\ell}}\Big{)}^{2}}{\sum_{i=1}^{m_{\ell}}C^{2}\Bigl{\|}\|N^{(i)}\|_{2}\Bigr{\|}_{\psi_{2}}^{2}}\right),\]
where \(C>0\) is an absolute constant (see (37, Lemma 2.6.8)). For a chi distributed random variable \(\mathbf{X}\) it holds that
\[\mathbb{E}[\exp(\mathbf{X}^{2}/t^{2})]=M_{\mathbf{X}^{2}}(1/t^{2})\]
where \(M_{\mathbf{X}^{2}}(s)\) is the moment generating function of \(\mathbf{X}^{2}\)--a chi-squared distributed random variable. It is known (see e.g. (39, Appendix 13)) that
\[M_{\mathbf{X}^{2}}(s)=(1-2s)^{-d_{\ell}/2}\]
for \(s<\frac{1}{2}\). Accordingly for \(2<t^{2}\), the property in the definition of the sub-gaussian norm
\[\mathbb{E}[\exp(\mathbf{X}^{2}/t^{2})]=\Big{(}1-2\frac{1}{t^{2}} \Big{)}^{-d_{\ell}/2}\leq 2\]
is satisfied for all \(t\) for which
\[t\geq\max\left\{\sqrt{\frac{4\sqrt[4]{4}}{2\sqrt[4]{4}-2}},\sqrt{2} \right\}=\sqrt{\frac{4\sqrt[4]{4}}{2\sqrt[4]{4}-2}}\]
holds. The square of the sub-gaussian norm of the chi distributed random variables is thus
\[\Bigl{\|}\|N^{(i)}\|_{2}\Bigr{\|}_{\psi_{2}}^{2}=\frac{4\sqrt[4]{4}}{2\sqrt[4] {4}-2}.\]
Substituting the norm into the lower bound yields
\[1-2\exp\left(\frac{-c\Big{(}\frac{m_{\ell}\sqrt{n_{\ell-1}}\delta_{\ell}}{\sigma\prod_{i=\ell}^{L}a^{(i)}\prod_{i=\ell+1}^{L}\|W^{(i)}\|_{\mathrm{op}}}-m_{\ell}\mu_{d_{\ell}}\Big{)}^{2}}{C^{2}m_{\ell}\frac{4\sqrt[4]{4}}{2\sqrt[4]{4}-2}}\right).\]
In order to achieve (15), a sufficient criterion is
\[\frac{\kappa_{\ell}}{2}\geq\exp\left(\frac{-cm_{\ell}\Big{(}\frac{\sqrt{n_{\ell-1}}\delta_{\ell}}{\sigma\prod_{i=\ell}^{L}a^{(i)}\prod_{i=\ell+1}^{L}\|W^{(i)}\|_{\mathrm{op}}}-\mu_{d_{\ell}}\Big{)}^{2}}{C^{2}\frac{4\sqrt[4]{4}}{2\sqrt[4]{4}-2}}\right).\]
Solving for \(n_{\ell-1}\) leads to
\[n_{\ell-1}\geq \frac{\sigma^{2}\Big{(}\prod_{i=\ell}^{L}a^{(i)}\prod_{i=\ell+1}^{L}\|W^{(i)}\|_{\mathrm{op}}\Big{)}^{2}}{\delta_{\ell}^{2}}\] \[\times\left(\sqrt{C^{2}\frac{4\sqrt[4]{4}}{2\sqrt[4]{4}-2}(-\ln(\kappa_{\ell}/2))\frac{1}{cm_{\ell}}}+\mu_{d_{\ell}}\right)^{2}. \tag{18}\]
If we substitute the expression in (17) for \(\mu_{d_{\ell}}\), (18) becomes the bound as seen in Theorem 1.
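For completeness, (18) can be evaluated numerically as sketched below. Since the absolute constants \(c\) and \(C\) from Hoeffding's inequality are not available in closed form, setting them to \(1\) here is purely illustrative, as are the example parameter values.

```python
import numpy as np
from scipy.special import gammaln

def copies_lower_bound(sigma, a, W_ops, delta, kappa, m, d, c=1.0, C=1.0):
    """Sufficient n_{l-1} per (18).

    sigma: common noise standard deviation; a: Lipschitz constants a^(l), ..., a^(L);
    W_ops: operator norms ||W^(i)||_op for i = l+1, ..., L;
    delta, kappa: slack parameters of layer l; m: m_l; d: layer width d_l.
    """
    mu_d = np.sqrt(2.0) * np.exp(gammaln((d + 1) / 2) - gammaln(d / 2))  # (17)
    prefactor = sigma**2 * (np.prod(a) * np.prod(W_ops))**2 / delta**2
    k = C**2 * 4 * 4**0.25 / (2 * 4**0.25 - 2)   # squared sub-gaussian norm factor
    return prefactor * (np.sqrt(k * (-np.log(kappa / 2)) / (c * m)) + mu_d)**2

# Illustrative call for a layer of width 84 (as in LeNet's penultimate layer):
print(copies_lower_bound(sigma=0.05, a=[1.0, 1.0], W_ops=[2.0],
                         delta=0.1, kappa=0.01, m=1, d=84))
```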
### Conclusion
Within the context of the model described in Section 2, we have established that any feed-forward NN can be approximated arbitrarily well by ONNs constructed using Design A. This is Theorem 1 in essence.
This result has two consequences when it comes to the physical implementation of ONNs. On the one hand, it is guaranteed that the theoretical expressiveness of NNs can be retained in practice. On the other hand, Design A allows one to improve the accuracy of a noisy ONN to a desired level, and in fact bring the accuracy arbitrarily close to that of any state-of-the-art feed-forward noiseless NNs. Let us finally remark that the high bandwidth of photonic circuits may be of use when implementing Design A.
## 4 Results--Design B
### Reducing noise in feed-forward linear ONNs (Design B)
Recall that an example of Design B is presented in Figure 2(c). Algorithm 2 constructs this network, given a desired number of copies \(m\) in each layer.
```
Require: \(m\) copies of input \({}^{1}x^{(0)},\dots,{}^{m}x^{(0)}\)
for \(\ell=1,\dots,L\) do
  for \(\alpha=1,\dots,m\) do
    \({}^{\alpha}\xi^{(\ell)}\gets W^{(\ell)}\,{}^{\alpha}x^{(\ell-1)}+b^{(\ell)}+\text{Normal}(0,\Sigma_{\mathsf{w}})\)
  end for
  \(y^{(\ell)}\xrightarrow[]{\text{combining}}{}^{1}\xi^{(\ell)}+\dots+{}^{m}\xi^{(\ell)}\)
  \(({}^{1}y^{(\ell)},\dots,{}^{m}y^{(\ell)})\xrightarrow[]{\text{splitting}}m^{-1}y^{(\ell)}\)
  for \(\alpha=1,\dots,m\) do
    \({}^{\alpha}x^{(\ell)}\leftarrow\sigma^{(\ell)}({}^{\alpha}y^{(\ell)})+\text{Normal}(0,\Sigma_{\mathsf{act}})\)
  end for
end for
return \(m^{-1}\sum_{\alpha=1}^{m}{}^{\alpha}x^{(L)}\)
```
**Algorithm 2** Algorithm to construct a noise reducing network
Calculating the output of a NN using Design B first requires fixing a number \(m\). The input data \(x^{(0)}\) is then modulated \(m\) times, creating \(m\) noisy realizations of the input \(({}^{\alpha}x^{(0)})_{\alpha=1,\dots,m}\). The weighted addition step and the activation function of each layer are singled out and copied \(m\) times. Both the copies of the weighted addition step and those of the activation function of each layer are arranged in parallel and applied to the \(m\) inputs, resulting in \(m\) outputs. The \(m\) parallel outputs of the weighted addition are merged into a single output and afterwards split into \(m\) pieces. The \(m\) pieces are each sent to one of the \(m\) activation function mechanisms for processing. The resulting \(m\) activation values are the output of the layer. If it is the last layer, the \(m\) activation values are merged to produce the final output. These steps are formally described in Algorithm 2. A schematic representation of Design B can be seen in Figure 2(c).
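The procedure is easy to prototype numerically. Below is a minimal NumPy sketch of one noisy forward pass through Design B; the layer sizes, the noise levels, and the omission of combining/splitting noise are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def design_b_forward(x0, Ws, bs, acts, m, s_w=0.05, s_act=0.05):
    """One noisy forward pass through Design B (cf. Algorithm 2).

    Ws, bs, acts: per-layer weight matrices, biases and activation functions.
    m: number of parallel copies per layer; s_w, s_act: noise std. deviations.
    Combining/splitting noise is taken as zero here for brevity.
    """
    copies = [x0.copy() for _ in range(m)]       # m modulated copies of the input
    for W, b, act in zip(Ws, bs, acts):
        # m parallel noisy weighted additions
        xis = [W @ x + b + s_w * rng.standard_normal(W.shape[0]) for x in copies]
        y = sum(xis)                             # combining
        copies = [act(y / m) + s_act * rng.standard_normal(W.shape[0])
                  for _ in range(m)]             # splitting + noisy activations
    return sum(copies) / m                       # final merge

# tiny 2-layer example
Ws = [rng.standard_normal((4, 3)) / 2, rng.standard_normal((2, 4)) / 2]
bs = [np.zeros(4), np.zeros(2)]
acts = [np.tanh, lambda y: y]
print(design_b_forward(rng.standard_normal(3), Ws, bs, acts, m=8))
```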
### Analysis of Design B
We now consider the physical and mathematical consequences of Design B.
Observe that in Design B, the \(m\) weighted additions of the \(\ell\)-th layer's input \(x^{(\ell-1)}\) result in realizations \(({}^{\alpha}\xi^{(\ell)})_{\alpha=1,\dots,m}\) of \(W^{(\ell)}x^{(\ell-1)}+b^{(\ell)}+\text{Normal}(0,\Sigma_{\mathsf{w}})\). These realizations are then combined, resulting in
\[mW^{(\ell)}x^{(\ell-1)}+mb^{(\ell)}+\text{Normal}(0,m\Sigma_{\mathsf{w}})+\text{ Normal}(0,\Sigma_{\mathsf{sum}}).\]
Splitting the signal again into \(m\) parts, each signal carries information following the distribution
\[W^{(\ell)}x^{(\ell-1)}+b^{(\ell)} +\text{Normal}(0,m^{-1}\Sigma_{\mathsf{w}}+m^{-2}\Sigma_{\mathsf{ sum}})\] \[+\text{Normal}(0,\Sigma_{\mathsf{spl}}).\]
The mean of the normal distribution is therefore the original network's pre-activation obtained from this input (that is, without perturbations). The covariance matrix of the normal distribution is \(m^{-1}\Sigma_{\mathsf{w}}+m^{-2}\Sigma_{\mathsf{sum}}+\Sigma_{\mathsf{spl}}\). Each of those signals is fed through the mechanism applying the activation function, yielding \(m\) noisy versions of the output, distributed according to
\[x^{(\ell)}\mid x^{(\ell-1)}\] \[\stackrel{{(\mathsf{d})}}{{=}}\sigma^{(\ell)}\big{(}W^ {(\ell)}x^{(\ell-1)}+b^{(\ell)}\] \[\qquad\qquad+\text{Normal}(0,m^{-1}\Sigma_{\mathsf{w}}+m^{-2} \Sigma_{\mathsf{sum}}+\Sigma_{\mathsf{spl}})\big{)}\] \[+\text{Normal}(0,\Sigma_{\mathsf{act}}).\]
The effect of Design B is thus that \(T^{(\ell)}(\Sigma)\) in (8) is replaced by
\[T^{(\ell)}_{m}(\Sigma):=\frac{1}{m}D^{(\ell)}W^{(\ell)}\Sigma\big(D^{(\ell)}W^{(\ell)}\big)^{\intercal}+\frac{1}{m}D^{(\ell)}\Sigma_{\mathsf{w}}\big(D^{(\ell)}\big)^{\intercal}+\frac{1}{m^{2}}D^{(\ell)}\Sigma_{\mathrm{sum}}\big(D^{(\ell)}\big)^{\intercal}+D^{(\ell)}\Sigma_{\mathrm{spl}}\big(D^{(\ell)}\big)^{\intercal}+\Sigma_{\mathrm{a}};\]
see Appendix A.2.2. Observe also that \(\Sigma_{a}^{(\ell)}\) can be written as \((1/m)m\Sigma_{a}^{(\ell)}\). Therefore, if we substitute the matrix \(\bar{\Sigma}_{a}^{(\ell)}=m\Sigma_{a}^{(\ell)}\) for \(\Sigma_{a}^{(\ell)}\) in \(T^{(\ell)}(\Sigma)\), we can write
\[T_{m}^{(\ell)}(\Sigma) =m^{-1}T^{(\ell)}(\Sigma)+m^{-2}D^{(\ell)}\Sigma_{\mathrm{sum}} \big{(}D^{(\ell)}\big{)}^{\intercal}\] \[\quad+D^{(\ell)}\Sigma_{\mathrm{spl}}\big{(}D^{(\ell)}\big{)}^{ \intercal}.\]
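For concreteness, the covariance map \(T_{m}^{(\ell)}\) and the rewriting above can be checked numerically; the following sketch uses arbitrary small matrices and noise covariances, chosen only for illustration.

```python
import numpy as np

def T_m(Sigma, D, W, Sig_w, Sig_a, Sig_sum, Sig_spl, m):
    """Covariance map of one Design-B layer (linear activations)."""
    DW = D @ W
    return (DW @ Sigma @ DW.T / m
            + D @ Sig_w @ D.T / m
            + D @ Sig_sum @ D.T / m ** 2
            + D @ Sig_spl @ D.T
            + Sig_a)

rng = np.random.default_rng(2)
d, m = 3, 4
D, W = np.diag(rng.uniform(0.5, 1, d)), rng.standard_normal((d, d)) / 2
Sig = np.eye(d) * 0.1
Sw, Sa = np.eye(d) * 0.01, np.eye(d) * 0.02
Ssum, Sspl = np.eye(d) * 0.005, np.eye(d) * 0.005

def T(Sigma, Sig_a):
    """The original (un-copied) layer map T^(l) with a given activation noise."""
    DW = D @ W
    return DW @ Sigma @ DW.T + D @ Sw @ D.T + Sig_a

# sanity check of the rewriting: T_m = m^{-1} T(.; m*Sig_a) + correction terms
lhs = T_m(Sig, D, W, Sw, Sa, Ssum, Sspl, m)
rhs = T(Sig, m * Sa) / m + D @ Ssum @ D.T / m ** 2 + D @ Sspl @ D.T
print(np.allclose(lhs, rhs))  # True
```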
We have the following analogs to Proposition 1 and Corollary 1:
**Theorem 2** (Distribution of Design B): _Assume that there exist vectors \(a^{(\ell)}\in\mathbb{R}^{d_{\ell}}\) such that \(\sigma^{(\ell)}(y)=\text{diag}(a^{(\ell)})y\). The feed-forward linear ONN constructed using Design B with \(m\) copies then satisfies_
\[\Psi_{m}^{\mathrm{ONN}}(\cdot,w)\stackrel{(\mathsf{d})}{=}\mathrm{Normal}\big(\Psi^{\mathrm{NN}}(\cdot,w),\Sigma_{\mathrm{ONN},m}^{(L)}\big),\]
_where for \(\ell=L,L-1,\dots,1\),_
\[\Sigma_{\mathrm{ONN},m}^{(\ell)}=T_{m}^{(\ell)}(\Sigma_{\mathrm{ONN},m}^{(\ell -1)});\quad\text{and}\quad\Sigma_{\mathrm{ONN},m}^{(0)}=\Sigma_{\mathrm{m}}.\]
Under the assumption of symmetric noise, a simplification of the recursion in Theorem 2, similar to that in Proposition 1, is possible. Assume \(\Sigma_{\mathrm{sum}}=\Sigma_{\mathrm{spl}}=0\). Introduce again \(P^{(\ell)}:=\prod_{i=\ell+1}^{L}D^{(i)}W^{(i)}\) for notational convenience. The following is proved in Appendix A.2:
**Corollary 2** (Symmetric noise case): _Assume that for all \(\ell\in\mathbb{N}_{+}\), \(\Sigma_{\mathrm{a}}^{(\ell)}=\Sigma_{\mathrm{a}}\) and \(\Sigma_{\mathrm{w}}^{(\ell)}=\Sigma_{\mathrm{w}}\). Then,_
\[\Sigma_{\mathrm{ONN},m}^{(L)} =\sum_{\ell=1}^{L}(m^{-1})^{L-\ell}P^{(\ell)}m\Sigma_{\mathrm{a} }(P^{(\ell)})^{\intercal}\] \[\quad+\sum_{\ell=1}^{L}(m^{-1})^{L-\ell}P^{(\ell)}D^{(\ell)} \Sigma_{\mathrm{w}}(D^{(\ell)})^{\intercal}\big{(}P^{(\ell)}\big{)}^{\intercal}\] \[\quad+(m^{-L})P^{(0)}\Sigma_{\mathrm{m}}(P^{(0)})^{\intercal}.\]
We next consider the limit of the covariance matrix in a large, symmetric linear ONN with Design B as it grows infinitely deep. Algorithm 2 guarantees boundedness of the covariance matrix in such deep ONNs if the parameter \(m\) is chosen appropriately:
**Corollary 3**: _Consider a linear ONN with Design B and parameter \(m\), that has \(L\) layers, and that satisfies the following symmetry properties: for all \(\ell\in\{1,\dots,L\}\), \(W^{(\ell)}=W\), \(D^{(\ell)}=D\), \(\Sigma_{\mathrm{a}}^{(\ell)}=\Sigma_{\mathrm{a}}\) and \(\Sigma_{\mathrm{w}}^{(\ell)}=\Sigma_{\mathrm{w}}\). Then, if \(\|D\|_{\mathrm{F}}\|W\|_{\mathrm{F}}<\sqrt{m}\), the limit \(\lim_{L\to\infty}\Sigma_{\mathrm{ONN},m}^{(L)}\) exists._
_Moreover,_
\[\lim_{L\to\infty}\Sigma_{\mathrm{ONN},m}^{(L)}\] \[=\sum_{n=0}^{\infty}m^{-(n+1)}(DW)^{n}\left(D\Sigma_{\mathrm{w}}D ^{\intercal}+m\Sigma_{\mathrm{a}}\right)\left((DW)^{n}\right)^{\intercal}.\]
Notice that the bound on the number of copies needed for the covariance matrix of an ONN to converge to a limit is independent of e.g. the Frobenius norms of the covariance matrices that describe the noise distributions. This is because, here, we are not interested in bounding the covariance matrix to a specific level; instead, we are merely interested in the existence of a limit.
### Discussion & Conclusion
Compared to Theorem 2's recursive description of the covariance matrix in any linear ONN with Design B, Corollary 2 provides a series that describes the covariance matrix in any linear, symmetric ONN with Design B. While the result holds more restrictively, it is more insightful. For example, it allows us to consider the limit of the covariance matrix in extremely deep ONNs (see Corollary 3). Corollary 3 suggests that in deep ONNs with Design B, one should choose \(m\approx\lceil(\|D\|_{\mathrm{F}}\|W\|_{\mathrm{F}})^{2}\rceil\) in order to control the noise and not be too inefficient with the number of copies.
These results essentially mean that in a physical implementation of an increasingly deep and linear ONN, the covariance matrix can be reduced (and thus remain bounded) by applying Design B with multiple copies. The quality of the ONN's output increases as the number of copies in Design B (or Design A for that matter) is increased. Finally, it is worth mentioning that Design B could potentially be implemented such that it leverages the enormous bandwidth of optics.
## 5 Simulations
We investigate the improvements in output quality achieved by Designs A and B on a benchmark example: the convolutional neural network LeNet [40]. As measures of quality we consider the Mean Squared Error (MSE)
\[\mathrm{MSE}=\frac{1}{n}\sum_{i=1}^{n}\Big{(}\Psi^{\mathrm{ONN}}(x^{(i)},w)- \Psi^{\mathrm{NN}}(x^{(i)},w)\Big{)}^{2}\]
and the prediction accuracy
\[\frac{\#\{\text{correctly classified images}\}}{\#\{\text{images}\}}.\]
### Empirical variance
We extracted plausible values for \(\Sigma_{\mathrm{w}}\) and \(\Sigma_{\mathrm{a}}\) from the ONN implementation [41] of a 2-layer NN for classification of the Modified National Institute of Standards and Technology (MNIST) database [42]. In [41], the authors trained the NN on a classical computer and afterwards implemented the trained weights in an ONN. They then tuned noise (with the same noise model as in Section 2 of this paper) into the noiseless computer model, assuming that \(\Sigma_{\mathrm{w}}=\mathrm{diag}(\sigma_{\mathrm{w}}^{2})\) and \(\Sigma_{\mathrm{a}}=\mathrm{diag}(\sigma_{\mathrm{a}}^{2})\). They found \(\sigma_{\mathrm{w}}\in[0.08,0.1]\cdot d\) and \(\sigma_{\mathrm{a}}\in[0.1,0.15]\cdot d\) to reach the same accuracy levels as the ONN, where \(d\) denotes the diameter of the range.
### LeNet ONN: Performance when implemented via Design A and B
Convolutional NNs can be regarded as feed-forward NNs by stacking the (2D or 3D) images into column vectors and arranging the filters into a weight matrix. Thus Designs A and B are well-defined for convolutional NNs. We apply the designs to LeNet5 [40], which is trained to classify the handwritten digits in the MNIST dataset [42]. The layers are as follows (a code sketch is given after the list):
1. 2D convolutional layer with kernel size \(5\), stride \(1\) and \(2\)-padding. Output has \(6\) channels of \(28\)x\(28\) pixel representations, with the activation function being \(\tanh\);
2. average pooling layer, pooling \(2\)x\(2\) block, the output therefore is \(14\)x\(14\);
3. 2D convolutional layer with kernel size \(5\), stride \(1\) and no padding. The output has \(16\) channels of \(10\)x\(10\) pixel representations and the activation function is \(\tanh\);
4. average pooling layer, pooling \(2\)x\(2\) block, the output therefore is \(5\)x\(5\);
5. 2D convolutional layer with kernel size \(5\), stride \(1\) and no padding. The output has \(120\) channels of \(1\) pixel representations and the activation function used is \(\tanh\);
6. flattening layer, which turns the \(120\) one-dimensional channels into one \(120\)-dimensional vector;
7. dense layer with \(84\) neurons and \(\tanh\) activation function;
8. dense layer with \(10\) neurons and \(\operatorname{softmax}\) activation function.
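A minimal PyTorch sketch of the architecture just listed, provided for reference; the training setup of [40] is not reproduced here.

```python
import torch.nn as nn

# LeNet-5 as described above: conv(5, pad 2) -> avgpool -> conv(5) -> avgpool
# -> conv(5) -> flatten -> dense(84) -> dense(10), tanh activations throughout.
lenet5 = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, stride=1, padding=2), nn.Tanh(),  # 6 x 28 x 28
    nn.AvgPool2d(2),                                                 # 6 x 14 x 14
    nn.Conv2d(6, 16, kernel_size=5, stride=1), nn.Tanh(),            # 16 x 10 x 10
    nn.AvgPool2d(2),                                                 # 16 x 5 x 5
    nn.Conv2d(16, 120, kernel_size=5, stride=1), nn.Tanh(),          # 120 x 1 x 1
    nn.Flatten(),                                                    # 120
    nn.Linear(120, 84), nn.Tanh(),                                   # 84
    nn.Linear(84, 10), nn.Softmax(dim=1),                            # 10 classes
)
```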
Figures 3 and 4 show the MSE and the prediction accuracy of Design A and B for an increasing number of copies, respectively.
For simplicity we set all individual copies \(n_{i}\) per layer \(i\) in Design A equal to \(m\), that is, \(n_{i}=m\) for all \(i\). The total number of copies that Design A starts with is then \(m^{L}\). Here \(L\) is equal to \(7\). In Design B the number of copies is \(m\) _per layer_, and the total number of copies is \(mL\). In the case of one copy, Designs A and B are identical to the original network; we focus on the effect once the designs deviate from the original network (\(m\geq 2\)).
The axes in Figures 3 and 4 denote the number of copies _per layer_. We scale the copies per layer for Design A linearly, because the total number of copies for Design A grows exponentially, and we scale the copies per layer for Design B exponentially, because the total number of copies for Design B grows linearly. This way the comparison is on equal terms.
Figure 3 displays the MSE for LeNet as a function of the number of copies for each design. Weighing the additional resources needed for further copies against the diminishing benefits of adding them, we see that, for both the MSE (Figure 3) and the relative accuracy (Figure 4), already \(2\) to \(5\) copies per layer yield good results. The relative accuracy in Figure 4 is scaled such that \(0\) corresponds to the accuracy of the original NN with the noise profile (i.e., the ONN without modifications; we call this the original ONN) and \(1\) to the accuracy of the original NN without noise. The designs do not alter the fundamental operation of the original NN; therefore there should be no performance gain, and the original NN's accuracy should be considered the highest achievable, constituting the upper bound in relative accuracy of \(1\). Likewise, the lowest accuracy should be given by the original ONN, as no noise reduction is involved.
### Effect of additional layers in LeNet
In order to investigate how the depth affects the noise at the output, while keeping the operation of the network the same to ensure the results are commensurable, we insert additional layers with identity matrix and identity activation function (we will call them identity layers) into a network.
Figure 4: Relative accuracy for Design A (top) and Design B (bottom) as function of copies on LeNet5 trained for MNIST classification. The pale area contains the \(56.5\)%-confidence intervals.
Figure 3: MSE(\(\cdot 10^{2}\)) for Design A (top) and Design B (bottom) as function of copies on LeNet5 trained for MNIST classification. The pale area contains the 95%-confidence intervals.
Specifically, we take networks with the LeNet architecture as in Section 5.2, using different activation functions, while fixing the output layer to be \(\operatorname{softmax}\). We then insert identity layers between layers 1 and 2, 3 and 4, 5 and 6, as well as between layers 7 and 8. For a fixed total of additional layers, the layers are inserted in the four spots between layers \(1\&2\), \(3\&4\), \(5\&6\), and \(7\&8\) according to the tuple
\[n\mapsto\left(\Big{\lfloor}\frac{n+3}{4}\Big{\rfloor},\Big{\lfloor}\frac{n+2} {4}\Big{\rfloor},\Big{\lfloor}\frac{n+1}{4}\Big{\rfloor},\Big{\lfloor}\frac{ n}{4}\Big{\rfloor}\right).\]
The insertion pattern is illustrated in Table 1:
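Equivalently, the pattern can be transcribed directly in code; the following sketch reproduces the rows of Table 1.

```python
def insertion_pattern(n: int) -> tuple[int, int, int, int]:
    """Number of identity layers inserted between layers 1&2, 3&4, 5&6, 7&8."""
    return ((n + 3) // 4, (n + 2) // 4, (n + 1) // 4, n // 4)

# reproduces Table 1
for n in range(1, 7):
    print(n, insertion_pattern(n))
```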
Finally, we tune the variance terms of the covariance matrix in our noise model. The results are displayed in Figure 5.
In Figure 5, we observe that the \(\tanh\) and the \(\operatorname{ReLU}\) networks perform as expected. Additional noisy layers decrease the accuracy, and thus the same level of performance can only be achieved if the variance is lower. This trend can also be seen in the linear network, but to a lesser extent.
### Simulations on effective values for Design B
According to Corollary 3, the covariance matrix of a linear ONN constructed by Design B is bounded if \(m>\left(\|D\|_{\mathrm{F}}\|W\|_{\mathrm{F}}\right)^{2}\); therefore \(m=\lceil(\|D\|_{\mathrm{F}}\|W\|_{\mathrm{F}})^{2}\rceil\) is sufficient to ensure that the covariance matrix of the output distribution \(\Psi_{m}^{\mathrm{ONN}}(\cdot,w)\stackrel{(\mathsf{d})}{=}\operatorname{Normal}(\Psi^{\mathrm{NN}}(\cdot,w),\Sigma_{\mathrm{ONN},m}^{(L)})\) in Theorem 2 is bounded in linear NNs. This is derived using submultiplicativity of the norm (see (A.8)) and is therefore possibly a loose bound. We use the exact relation given by Corollary 2 for the covariance matrix in Theorem 2 to investigate the lowest values of \(m\) for which the covariance matrix starts being bounded. In Figure 6 we depict a linear NN with constant width \(4\). We vary the values of \(\|D\|_{\mathrm{F}}\) and \(\|W\|_{\mathrm{F}}\). Upon close inspection we see that the lowest value for \(m\) seems to be \(g(x,y)\simeq\lceil(xy)^{2}/\|I_{d}\|_{\mathrm{F}}^{4}\rceil\), where \(I_{d}\) is the identity matrix of dimension \(d\); see Figure 6. Because \(\|I_{d}\|_{\mathrm{F}}=\sqrt{d}\), the value for \(m\) found numerically is \(m\simeq\lceil(\|D\|_{\mathrm{F}}\|W\|_{\mathrm{F}}/d)^{2}\rceil\).
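A sketch of this numerical experiment follows, assuming \(D=I\), a width-4 network, and arbitrary noise levels; the depth cap and blow-up threshold are implementation choices, and the three printed values are only meant for comparison.

```python
import numpy as np

def smallest_m(D, W, Sig_w, Sig_a, L=2000, blowup=1e12):
    """Smallest number of copies m for which the symmetric covariance
    recursion (Sigma_sum = Sigma_spl = 0) stays bounded over L layers."""
    for m in range(1, 200):
        Sig = np.zeros_like(Sig_w)       # start from Sigma_m = 0 to isolate layer noise
        for _ in range(L):
            DW = D @ W
            Sig = DW @ Sig @ DW.T / m + D @ Sig_w @ D.T / m + Sig_a
            if np.linalg.norm(Sig) > blowup:
                break                    # diverged for this m; try the next one
        else:
            return m                     # stayed bounded for all L layers
    return None

d = 4
rng = np.random.default_rng(3)
W, D = rng.standard_normal((d, d)), np.eye(d)
m_emp = smallest_m(D, W, 0.01 * np.eye(d), 0.01 * np.eye(d))
m_cor3 = int(np.ceil((np.linalg.norm(D, 'fro') * np.linalg.norm(W, 'fro')) ** 2))
m_obs = int(np.ceil((np.linalg.norm(D, 'fro') * np.linalg.norm(W, 'fro') / d) ** 2))
print(m_emp, m_cor3, m_obs)  # empirical threshold vs Corollary 3 vs observed g(x, y)
```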
## 6 Discussion & Conclusion
Design A, introduced in Section 3, guarantees an approximation property (Theorem 1). This is achieved through technical machinery to control the noise, even though there are nonlinear activation functions involved. This method is powerful enough to yield the universal approximation property, as NNs can be approximated arbitrarily well with \(\mathsf{ONN}\)s that
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\#\) of additional layers & 1\&2 & 3\&4 & 5\&6 & 7\&8 \\ \hline
1 & 1 & 0 & 0 & 0 \\
2 & 1 & 1 & 0 & 0 \\
3 & 1 & 1 & 1 & 0 \\
4 & 1 & 1 & 1 & 1 \\
5 & 2 & 1 & 1 & 1 \\
6 & 2 & 2 & 1 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Insertion pattern.
Figure 5: Accuracy of LeNet \(\mathsf{ONN}\)s, depending on the amount of inserted identity layers and the variance level of the \(\mathsf{ONN}\), for (a) a network with \(\tanh\) activation function and one copy, (b) a network with \(\operatorname{ReLU}\) activation function and one copy, (c) a network with linear activation function and one copy, (d) a network with \(\tanh\) activation function and two copies, (e) a network with \(\operatorname{ReLU}\) activation function and two copies, (f) a network with linear activation function and two copies.
are constructed through the first design, and NNs themselves can approximate any continuous function arbitrarily well [43, Theorem 1]. Our mathematical guarantee, however, only states a sufficient number of copies, and this number grows exponentially as the number of layers increases.
We then introduced Design B in Section 4, in which the growth of the number of copies is much more benign. However, the analysis of Design B was restricted to linear NNs, and Design B might therefore not be expressive enough to have the universal approximation property: linear NNs, or NNs with algebraic polynomials as activation functions for that matter, do not possess the universal approximation property. On the flip side, the assumption of linear activation functions allowed us to characterize the distribution of the output exactly (Theorem 2).
In short, in this paper we have discussed the noise present in ONNs and described a mathematical model for it. We also investigated the numerical implications of the mathematical model, with a specific focus on the effects of depth (Figure 5). The proposed noise reduction schemes yield greater accuracy, and the theoretical results (Theorem 1 and Corollary 3) guarantee that ONNs perform just like noiseless NNs in the many-copies limit. With the designs and findings of Sections 3 and 4 we have a framework to exploit known NN wisdom, as no new training is required. Further research should address optimization algorithms that take the noise of ONNs into account, to investigate the regularization, generalization and minimization properties of trained ONNs.
## Acknowledgments
This research was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement no. 945045, and by the NWO Gravitation project NETWORKS under grant no. 024.002.003.
We would finally like to thank Bin Shi for advice on the noise level parameters of ONNs for our simulations. Furthermore, we want to thank Albert Senen-Cerda, Sanne van Kempen, and Alexander Van Werde for feedback on a draft version of this document.
|
2304.11717 | Automatized marine vessel monitoring from sentinel-1 data using
convolution neural network | The advancement of multi-channel synthetic aperture radar (SAR) system is
considered as an upgraded technology for surveillance activities. SAR sensors
onboard provide data for coastal ocean surveillance and a view of the oceanic
surface features. Vessel monitoring has earlier been performed using Constant
False Alarm Rate (CFAR) algorithm which is not a smart technique as it lacks
decision-making capabilities, therefore we introduce wavelet
transformation-based Convolution Neural Network approach to recognize objects
from SAR images during the heavy naval traffic, which corresponds to the
numerous object detection. The utilized information comprises Sentinel-1 SAR-C
dual-polarization data acquisitions over the western coastal zones of India and
with help of the proposed technique we have obtained 95.46% detection accuracy.
Utilizing this model can automatize the monitoring of naval objects and
recognition of foreign maritime intruders. | Surya Prakash Tiwari, Sudhir Kumar Chaturvedi, Subhrangshu Adhikary, Saikat Banerjee, Sourav Basu | 2023-04-23T18:09:44Z | http://arxiv.org/abs/2304.11717v1 | # Automated Marine Vessel Monitoring from Sentinel-1 Data Using Convolution Neural Network
###### Abstract
The advancement of multi-channel synthetic aperture radar (SAR) system is considered as an upgraded technology for surveillance activities. SAR sensors onboard provide data for coastal ocean surveillance and a view of the oceanic surface features. Vessel monitoring has earlier been performed using Constant False Alarm Rate (CFAR) algorithm which is not a smart technique as it lacks decision-making capabilities, therefore we introduce wavelet transformation-based Convolution Neural Network approach to recognize objects from SAR images during the heavy naval traffic, which corresponds to the numerous object detection. The utilized information comprises Sentinel-1 SAR-C dual-polarization data acquisitions over the western coastal zones of India and with help of the proposed technique we have obtained 95.46% detection accuracy. Utilizing this model can automatize the monitoring of naval objects and recognition of foreign maritime intruders.
Surya Prakash Tiwari\({}^{l}\), Sudhir Kumar Chaturvedi\({}^{2}\), Subhrangshu Adhikary\({}^{3}\), Saikat Banerjee\({}^{4}\), Sourav Basu\({}^{5}\)\({}^{1}\)Center for Environment & Water, Research Institute, King Fahd University of Petroleum & Minerals, Dhahran, 31261, Kingdom of Saudi Arabia
\({}^{2}\)Department of Aerospace Engineering, University of Petroleum and Energy Studies, Dehradun-248007, India
\({}^{3}\)Dr. B.C. Roy Engineering College, Durgapur-713206, West Bengal, India
\({}^{4}\)Department Of Mechanical Engineering, CubicX, Kolkata-700070, West Bengal, India
\({}^{5}\)Department Of Electrical Engineering, CubicX, Kolkata-700070, West Bengal, India
Synthetic Aperture Radar, Marine Vessel detection, Convolution Neural Network, Artificial intelligence
## 1 Introduction
The surveillance is intended to support efforts related to security, safety, environmental and sustainability aspects. Automatic Identification System (AIS) and Vessel Monitoring System (VMS) are the most reliable options among detection techniques [1]. Other detection types do not require cooperation on the vessel's side and are known as non-cooperative systems. These systems generally utilize imaging sensors for marine object detection [2]. Satellite-based ship detection improves the coverage of vessels that carry no tracking system, such as fishing boats and, in particular, boats operating in the investigated areas without permission. Airborne and satellite-borne data may enable the observation of marine objects remotely, independent of ground circumstances [3]. Target detection and monitoring in the maritime environment are imperative to ensure safety and security on the open sea. One of the most famous maritime surveillance services is the automatic identification system (AIS) for the precise positioning of moving ships [4]. However, most small vessels and boats are not equipped with AIS transceivers. Moreover, some vessels turn off their receivers to execute illegal activities, making it harder to detect them. This paper presents vessel detection from SAR data using a Convolution Neural Network (CNN) for better computational analysis [5]. A CNN is a deep learning algorithm that finds patterns within an image using several feature extraction operations (e.g., Canny edge detection, K-means clustering on colour pixels, grey-level co-occurrence); the most contrasting extracted features are filtered with max-pooling, and these features are passed to a chain of multi-layer perceptrons to train the model to detect similar patterns when exposed to a different image [6]. When the CNN model finds similar patterns in an image, they are converted into a max-pooling layer, which is matched against the max-pooling layer from the training data, and the detections are localized to specific regions based on the Single Shot Multibox Detector (SSD) algorithm [7]. The Constant False Alarm Rate (CFAR) algorithm is a threshold-based detection approach for vessel detection with SAR data and is widely used around the world [8]. However, because the algorithm is threshold-based, objects are misclassified under different threshold values. CFAR is prone to both false-negative and false-positive errors: with higher threshold values, ships with lower reflectance are left out, whereas other objects with high reflectance to SAR are often labelled as ships even though they might be noise. Our proposed technique addresses this by introducing a smart decision-making system, improving the overall detection reliability.
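For contrast with the learned detector, a minimal cell-averaging CFAR over a SAR intensity image might look as follows; the window sizes and the scaling factor are illustrative assumptions, not settings from the cited literature.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(img, guard=2, train=8, scale=5.0):
    """Cell-averaging CFAR: flag pixels whose intensity exceeds a threshold
    estimated from the surrounding training band (excluding guard cells)."""
    k_out = 2 * (guard + train) + 1
    k_in = 2 * guard + 1
    sum_out = uniform_filter(img, k_out) * k_out ** 2   # window sums via mean filters
    sum_in = uniform_filter(img, k_in) * k_in ** 2
    n_train = k_out ** 2 - k_in ** 2
    noise = (sum_out - sum_in) / n_train                # local clutter estimate
    return img > scale * noise                          # boolean detection mask

# synthetic speckle-like intensity image, for demonstration only
detections = ca_cfar(np.random.rayleigh(1.0, (256, 256)) ** 2)
print(detections.sum(), "pixels flagged")
```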
## 2 Data and Methods
### Data
In this study, we utilize the Copernicus Sentinel-1 SAR data for sea observation. EADS Astrium GmbH of Germany designed and built the C-SAR instrument. The instrument is capable of capturing data throughout the day and night over varying landscape conditions [9]. Sentinel-1 data were acquired over two unique locations of Indian water bodies, including coasts and ports, and both locations were considered for the experiment. There are primarily four acquisition modes for Sentinel-1 data: Stripmap (SM), Interferometric Wide swath (IW), Extra Wide swath (EW), and Wave (WV). The radar can transmit in either the vertical (V) or the horizontal (H) polarization channel, and receive in H or V mode. Hence, a Sentinel-1 acquisition uses one of the following combinations: single polarization (HH, VV, HV or VH) or dual polarization (HH+HV or VV+VH). The experiment has been conducted with IW SAR images with VV+VH polarization. A wavelet transformation has been applied to the SAR data to denoise it before passing it to the CNN model for detection.
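A minimal sketch of such a wavelet denoising step, assuming PyWavelets with a db2 wavelet and a universal soft threshold; the paper does not specify these choices, so they are illustrative.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db2", level=2):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition.
    Wavelet family, decomposition level and threshold rule are assumed choices."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # noise scale estimated from the finest diagonal detail band
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))         # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

clean = wavelet_denoise(np.random.rayleigh(1.0, (512, 512)))  # synthetic demo input
```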
The data selected for the study cover the coastline of India. The Sentinel-1 images span the Mumbai Colaba port to Navi Mumbai port area, situated on the Arabian Sea on the west coast of India (Figure 1). The Sentinel-1 images were acquired on 10 January 2020.
### Methods
A Convolution Neural Network (CNN) is a type of deep learning model that works on the principle of extracting contrasting patterns from an image and magnifying these features to make consistent detections based on similar colours, textures and shapes of objects within the image [10]. In this network, several convolution layers are combined with several densely connected layers of neural nodes, and the network is trained to identify different patterns in the images. We depict this graphically in Figure 2.
The input image is first scanned to find features and augmentations and then passed through multiple convolution layers to identify patterns in the image. A chain of activation layers is applied to transform the vectors, and the most significant features are filtered out with a max-pooling layer.
We have used 1000 vessel samples from SAR images and randomly split them into two parts: 750 for training and 250 for validation of the model. To train the model quickly, with little training data and around 200 epochs, we have used SSD MobileNet V2 (COCO) weights and biases to initialize the model. SSD MobileNet V2 (COCO) has been trained on over 300,000 images; by transferring its weights and biases to our network, only a small amount of data and few epochs are required to converge toward the global minimum of the loss.
## 3 Results and Discussion
Sentinel-1 acquisitions over visible regions of the Indian coast and their ports are considered for the investigation of vessel detection, with attention to unidentified maritime objects. Figures 3 and 4 present examples of the ships/vessels detected in the Mumbai Colaba port to Navi Mumbai port area with our proposed algorithm. A number of objects distinguished in the VV polarization (red circles) indicate small ships or vessels.
Figure 1: Study location of Mumbai Colaba port to Navi Mumbai port area, INDIA, situated on the Arabian Sea (Source: Copernicus Sentinel data 2020, processed by ESA).
Figure 3: Distribution pattern of the detected ships/vessels at Mumbai Colaba port to Navi Mumbai port area.
Figure 2: Prepared Convolutional Neural network structure.
Using the proposed approach, we conducted the training process with 750 vessel samples and tested on all 250 vessel samples in the test images; the observations are recorded in Table 1. We obtained up to 95.46% detection accuracy. The recall value is higher than the precision, indicating a lower chance of missing any vessel. The training took 12953.688 milliseconds to converge to a global minimum, and detecting the vessels took 3.247 milliseconds. This is a property of deep learning: it takes a long time to update the weights and biases of the nodes to reduce the cost and reach a global minimum, whereas generating results is a matter of simple arithmetic, so detection with deep learning is very fast. From this, we can conclude that although deep learning requires plenty of time to train the model, it is still a preferred choice of detection algorithm, as the detection is highly accurate and fast; moreover, the model trains quickly when initialized with SSD MobileNet V2 (COCO) weights and biases.
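For clarity, the reported precision, recall and detection accuracy relate to matched detections as sketched below; the counts in the example are illustrative only, not the paper's confusion matrix, and the accuracy definition shown is one common convention for detection tasks.

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Precision, recall and detection accuracy from matched detections."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = tp / (tp + fp + fn)   # one common convention for detection accuracy
    return precision, recall, accuracy

# illustrative counts only
print(detection_metrics(tp=232, fp=8, fn=10))
```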
## 4 Conclusion
Integrated SAR and CNN-based approaches provide improved vessel detection with better clarity and precision. Sentinel-1 is a satellite with promising remote sensing capabilities, and its SAR-C band can be utilized for automated detection of marine objects. CFAR is a popular algorithm associated with SAR for detecting marine objects; however, as the algorithm is threshold-based, it fails to recognize objects outside its threshold limits. To overcome this, we have proposed a deep learning CNN model that can smartly detect different marine vessels from SAR images with up to 95.46% accuracy within a 3.2-millisecond timeframe.
|
2305.06142 | Feature Expansion for Graph Neural Networks | Graph neural networks aim to learn representations for graph-structured data
and show impressive performance, particularly in node classification. Recently,
many methods have studied the representations of GNNs from the perspective of
optimization goals and spectral graph theory. However, the feature space that
dominates representation learning has not been systematically studied in graph
neural networks. In this paper, we propose to fill this gap by analyzing the
feature space of both spatial and spectral models. We decompose graph neural
networks into determined feature spaces and trainable weights, providing the
convenience of studying the feature space explicitly using matrix space
analysis. In particular, we theoretically find that the feature space tends to
be linearly correlated due to repeated aggregations. Motivated by these
findings, we propose 1) feature subspaces flattening and 2) structural
principal components to expand the feature space. Extensive experiments verify
the effectiveness of our proposed more comprehensive feature space, with
comparable inference time to the baseline, and demonstrate its efficient
convergence capability. | Jiaqi Sun, Lin Zhang, Guangyi Chen, Kun Zhang, Peng XU, Yujiu Yang | 2023-05-10T13:45:57Z | http://arxiv.org/abs/2305.06142v2 | # Feature Expansion for Graph Neural Networks
###### Abstract
Graph neural networks aim to learn representations for graph-structured data and show impressive performance, particularly in node classification. Recently, many methods have studied the representations of GNNs from the perspective of optimization goals and spectral graph theory. However, the feature space that dominates representation learning has not been systematically studied in graph neural networks. In this paper, we propose to fill this gap by analyzing the feature space of both spatial and spectral models. We decompose graph neural networks into determined feature spaces and trainable weights, providing the convenience of studying the feature space explicitly using matrix space analysis. In particular, we theoretically find that the feature space tends to be linearly correlated due to repeated aggregations. In this case, the feature space is bounded by the poor representation of shared weights or the limited dimensionality of node attributes in existing models, leading to poor performance. Motivated by these findings, we propose 1) feature subspaces flattening and 2) structural principal components to expand the feature space. Extensive experiments verify the effectiveness of our proposed more comprehensive feature space, with comparable inference time to the baseline, and demonstrate its efficient convergence capability.
Machine Learning, Graph Neural Networks
## 1 Introduction
Graph Neural Networks (GNNs) have shown great potential in learning representations of graph-structured data, such as social networks, transportation networks, protein interaction networks, etc. (Fan et al., 2019; Wu et al., 2020; Khoshraffar and An, 2022). In this paper, we focus on node representation learning, which is one of the most important tasks in this line of research, where the key is to represent nodes in an informative and structure-aware way.
There are two different types of graph neural networks. One is spatial, which aggregates information from neighboring nodes and updates the representation of the central node (Velickovic et al., 2018; Xu et al., 2018; Huang et al., 2020). The spectral type, on the other hand, treats the graph structure matrix, such as the Laplacian matrix, as a transformation for the nodes' attributes (signals) in the spectral domain (Defferrard et al., 2016; Chien et al., 2021; He et al., 2021, 2022). The aim is to develop flexible functions for the graph structure so that the signals of the nodes can fit the labels appropriately.
Recently, several perspectives for analyzing GNN representations have emerged, such as general optimization functions, denoising frameworks, and spectral graph theory (Zhu et al., 2021; Ma et al., 2021; Balcilar et al., 2021). However, as a determinant of representation learning, feature spaces have not been systematically studied for graph neural networks. In general representation learning, performance depends heavily on the construction of feature spaces with accessible data (Bengio et al., 2013).
In this paper, we propose to fill this gap and investigate the feature space for both spatial and spectral GNNs. Specifically, for theoretical investigations, we first abstract a linear approximation of the GNNs following the studies (Wu et al., 2019; Xu et al., 2018; Wang and Zhang, 2022). Then, we decompose the GNN components with and without parameters in the linear approximation, where the latter is considered as a feature space built by node attributes and graph structure (e.g., adjacency or Laplacian matrices), and the former denotes the learnable parameters to reweight the features.
Taking advantage of the convenience of decomposition, we examine the feature space of current models. Motivated by the fact that GNNs are expected to fit arbitrary objectives, a more comprehensive feature space reflects better representability without any assumption about the data distribution. However, we find theoretically that the feature subspaces of current GNNs are bounded by the weight-sharing mechanism and the limited dimensionality of the node attributes. In order to alleviate the above restrictions and expand the feature
space, we proposed _1) feature subspace flattening_ and _2) structural principal components_, respectively. Specifically, the former reweights all feature subspaces independently to obtain a fully expressed representation. The latter adds the principal components of graph structure matrices as a "complement" to the original feature space. It is emphasized that our proposal makes no assumptions about the graph or the target, which enjoys good generality. We perform extensive experiments on both homophilic and heterophilic datasets to demonstrate the superiority of the proposal.
Our contributions are listed below:
* Starting from representation learning, we provide the first study of the feature space formed in graph-structured data. Based on this view, we study typical spatial and spectral GNNs and identify two problems of existing GNNs caused by bounded feature spaces.
* We then propose two modifications: 1) feature subspace flattening and 2) structural principal components to expand the whole feature space.
* Extensive experiments are performed on homophilic and heterophilic datasets, and our proposal achieves significant improvements, e.g. an average accuracy increase of 32% on heterophilic graphs.
## 2 Preliminaries
In this paper, we focus on the undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), along with its node attributes of \(\mathcal{V}\) as \(X\in\mathbb{R}^{n\times d}\) and adjacency matrix \(A\in\mathbb{R}^{n\times n}\) to present \(\mathcal{E}\). GNNs take the input of the node attributes and the adjacency matrix, and output the hidden node representations, as \(H=\mathtt{GNN}(X,A)\in\mathbb{R}^{n\times d}\). By default, we employ the cross-entropy loss function in the node classification task to minimize the difference between node label \(Y\) and the obtained representation as \(\mathcal{L}(H,Y)=-\sum_{i}Y_{i}\log\mathrm{softmax}(H_{i})\).
**Spatial GNNs (with non-parametric aggregation)** mostly fall into the message-passing paradigm. For any given node, it essentially aggregates features from its neighbors and updates the aggregated feature,
\[H_{i}^{(k+1)}=\sigma\left(f_{u}\left(H_{i}^{(k)},f_{a}\left(\hat{A}_{ij},H_{j }^{(k)};j\in\mathcal{N}_{i}\right)\right)\right), \tag{1}\]
where \(\sigma\left(\cdot\right)\) is a non-linear activation function, \(H^{(k)}\) indicates the hidden representation in \(k\)-th layer, \(f_{a}\) and \(f_{u}\) are the aggregation and updating functions (Balcilar et al., 2021), \(\hat{A}=(D+I)^{-1/2}(A+I)(D+I)^{-1/2}\) is the re-normalized adjacency matrix using the degree matrix \(D\), and \(\mathcal{N}_{i}\) denotes the \(1\)-hop neighbors.
Here, we provide two examples to specify this general expression. One is the vanilla GCN\({}^{1}\) (Kipf & Welling, 2017) that adopts mean-aggregation and average-update, whose formulation is:
Footnote 1: GCN (Kipf & Welling, 2017) is also an instance of spectral GNNs; here we categorize it as spatial due to its critical role in bridging spectral GNNs to spatial interpretations.
\[H^{(k+1)}=\sigma\left(\hat{A}H^{(k)}W^{(k)}\right). \tag{2}\]
The second example shows a different update scheme with skip-connection (Xu et al., 2018; Li et al., 2019; Chen et al., 2020), which is defined as follows,
\[H^{(k+1)}=\sigma\left(\alpha^{(k)}H^{(0)}W_{0}^{(k)}+\hat{A}H^{(k)}W_{1}^{(k) }\right), \tag{3}\]
where \(\alpha^{(k)}\) controls the weight of each layer's skip-connection, \(W_{0}^{(k)},W_{1}^{(k)}\) are the transformation weights for the initial layer and the previous one, respectively.
**Spectral GNNs (polynomial-based)** originally employ the Graph Fourier transforms to get filters (Chung and Graham, 1997), such as using the eigendecomposition of the Laplacian matrix: \(\tilde{L}=I-\hat{A}=U\Lambda U^{T}\). In recent years, methods of this type have focused more on approximating arbitrary global filters using polynomials (Wang and Zhang, 2022; Zhu and Koniusz, 2020; He et al., 2021), which has shown superior performance and is written as
\[H=\sum_{k=0}^{K}\gamma^{(k)}P_{k}(\hat{L})\sigma(XW_{1})W_{2}, \tag{4}\]
where \(P_{k}(\cdot)\) donates a polynomial's \(k\)-order term; \(\gamma^{(k)}\) is the adaptive coefficients and \(W_{1},W_{2}\) are learnable parameters.
In this paper, we focus on the typical instances of the two types of graph neural networks, as indicated in the parentheses above. For spatial models, we focus on those with non-parametric aggregation, excluding learnable aggregation such as (Velickovic et al., 2018) in the analytical parts. Considering the spectral type, we focus on polynomial-based models, and other spectral filters are not included in this paper, e.g. (Levie et al., 2018; Thanou et al., 2014). It is worth noting that these cases dominate the state of the art, and we still include other methods in empirical comparisons.
**Primary observation.** From the review of GNN models, we can conclude that usually, the node attributes \(X\) and the graph structure matrices \(\hat{L}/\hat{A}\) are computed first and then some parameter matrices are applied to obtain the final node representation. Starting from this observation, in the following, we extract a linearly approximated framework including GNNs that first construct feature spaces and then apply parameters to reweight them to obtain node representations.
## 3 Analysis
To perform theoretical investigations of the feature space, we abstract a linear approximation of GNNs based on the success of the linearisation attempts of Wu et al. (2019); Xu et al. (2018); Wang and Zhang (2022). Specifically, we provide a general formulation for the linear approximation of an arbitrary graph neural network \(\overline{\mathrm{GNN}}(X,\hat{A})\) as:
\[H=\overline{\mathrm{GNN}}(X,\hat{A})=\sum_{t=0}^{T-1}\Phi_{t}(X,\hat{A})\Theta_{t}, \tag{5}\]
where \(\Phi_{t}(X,\hat{A})\in\mathbb{R}^{n\times d_{t}}\) is the non-parametric feature space constructing function that takes the graph data as input (e.g., node attributes and graph structure) and outputs a feature subspace, \(\Theta_{t}\in\mathbb{R}^{d_{t}\times c}\) is the parameter space to reweight the corresponding feature subspace for each class \(c\), and \(T\) is a hyper-parameter for the number of feature subspaces that the GNN contains. In general, in this linear approximation, a GNN model forms \(T\) feature subspaces, i.e., \(\Phi_{t}\), and outputs the sum of all the reweighted subspaces using the respective parameters \(\Theta_{t}\). Note that the (total) feature space is the union of the subspaces, \(\Phi=\{\Phi_{t}\}_{t=0,1,\cdots,T-1}\). Similarly, we have the (total) parameters \(\Theta=\{\Theta_{t}\}_{t=0,1,\cdots,T-1}\). Besides, the number of subspaces \(T\) that a GNN model obtains does not necessarily coincide with its number of layers/orders, for which we will provide some examples in the later revisiting subsection.
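A minimal NumPy sketch of the decomposition in (5), using \(\Phi_{t}=\hat{A}^{t}X\) as the subspaces; all dimensions and the random graph are arbitrary choices for illustration.

```python
import numpy as np

def build_subspaces(X, A_hat, T):
    """Non-parametric feature subspaces Phi_t = A_hat^t X of (5)."""
    subspaces, phi = [], X
    for _ in range(T):
        subspaces.append(phi)
        phi = A_hat @ phi
    return subspaces

def linear_gnn(subspaces, thetas):
    """H = sum_t Phi_t Theta_t."""
    return sum(phi @ theta for phi, theta in zip(subspaces, thetas))

n, d, c, T = 100, 16, 4, 3
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))
A = rng.random((n, n)) < 0.05
A_raw = np.asarray(A | A.T, dtype=float) + np.eye(n)   # symmetrized + self-loops
deg = A_raw.sum(1)
A_hat = A_raw / np.sqrt(np.outer(deg, deg))            # D^{-1/2}(A+I)D^{-1/2}
H = linear_gnn(build_subspaces(X, A_hat, T),
               [rng.standard_normal((d, c)) for _ in range(T)])
print(H.shape)  # (100, 4)
```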
In the following, we will first identify the feature space \(\Phi\) and the parameters \(\Theta\) for existing GNNs. Then, from the perspective of the feature space, we analyze the common mode across different model lines. Finally, we investigate and summarize the problems behind this mode.
### Revisiting Existing Models
**Spatial GNNs (with non-parametric aggregation).** We first transform the recursive formula of spatial GNNs, e.g., (1), into an explicit formula, by iterating from the initial node attributes that \(H^{(0)}=X\) and ignoring the activation function. Following Section 2, we consider two examples of spatial GNNs: the vanilla GCN (Kipf and Welling, 2017) and the one with skip-connections (Xu et al., 2018).
The linear approximated explicit formula of a \(K\)-layer GCN is:
\[H^{(K)}=\hat{A}^{K}X\prod_{i=0}^{K-1}W^{(i)}, \tag{6}\]
which forms a single feature space \(\Phi_{0}=\hat{A}^{K}X\) and parameters \(\Theta_{0}=\prod_{i=0}^{K-1}W^{(i)}\) with \(T=1\). It is a perfect example that the number of GNN layers \(K\) is not identical to the number of subspaces \(T\) a GNN forms. Formula (3) furthermore considers skip-connections, whose \(K\)-layer linear approximated explicit formula is:
\[\begin{split} H^{(K)}&=\sum_{i=0}^{K-1}\hat{A}^{i}X \alpha^{(K-1-i)}W_{0}^{(K-1-i)}\prod_{j=K-i}^{K-1}W_{1}^{(j)}\\ &+\hat{A}^{K}X\prod_{h=0}^{K-1}W_{1}^{(h)}.\end{split} \tag{7}\]
By this decomposition, the GCN with skip-connections consists of \(T=K+1\) feature subspaces, each formed as \(\Phi_{t}=\hat{A}^{t}X\). For the first \(T-1\) subspaces, the corresponding parameters are \(\Theta_{t;t<T-1}=\alpha^{(K-1-t)}W_{0}^{(K-1-t)}\prod_{j=K-t}^{K-1}W_{1}^{(j)}\), and for the last subspace \(\Phi_{T-1}\), the parameter is \(\Theta_{T-1}=\prod_{h=0}^{K-1}W_{1}^{(h)}\). Please refer to Appendix A.1 for the derivation.
**Spectral GNNs (polynomial-based).** This type is specified by the explicit formula as (4). We remove the activation function, and obtain the linear approximation of a \(K\)-order
\begin{table}
\begin{tabular}{l l l}
\hline \hline
 & Original formula & Linear approximation \\
\hline
GCN (Kipf \& Welling, 2017) & \(H^{(k+1)}=\sigma\big(\hat{A}H^{(k)}W^{(k)}\big)\) & \(H^{(K)}=\hat{A}^{K}X\prod_{i=0}^{K-1}W^{(i)}\) \\
GCN with skip-connections (Xu et al., 2018) & Eq. (3) & Eq. (7) \\
Polynomial spectral GNNs & \(H=\sum_{k=0}^{K}\gamma^{(k)}P_{k}(\hat{L})\sigma(XW_{1})W_{2}\) & \(H^{(K)}=\sum_{k=0}^{K}P_{k}(\hat{L})X\gamma^{(k)}W^{(0)}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Typical spatial and spectral GNNs decomposed into the feature space \(\Phi\) and the parameters \(\Theta\).
spectral GNNs as:
\[H^{(K)}=\sum_{k=0}^{K}P_{k}(\hat{L})X\gamma^{(k)}W^{(0)}. \tag{8}\]
We put the learnable polynomial coefficients \(\gamma^{(k)}\) together with the parameter matrices, and we combine the shared parametric matrices in (4) as \(W^{(0)}=W_{1}W_{2}\). In this way, (8) forms \(T=K+1\) feature subspaces, each denoted as \(\Phi_{t}=P_{t}(\hat{L})X\), and the parameters used to reweight the respective subspaces are \(\Theta_{t}=\gamma^{(t)}W^{(0)}\). See Appendix A.2 for further derivations.
In Table 1, we summarize typical instances of spatial and spectral methods, using different colors to distinguish the feature space \(\Phi\) (orange) and the parameters \(\Theta\) (blue). It shows that the proposed extracted view (5) can support most methods in both spatial and spectral domains.
### Analysis of the Feature Space
In this section, we first give a theoretical argument that the feature subspace of current GNNs obeys asymptotically linear correlation (see Proposition 3.1). Then, we find that the current weight-sharing mechanism weakens the expressiveness of the feature space when the strict linear correlation is satisfied (see Theorem 3.2). In the remainder, we analyze the case where the feature subspaces do not obey strict linear correlation. We find that the feature space is insufficient when the dimensionality of the node attributes is limited and no assumptions can be made about the feature construction (e.g., heterophily).
Table 1 shows that the feature space \(\Phi\) in spatial and spectral GNNs is formed by multiplying a function of the graph structure matrix with the node attributes, e.g., \(\Phi_{k}=P_{k}(\hat{L})X\). We consider \(\hat{A}^{k}X\) to be the basic element of each feature subspace, since other forms such as \(P_{k}(\hat{L})X\) and \(\hat{L}\) are all linear transformations of \(\hat{A}\). Given this, it can be concluded that the feature subspaces of GNNs are appended sequentially as the spatial layers or the spectral orders of GNNs increase, with each later subspace being the result of applying the aggregation function to the former.
This monotonous construction of the feature space will lead to a high linear correlation between each feature subspace as presented in Proposition 3.1.
**Proposition 3.1**.: _(its proof can be found in Appendix A.3) Suppose the feature subspaces are constructed sequentially by \(\{\Phi_{t}=\hat{A}^{t}X\}_{t}\). As \(i\in\mathbb{Z}\) increases, the subspace \(\Phi_{t+i}\) gradually tends to be linearly correlated with \(\Phi_{t}\)._
To better understand the property of Proposition 3.1, we provide a quantitative demonstration using the feature space of GPRGNN (Chien et al., 2021) as an example. We measure the linear correlation of the appended \(k\)-th feature (sub)space with the previous ones by calculating the mutual correlation values:
\[E_{i}^{k}=\max_{j=0,\cdots,k-1}\mu(\hat{L}^{j}X,\hat{L}^{k}X_{.i}), \tag{9}\]
where \(i\) is the index of the column in \(\hat{L}^{k}X\), and \(\mu(M_{0},M_{1})=\max_{d_{u}\in M_{0},d_{v}\in M_{1}}\cos(d_{u},d_{v})\) is the mutual-coherence of two matrices, based on the cosine similarity \(\cos\). In Figure 1, we visualize the distribution of \(\{E_{i}^{k}\}\) over all the columns for \(k=1,2,3,4\). It confirms that the linear correlation increases significantly with the number of subspaces. Therefore, both the theoretical discussion and the visualization show a trend of increasingly linear correlation between feature subspaces as the number of GNN layers/orders grows.
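The statistic in (9) is simple to compute; the following NumPy sketch evaluates it on a random graph, whose size and sparsity are arbitrary choices.

```python
import numpy as np

def mutual_coherence_scores(X, L_hat, k):
    """E_i^k of (9): max cosine similarity between each column of L^k X
    and any column of the previously accumulated subspaces L^j X, j < k."""
    def unit_cols(M):
        return M / np.linalg.norm(M, axis=0, keepdims=True)
    prev = np.hstack([unit_cols(np.linalg.matrix_power(L_hat, j) @ X)
                      for j in range(k)])
    cur = unit_cols(np.linalg.matrix_power(L_hat, k) @ X)
    return (prev.T @ cur).max(axis=0)   # one score per column of L^k X

rng = np.random.default_rng(0)
n, d = 200, 8
X = rng.standard_normal((n, d))
A = (rng.random((n, n)) < 0.05).astype(float)
A = ((A + A.T) > 0).astype(float) + np.eye(n)
deg = A.sum(1)
L_hat = np.eye(n) - A / np.sqrt(np.outer(deg, deg))
for k in range(1, 5):
    print(k, mutual_coherence_scores(X, L_hat, k).mean().round(3))
```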
Since we are dealing with a gradually linear correlation, in the following we identify two questions about the current feature construction when this condition is strictly fulfilled or not, respectively.
**Issue 1: Constraint from the weight sharing mechanism.** From Table 1 we see that existing GNNs usually share parameter weights between different subspaces. Under the condition of linear correlation, we provide a direct argument that using the weight-sharing method limits the expressiveness of the feature space.
**Theorem 3.2**.: _(its proof can be found in Appendix A.4) Suppose \(\Phi_{a},\Phi_{b}\in\mathbb{R}^{n\times d}\) are two linearly correlated feature subspaces, i.e. there exists \(W_{a}\in\mathbb{R}^{d\times d}\) such that \(\Phi_{a}W_{a}=\Phi_{b}\), and suppose a matrix \(B\in\mathbb{R}^{n\times c}\), \(c<<d\). If \(B\) can be represented by using both subspaces with a common weight \(W_{B}\), i.e., \(\gamma_{a}\Phi_{a}W_{B}+\gamma_{b}\Phi_{b}W_{B}=B\) and \(\gamma_{a},\gamma_{b}\in\mathbb{R}\), then \(B\) can always be represented by only one subspace \(\Phi_{a}\), i.e., \(\Phi_{a}W_{B}^{\prime}=B\) and \(W_{B}^{\prime}\in\mathbb{R}^{d\times c}\)._
It shows an expressiveness bound of the feature subspaces when they are linearly correlated. This linear dependency can, however, be used as redundancy and is widely used in
Figure 1: Distribution of the mutual correlation values between the later feature (sub)spaces to the previous total ones.
areas such as dictionary learning (Elad, 2010). In particular, an over-determined linear system can be relaxed to an under-determined one when linearly correlated columns are added to the regressor matrix, making it easier to optimize. However, the condition of this benefit is not met in existing GNNs due to the weight-sharing mechanism. In the next section, we propose a modification to break this constraint.
**Issue 2: Constraint from limited dimensionality of node attributes.** Proposition 3.1 clarifies the tendency of linear correlation between the feature subspaces, but in the first few subspaces this property may not be strictly satisfied. This makes weight-sharing less problematic there, and it also weakens the effectiveness of the corresponding modification. In the following, we discuss the condition that the feature spaces are not necessarily linearly correlated, and we consider two limiting scenarios for the dimension \(d\) of the node attributes, since the respective discussions are largely orthogonal.
First, if \(d\to n\), \(X\) itself is a sufficient feature space if \(X\) is sparse (e.g., bag-of-words features) or each column is linearly independent. Besides, the weight-sharing method sums different feature subspaces by learnable weights \(\gamma\) for each. As a result, the optimized sum promotes more of the subspaces that are closer to the labels. Therefore, the core is the assumptions of subspace construction (e.g., homophilic assumptions and hyperparameter search for aggregation functions) and more flexible polynomial functions (e.g., Chebyshev and Bernstein). They all have been widely studied, where the performance of weight-sharing methods is convincing, such as (Wang & Zhang, 2022).
On the other hand, when \(d<<n\), the node attributes \(X\) have a thin shape, making the regression system strictly over-determined. Without any assumption about the feature space construction, there is hardly an exact solution, even when using linearly correlated copies to perform the expansion (which we will discuss in the next section). Therefore, compared to homophilic graphs, heterophilic graphs are a more severe case, since there is hardly any reliable assumption about the feature space construction for them. For this situation, we propose the other modification below.
## 4 Proposed Method
Here, we first propose _1) feature subspace flattening_ for the detrimental condition caused by the weight-sharing mechanism given by Theorem 3.2 and Proposition 3.1. Then, to compensate for the second case when the dimensionality of the node attributes is limited, we propose _2) structural principal components_ to expand the original feature space.
### Modification 1: Feature Subspaces Flattening
For the first problem, we encourage each feature subspace to be expressed independently of the others in the model, and weight them separately. Before giving a supporting proof, we provide an illustration of this modification in Figure 2. The benefit of this proposal is given in the following.
**Theorem 4.1**.: _(its proof can be found in Appendix A.5) Suppose \(\Phi_{a},\Phi_{b}\in\mathbb{R}^{n\times d}\) are two linearly correlated feature subspaces, i.e., there exists \(W_{a}\in\mathbb{R}^{d\times d}\) such that \(\Phi_{a}W_{a}=\Phi_{b}\), and a matrix \(B\in\mathbb{R}^{n\times c}\), \(c<<d\). If \(B\) can be expressed by \(\Phi_{a}\), i.e., \(\Phi_{a}W_{B}=B\), then using both subspaces \(\Phi_{a}\) and \(\Phi_{b}\) independently, i.e., \(\Phi_{a}W^{\prime}_{a}+\Phi_{b}W^{\prime}_{b}=B\), the optimum is more easily achieved than with a weight-sharing style._
It follows from this theorem that feature subspace flattening is more effective than weight-sharing methods when the feature subspaces tend to be linearly correlated. For further discussion on parameter matrices that are stacked (e.g., GCN), please refer to Appendix A.7, where the same conclusion is maintained.
### Modification 2: Structural Principal Components
Next, we consider the second issue that the dimension of the node attributes is limited. We propose to expand the whole feature space by introducing other feature subspaces.
There are two criteria that we consider. One is that the introduced feature subspace should be highly uncorrelated with the existing feature space; otherwise, according to the analysis in the previous section, it would add little beyond the first modification. The other is that the dimension of the introduced subspace should not be too large; otherwise noise might be included, and the computational cost would also increase.
Considering this, for graph-structured data, there are two data matrices, i.e., node attributes and graph structure matrices, and the former has been explicitly exploited as one of the feature subspaces in most GNN models, as summarized
Figure 2: Architecture of our proposal
in Table 1. On the contrary, structure matrices are used only in combination with node attributes. Given these conditions, we propose to construct the expansion subspace using the truncated SVD of the structural matrix, called _structural principal components_, as modification 2. It naturally provides orthogonal subspaces, and the truncated form limits the dimension of the matrices; thus, both criteria are met. Specifically, we extract the low-dimensional information of the graph structure to obtain its principal components:
\[S=\tilde{Q}\tilde{V};\hat{A}=QVR^{T}, \tag{10}\]
where \(\tilde{Q},\tilde{V}\) are the principal components and the corresponding singular values. Besides, we prove the effectiveness of this modification in the following theorem.
**Theorem 4.2**.: _(its proof can be found in Appendix A.6) Suppose the dimensionality of the node attributes is much smaller than the number of nodes, i.e., \(d<<n,X\in\mathbb{R}^{n\times d}\), and a \(z\)-truncated SVD of \(\hat{L}\), which satisfies \(||U_{z}S_{z}-\hat{L}||_{2}<\epsilon\), where \(\epsilon\) is a sufficiently small constant. Then the linear system \((\Phi_{k},U_{z}S_{z})W_{B}^{\prime}=B\) can achieve a miner error than the linear system \(\Phi_{k}W_{B}=B\)._
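A sketch of the structural principal components in (10), using a truncated sparse SVD; the rank \(z\) is a tuning choice, not a value fixed by the paper.

```python
import numpy as np
from scipy.sparse.linalg import svds

def structural_principal_components(A_hat, z=64):
    """S = Q~ V~: top-z left singular vectors of A_hat scaled by singular values."""
    Q, v, _ = svds(A_hat, k=z)           # truncated SVD (ascending singular values)
    order = np.argsort(v)[::-1]          # reorder largest-first
    return Q[:, order] * v[order]        # n x z expansion subspace
```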
So far, we have introduced two modifications; together they yield our proposed GNN model, the Feature Expanded Graph Neural Network (FE-GNN). It is shown in Figure 2 and formulated as follows,
\[H=\sum_{k=0}^{K}P_{k}(\hat{L})XW^{(k)}+SW_{s}. \tag{11}\]
It constructs the feature space in two ways. The first part inherits the polynomial-based GNNs and multiplies the polynomials of the structural matrix, \(P_{k}(\hat{L})\), with the node attributes \(X\). Second, we use the principal components of the structural matrix to form another feature subspace \(S\). In addition, FE-GNN uses independent parameter matrices \(W^{(k)}\) and \(W_{s}\) to reweight each feature subspace flexibly.
In particular, the \(\Phi_{k}\) and \(S\) feature subspaces can have unbalanced scales, which leads to poor reweighting. We therefore add a column-wise normalization to ensure that each column of each feature subspace contributes equally to the whole feature space. Finally, to better verify the importance of the feature space view, our implementation is purely linear, without deep artifices such as activation functions or dropout (only the cross-entropy loss is kept), while we follow the original implementations of the baselines, whose non-linearities are preserved.
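For concreteness, a minimal sketch of this forward pass (Eq. (11)) is given below; the monomial basis \(P_{k}(\hat{L})=\hat{L}^{k}\), the exact form of the column normalisation, and all names are illustrative assumptions rather than the reference implementation (the Chebyshev variant follows the same pattern).

```python
import numpy as np

def fe_gnn_logits(L_hat, X, S, W_list, W_s):
    """Linear FE-GNN forward pass: sum_k P_k(L_hat) X W^(k) + S W_s.
    W_list holds K+1 independent weight matrices (one per subspace) and
    W_s reweights the structural principal components S."""
    def col_norm(M):
        # each column contributes on a comparable scale
        return M / (np.linalg.norm(M, axis=0, keepdims=True) + 1e-12)

    H, prop = 0.0, X
    for W_k in W_list:
        H = H + col_norm(prop) @ W_k   # subspace P_k(L_hat) X with its own weights
        prop = L_hat @ prop            # next monomial power of the structure matrix
    return H + col_norm(S) @ W_s
```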
### Discussion
Our analysis and proposal make few assumptions about the distribution of node attributes, the graph structure, or even their relationship. Since the feature space view is not an analysis method specific to graph-structured data or GNNs, our investigation is more general and escapes the limitations of other views. For example, the graph denoising (Zhu et al., 2021) and spectral graph theory (Balcilar et al., 2021) views both ignore the properties of node attributes, which are the key to our second proposal, and instead focus on transformations of the structure matrices. We provide a more comprehensive comparison with related work in Section 5.
In our proposed method, the first modification, which flattens the feature subspaces, improves the effectiveness of each feature subspace, but the number of parameters is necessarily higher because no parameter matrices are shared. In experiments, we surprisingly found relatively low training time costs compared to the baselines. Furthermore, the second modification could be misread as an arbitrary addition of
\begin{table}
\begin{tabular}{c c c c c c c c c c c}
\hline \hline
\multirow{2}{*}{Type} & \multirow{2}{*}{Baseline} & \multirow{2}{*}{Time (ms)} & \multicolumn{5}{c}{Homophilic graphs} & \multicolumn{3}{c}{Heterophilic graphs} \\
\cline{4-8} \cline{9-11}
 & & & Cora & CiteSeer & PubMed & Computers & Photo & Squirrel & Chameleon & Actor \\
\hline
\multirow{6}{*}{Spatial} & MLP & -- & 76.70\(\pm\)0.15 & 76.67\(\pm\)0.28 & 85.11\(\pm\)0.26 & 82.62\(\pm\)0.21 & 84.16\(\pm\)0.13 & 37.86\(\pm\)0.39 & 57.83\(\pm\)0.31 & 38.99\(\pm\)0.17 \\
 & GCN & 17.42\(\pm\)1.64 & 87.69\(\pm\)0.00 & 79.31\(\pm\)0.46 & 86.71\(\pm\)0.18 & 83.24\(\pm\)0.11 & 88.61\(\pm\)0.36 & 47.21\(\pm\)0.59 & 61.85\(\pm\)0.38 & 28.61\(\pm\)0.39 \\
 & GAT & 18.06\(\pm\)1.18 & 88.07\(\pm\)0.41 & 80.80\(\pm\)0.25 & 86.69\(\pm\)0.14 & 82.86\(\pm\)0.35 & 90.84\(\pm\)0.22 & 33.40\(\pm\)0.16 & 51.82\(\pm\)1.33 & 33.48\(\pm\)0.35 \\
 & GraphSAGE & 10.72\(\pm\)0.25 & 87.74\(\pm\)0.41 & 79.20\(\pm\)0.42 & 87.65\(\pm\)0.14 & 87.38\(\pm\)0.13 & 93.59\(\pm\)0.13 & 48.15\(\pm\)0.48 & 62.45\(\pm\)0.48 & 36.39\(\pm\)0.35 \\
 & GCNII & 8.48\(\pm\)0.24 & 87.46\(\pm\)0.31 & 80.76\(\pm\)0.30 & 88.82\(\pm\)0.08 & 84.75\(\pm\)0.22 & 93.21\(\pm\)0.25 & 43.28\(\pm\)0.35 & 61.80\(\pm\)0.44 & 38.61\(\pm\)0.26 \\
 & APPNP & 23.74\(\pm\)0.28 & 87.92\(\pm\)0.20 & 81.42\(\pm\)0.26 & 88.16\(\pm\)0.14 & 58.88\(\pm\)0.13 & 90.40\(\pm\)0.34 & 39.63\(\pm\)0.49 & 59.01\(\pm\)0.48 & 39.90\(\pm\)0.25 \\
\hline
\multirow{3}{*}{Spectral} & ChebyNet & 20.26\(\pm\)1.63 & 87.17\(\pm\)0.19 & 77.97\(\pm\)0.36 & 89.04\(\pm\)0.08 & 87.92\(\pm\)0.13 & 94.58\(\pm\)0.11 & 44.55\(\pm\)0.28 & 64.06\(\pm\)0.47 & 25.55\(\pm\)1.67 \\
 & GPRGNN & 23.55\(\pm\)1.35 & 87.97\(\pm\)0.24 & 78.57\(\pm\)0.31 & 89.11\(\pm\)0.08 & 86.07\(\pm\)0.14 & 93.99\(\pm\)0.11 & 43.56\(\pm\)0.22 & 63.67\(\pm\)0.34 & 36.93\(\pm\)0.26 \\
 & BernNet & 36.88\(\pm\)0.44 & 87.66\(\pm\)0.26 & 79.34\(\pm\)0.32 & 89.33\(\pm\)0.27 & 88.66\(\pm\)0.08 & 94.03\(\pm\)0.08 & 44.47\(\pm\)0.39 & 63.07\(\pm\)0.43 & 36.89\(\pm\)0.30 \\
\hline
\multirow{3}{*}{Unified} & GNN-LF & 52.77\(\pm\)4.50 & 88.12\(\pm\)0.06 & **83.66\(\pm\)0.06** & 87.79\(\pm\)0.05 & 87.63\(\pm\)0.05 & 93.79\(\pm\)0.06 & 39.03\(\pm\)0.08 & 59.84\(\pm\)0.09 & 41.97\(\pm\)0.06 \\
 & GNN-HF & 53.28\(\pm\)4.51 & 88.47\(\pm\)0.09 & 83.56\(\pm\)0.10 & 87.83\(\pm\)0.10 & 86.94\(\pm\)0.06 & 93.89\(\pm\)0.10 & 39.01\(\pm\)0.51 & 63.90\(\pm\)0.11 & **42.47\(\pm\)0.07** \\
 & ADA-UGNN & 14.36\(\pm\)0.27 & 88.92\(\pm\)0.11 & 79.34\(\pm\)0.09 & 90.08\(\pm\)0.05 & 89.56\(\pm\)0.09 & 94.66\(\pm\)0.07 & 44.58\(\pm\)0.16 & 59.25\(\pm\)0.16 & 41.38\(\pm\)0.12 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Node classification accuracy (%) with 95% confidence intervals over 100 runs, and training time per epoch (ms).
structural information. With this in mind, we conduct additional experiments with other information extraction methods. Besides, in our approach, the aggregation process can be abstracted as preprocessing to further speed up training. We leave this as future work; in our experiments, aggregations are computed during each training run for a fair comparison.
Finally, it is worth noting that a linear approximation of non-linear GNNs is adopted for the following reasons: 1) it gives us the convenience of a deeper view into GNN models, 2) linearization is a common setting in theoretical analyses of machine learning and deep learning, since non-linearity can hardly support rigorous arguments (Wu et al., 2019; Maddox et al., 2021), and 3) the proposed model is fully consistent with the linear analytical view.
## 5 Other Related Work
Unified perspectives for GNNs.There have been several perspectives for studying GNN representations. Balcilar et al. (2021) first bridge the spatial methods to the spectral ones by assigning most spatial GNNs their corresponding graph filters. More specifically, they begin with each GNN model's convolution matrix and then summarize its frequency response. Ma et al. (2021) regard the aggregation process of GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), and APPNP (Klicpera et al., 2019) as a graph signal denoising problem, which aims to recover a clean signal from an objective with a certain denoising regularization term. Given this, the authors generalize the smoothing regularization term and propose ADA-UGNN. However, this view also ignores the properties of node attributes and focuses instead on the flexibility of the denoising functions. Zhu et al. (2021) give a more comprehensive summary of GNNs from an optimization view, which partly overlaps with the graph signal denoising opinions of Ma et al. (2021). They propose GNN-LF/HF with an adjustable objective that behaves as a low/high-pass filter.
Over-smoothing.Our feature space view works on the column-wise perspective (i.e., the hidden dimension) and discusses the correlation between the columns of the feature subspaces, while over-smoothing investigations usually focus on the row-wise perspective (i.e., the node dimension) and consider the similarity of the output representations of the nodes. The two are clearly different, and a further study relating them should also be interesting future work.
## 6 Experiments
We evaluate FE-GNN\({}^{2}\) on: (1) node classification, (2) ablation studies, and (3) an efficiency check.
Footnote 2: Our code is available at [https://github.com/sajqavril/Feature-Extension-Graph-Neural-Networks.git](https://github.com/sajqavril/Feature-Extension-Graph-Neural-Networks.git)
Dataset.We conduct our experiments on the homophilic datasets Cora, CiteSeer, PubMed, Computers, and Photo (Yang et al., 2016; Shchur et al., 2018), and the heterophilic datasets Chameleon, Squirrel, and Actor (Rozemberczki et al., 2021; Pei et al., 2020). Among them, Chameleon and Squirrel are two cases that exhibit the second issue, i.e., a limited dimensionality of node attributes and no homophily assumption to rely on. More details can be found in Appendix B.
Baselines.We compare against a list of state-of-the-art GNN methods. For spatial GNNs we have GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), GraphSAGE (Hamilton et al., 2017), GCNII (Chen et al., 2020) and APPNP (Klicpera et al., 2019), with MLP included as a special case. For spectral methods we take ChebyNet (Defferrard et al., 2016), GPRGNN (Chien et al., 2021) and BernNet (He et al., 2021). We also cover the recent unified models, ADA-UGNN (Ma et al., 2021) and GNN-LF/HF (Zhu et al., 2021). FE-GNN uses the Chebyshev or monomial polynomials to construct the feature space, and we refer to the corresponding versions as FE-GNN (C) and FE-GNN (M), respectively. Please refer to Appendix B.2 for more details on the implementation.
### Node Classification
We test on the node classification task with random 60%/20%/20% splits and summarize the results of 100 runs in Table 2, reporting the average accuracy with a 95% confidence interval. We observe that FE-GNN achieves almost the best performance on homophilic graphs. In particular, compared to the current SoTA method ADA-UGNN (Ma et al., 2021), which unifies the objectives in both spatial and spectral domains, FE-GNN achieves a \(1.1\%\) accuracy improvement on average over the 5 homophilic graph datasets. We attribute the superior performance of GNN-LF/HF on CiteSeer to its complex hyperparameter tuning, where more fine-grained constraints on the parameters can be found; we consider this as future work in Section 7. Furthermore, FE-GNN achieves on average a \(32.0\%\) improvement over the GCN baseline on the three heterophilic graph datasets. It is worth highlighting the huge margin of FE-GNN over the others on Chameleon and Squirrel, which perfectly fit the assumption behind our second modification, i.e., heterophily and limited dimensionality of node attributes. The results of the ablation studies are also consistent with this.
### Ablation Studies
We study the contribution of different components and hyperparameter effects in FE-GNN.
Does the feature flattening work?Yes.In Table 3, we compare our proposal with a weight-sharing (WS) instance in which the principal components are retained. It shows that, over an increasing number of feature subspaces, flattened feature subspaces consistently achieve better performance, which verifies the discussion of Theorems 3.2 and 4.1.
When do structural principal components work?On limited node attributes and in the heterophilic case.We evaluate FE-GNN on 5 datasets covering both homophilic and heterophilic graphs with 3 different feature space constructions: \(w/o\)\(S\), \(w/o\)\(P_{k}(\hat{L})X_{k=0}\), and \(w/o\)\(P_{k}(\hat{L})X_{k>0}\), which denote feature space constructions without the graph structure, without the node attributes, and without the combination of both, respectively. In the ablation results of Table 5, we find that \(w/o\)\(S\) works well on homophilic graphs but fails on heterophilic ones (with limited node attribute dimensionality), while the other two behave inversely. Meanwhile, \(w/\)\(S\) greatly improves the attribute-limited cases, and these results confirm our original intention in proposing structural principal components. In addition, we provide other variants that include structural information in Appendix B.3, as an extension of our discussion in Section 4.3, and row-wise normalization in Appendix A.8, where our proposal is comparably effective.
What proportion of the truncated SVD is appropriate?94%.We use different ratios of singular vectors and values to construct \(S\): keeping the top \(i\) singular values yields a ratio \(r=\sum_{j=1}^{i}V_{jj}/\sum_{j=1}^{n}V_{jj}\) of the components. In Figure 6, we show the accuracy on CiteSeer and Chameleon as the ratio of singular values increases; CiteSeer is robust to variations in the ratio, while Chameleon performs best
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
 & Cora & CiteSeer & Chameleon & Squirrel \\
\hline
k=2 & 89.15\(\pm\)0.86 & 81.97\(\pm\)1.10 & 73.41\(\pm\)1.40 & 67.37\(\pm\)0.83 \\
k=2 (WS) & 87.21\(\pm\)0.83 & 73.89\(\pm\)0.72 & 72.94\(\pm\)1.22 & 66.01\(\pm\)1.31 \\
k=4 & 88.56\(\pm\)0.81 & 80.19\(\pm\)0.84 & 73.23\(\pm\)1.56 & 67.40\(\pm\)0.90 \\
k=4 (WS) & 87.28\(\pm\)1.38 & 77.72\(\pm\)0.86 & 73.23\(\pm\)1.72 & 66.43\(\pm\)1.72 \\
k=8 & 88.92\(\pm\)0.88 & 81.11\(\pm\)0.89 & 73.85\(\pm\)1.52 & 67.93\(\pm\)0.04 \\
k=8 (WS) & 86.92\(\pm\)1.66 & 77.32\(\pm\)0.33 & 73.15\(\pm\)1.83 & 66.63\(\pm\)2.38 \\
k=16 & 88.26\(\pm\)1.40 & 80.54\(\pm\)1.03 & 73.88\(\pm\)1.53 & 67.82\(\pm\)1.54 \\
k=16 (WS) & 87.34\(\pm\)1.98 & 78.60\(\pm\)0.70 & 72.94\(\pm\)1.79 & 66.65\(\pm\)2.15 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Ablation study of 1) feature subspace flattening; (WS) denotes the weight-sharing variant.
at \(r=94\%\). So we use 94% for further experiments. We provide more SVD results in the Appendix B.6.
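For reference, the truncation rank for a target ratio can be selected with a small helper of our own (the function name is illustrative):

```python
import numpy as np

def truncation_rank(singular_values, r=0.94):
    """Smallest i whose top-i singular values carry a fraction r of the
    total singular-value mass, matching the ratio defined in the text."""
    frac = np.cumsum(singular_values) / np.sum(singular_values)
    return int(np.searchsorted(frac, r) + 1)

# Example with a fast-decaying spectrum: keep the components carrying 94%.
print(truncation_rank(np.array([5.0, 2.0, 1.0, 0.5, 0.1])))
```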
**At what order is a polynomial sufficient? Around three.** We test a progressively larger order \(K\) of polynomials on Cora and Chameleon, as shown in Figure 4. The performance on Cora increases from \(K=1\) to \(3\) and then decreases slightly, while Chameleon shows only small changes. This suggests that order 3 is enough to achieve near-optimal performance, which is also an advantage of flattening the feature subspaces compared to deep GNNs. A more comprehensive comparison can be found in Appendix B.4.
**Is linearity sufficient for constructing features? Yes.** We apply nonlinear activation functions to FE-GNN together with some deep learning training techniques, including Dropout (Agarap, 2018) and DropEdge (Rong et al., 2020), to verify the sufficiency of our linear implementation. In Figure 3, we show the corresponding performance with different drop ratios on four datasets, where performance worsens as the drop rate increases. Therefore, our linear construction is sufficient for FE-GNN. These results also argue for more attention to feature space construction, so as to avoid over-application of deep artifices.
### Efficiency Check
Finally, we examine the efficiency of our proposal, including the training time cost and the truncated SVD time. In Table 2, we collect the training time per epoch (ms) for each method, which shows that FE-GNN runs at a time cost comparable to the other baselines, even though it bears more computational cost from the independent feature expression. Note that the reported time includes graph propagation for a fair comparison, although FE-GNN could reduce it further by constructing the feature space as preprocessing. In Figure 5, we compare the convergence time of all methods and observe that FE-GNN consumes the minimum number of training epochs while achieving the highest accuracy, which reflects the easier optimization suggested by Theorem 4.1. In Table 4, we show the training time and the SVD time (as preprocessing), from which we find that the SVD accounts for less than 10% of the total training time, confirming its applicability.
## 7 Conclusion
In this paper, we provide the feature space view to analyze GNNs, which separates the feature space from the parameters. Based on this view, we give a theoretical analysis of the feature space of existing GNNs and identify two issues. We propose 1) feature subspace flattening and 2) structural principal components to address these issues. Extensive experimental results verify their superiority.
Limitations.Nonlinear cases are not included in our work and will be considered in the future. Also, the correlation between subspaces should be studied more carefully beyond the linear correlation property; in a sense, the parameters could be further reduced by introducing reasonable constraints. Finally, more feature space construction methods should be explored in future work.
## Acknowledgements
This work was partly supported by the National Key Research and Development Program of China (No. 2020YFB1708200), the "Graph Neural Network Project" of Ping An Technology (Shenzhen) Co., Ltd. and AMiner.Shenzhen SciBrain fund. This work was also supported in part by the NSF-Convergence Accelerator Track-D award #2134901, by the National Institutes of Health (NIH) under Contract R01HL159805, by grants from Apple Inc., KDDI Research, Quris AI, and IBT, and by generous gifts from Amazon, Microsoft Research, and Salesforce.
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
 & Cora & CiteSeer & PubMed & Squirrel & Chameleon \\
\hline
FE-GNN (C) & **89.45\(\pm\)0.22** & **81.96\(\pm\)0.33** & 89.87\(\pm\)0.09 & 67.82\(\pm\)0.25 & **73.33\(\pm\)0.80** \\
FE-GNN (M) & 89.09\(\pm\)0.22 & 81.76\(\pm\)0.13 & 89.93\(\pm\)0.23 & **67.90\(\pm\)0.23** & 73.26\(\pm\)0.38 \\
w/o norm & 86.23\(\pm\)1.06 & 79.23\(\pm\)1.08 & **90.27\(\pm\)0.06** & 40.74\(\pm\)1.06 & 68.25\(\pm\)1.04 \\
w/o \(S\) & 89.20\(\pm\)0.08 & 81.95\(\pm\)0.07 & 89.76\(\pm\)0.06 & 43.21\(\pm\)0.09 & 61.54\(\pm\)1.32 \\
w/o \(P_{k}(\hat{L})X_{k=0}\) & 71.10\(\pm\)1.74 & 74.38\(\pm\)1.01 & 86.61\(\pm\)0.05 & 67.90\(\pm\)0.06 & 73.35\(\pm\)1.21 \\
w/o \(P_{k}(\hat{L})X_{k>0}\) & 84.70\(\pm\)1.09 & 58.60\(\pm\)1.29 & 85.84\(\pm\)0.05 & 65.75\(\pm\)0.03 & 72.61\(\pm\)1.00 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Ablation study of 2) structural principal components.
Table 4: Time consumption of SVD.
Figure 6: Sensitivity study of truncated SVD |
2303.11968 | Modelling Force-Free Neutron Star Magnetospheres using Physics-Informed Neural Networks | Using Physics-Informed Neural Networks (PINNs) to solve a specific boundary value problem is becoming more popular as an alternative to traditional methods. However, depending on the specific problem, they could be computationally expensive and potentially less accurate. The functionality of PINNs for real-world physical problems can significantly improve if they become more flexible and adaptable. To address this, our work explores the idea of training a PINN for general boundary conditions and source terms expressed through a limited number of coefficients, introduced as additional inputs in the network. Although this process increases the dimensionality and is computationally costly, using the trained network to evaluate new general solutions is much faster. Our results indicate that PINN solutions are relatively accurate, reliable, and well-behaved. We applied this idea to the astrophysical scenario of the magnetic field evolution in the interior of a neutron star connected to a force-free magnetosphere. Solving this problem through a global simulation in the entire domain is expensive due to the elliptic solver's needs for the exterior solution. The computational cost with a PINN was more than an order of magnitude lower than the similar case solved with classical methods. These results pave the way for the future extension to 3D of this (or a similar) problem, where generalised boundary conditions are very costly to implement. | Jorge F. Urbán, Petros Stefanou, Clara Dehman, José A. Pons | 2023-03-21T15:59:03Z | http://arxiv.org/abs/2303.11968v1 |
# Modelling Force-Free Neutron Star Magnetospheres using Physics-Informed Neural Networks
###### Abstract
Using Physics-Informed Neural Networks (PINNs) to solve a specific boundary value problem is becoming more popular as an alternative to traditional methods. However, depending on the specific problem, they could be computationally expensive and potentially less accurate. The functionality of PINNs for real-world physical problems can significantly improve if they become more flexible and adaptable. To address this, our work explores the idea of training a PINN for general boundary conditions and source terms expressed through a limited number of coefficients, introduced as additional inputs in the network. Although this process increases the dimensionality and is computationally costly, using the trained network to evaluate new general solutions is much faster. Our results indicate that PINN solutions are relatively accurate, reliable, and well-behaved. We applied this idea to the astrophysical scenario of the magnetic field evolution in the interior of a neutron star connected to a force-free magnetosphere. Solving this problem through a global simulation in the entire domain is expensive due to the elliptic solver's needs for the exterior solution. The computational cost with a PINN was more than an order of magnitude lower than the similar case solved with classical methods. These results pave the way for the future extension to 3D of this (or a similar) problem, where generalised boundary conditions are very costly to implement.
keywords: magnetic fields; stars: magnetars; stars: neutron; Neural networks; Physics Informed Neural Networks
## 1 Introduction
Deep learning (DL) is a subset of Machine Learning techniques that is fundamentally based on multi-layered Neural Networks (NNs). In recent years, DL has been widely used to perform a large variety of tasks. Examples include (among many others): computer vision for image classification (Traore et al., 2018), face recognition (Lawrence et al., 1997) or medical diagnosis (Kuganavar and Prabhakar, 2021), speech recognition (Chan et al., 2015), and robotics, to emulate human-like walking and running or mobile navigation in pedestrian environments (Hayat and Mall, 2013).
Physics-Informed Neural Networks (PINNs) (Raissi et al., 2019) are a deep learning approach used to numerically approximate the solution of non-linear partial differential equations (PDEs). The original idea was born more than twenty years ago (Lagaris et al., 1997), but the lack of the necessary computational resources made it complicated to put into practice. In recent years, we have access to graph-based automatic differentiation, frameworks that support computations on CPUs and GPUs, such as TensorFlow or PyTorch, and a dramatic increase in computational power. These factors, combined with a blooming interest in machine learning applications in science, have given birth to this promising new field. PINNs have been used, among many other applications, in fluid dynamics (Cai et al., 2021), nuclear reactor dynamics (Schiassi et al., 2022), radiative transfer (Mishra and Molinaro, 2021; Chen et al., 2022) and black-hole spectroscopy (Luna et al., 2022). PINNs incorporate the underlying physical laws that govern a system (the PDEs) in the loss function and then optimise the NN so that the residual of the PDE is minimal. Unlike more traditional DL approaches in other fields, PINNs do not require large amounts of data -or any data at all- for the training of the NN.
Compared to classical finite-differences/finite-elements methods, PINNs still fall short in terms of efficiency and precision. However, they present some advantages as flexible, multi-purpose PDE solvers. For example, the PINN formulation allows us to solve problems in arbitrary, unstructured meshes without using high-resolution, memory-consuming grids. In addition, once a PINN is trained for a general problem, the calculation of a new solution is swift and consists only of the few operations needed during the forward pass through the network. This is a potential advantage in speed compared to classical methods.
In this work, we assess the applicability of a PINN solver in elliptic problems. In particular, we focus on the problem of modelling force-free (FF) magnetospheres of neutron stars (NS) in the non-rotating, axisymmetric limit. There is a wealth of work in the
literature formulating this problem in terms of the Grad-Shafranov equation (Glampedakis et al., 2014; Pili et al., 2015; Akgun et al., 2016; Kojima, 2017; Akgun et al., 2017; Akgun et al., 2018), which gives us the opportunity to make detailed comparisons and draw robust conclusions on the performance and generalisability of the PINN solver.
The paper is organized as follows: in Sec. 2, we give a brief mathematical overview of the physics of NS magnetospheres. In Sec. 3, we describe in detail how the PINN solver is built. We present the solutions acquired by the PINN solver with error estimates in Sec. 4. In Sec. 5, we demonstrate PINN's capabilities through an astrophysical application. Sec. 6 is dedicated to discuss our main results and how to improve and generalise our approach to face more difficult problems in the near future.
## 2 Modelling axisymmetric force-free magnetospheres
In the force-free regime, and considering that the contribution of gravity, inertia, plasma pressure and rotation in the dynamics of a NS magnetosphere is negligible compared to the magnetic force, the force-balance equation reduces to the simple form
\[(\boldsymbol{\nabla}\times\boldsymbol{B})\times\boldsymbol{B}=0, \tag{1}\]
where \(\boldsymbol{B}\) denotes the magnetic field. This regime is better suited for magnetar magnetospheres (Thompson and Duncan, 1995, 1996), because magnetars are slow rotators and the absence of any rotationally induced electric fields is a very good approximation.
In axisymmetry, we can express the magnetic field in terms of a poloidal and a toroidal stream function \(\mathcal{P}\) and \(\mathcal{T}\). We will follow the notation and formalism as in Akgun et al. (2016). We refer the interested reader to that work for a more detailed description. In spherical coordinates \((r,\theta,\phi)\) and in terms of the stream functions the magnetic field reads:
\[\boldsymbol{B}=\boldsymbol{\nabla}\mathcal{P}\times\boldsymbol{\nabla}\phi+ \mathcal{T}\boldsymbol{\nabla}\phi. \tag{2}\]
where \(\boldsymbol{\nabla}\phi=\frac{e_{\phi}}{r\sin\theta}\) with \(\boldsymbol{e}_{\phi}\) being the azimuthal unit vector. Substituting Eq. (2) into (1), the \(\phi\)-component of the equation gives
\[\boldsymbol{\nabla}\mathcal{P}\times\boldsymbol{\nabla}\mathcal{T}=0, \tag{3}\]
which simply states that, \(\mathcal{T}=\mathcal{T}(\mathcal{P})\) must be a function of \(\mathcal{P}\) (or vice-versa). The remaining components, give us the so-called Grad-Shafranov (GS) equation
\[\triangle_{\text{GS}}\mathcal{P}+G(\mathcal{P})=0. \tag{4}\]
Here \(G(\mathcal{P})=\mathcal{T}(\mathcal{P})\frac{d\mathcal{T}}{d\mathcal{P}}\) is the source term accounting for the presence of currents in the magnetosphere and \(\triangle_{\text{GS}}\) is the GS operator
\[\triangle_{\text{GS}}\equiv r^{2}\sin^{2}\theta\ \boldsymbol{\nabla}\cdot \left(\frac{1}{r^{2}\sin^{2}\theta}\boldsymbol{\nabla}\right). \tag{5}\]
For convenience, we will use compactified spherical coordinates (see Stefanou et al. (2023)) \((q,\mu,\phi)\), where \(q=\frac{1}{r}\) and \(\mu=\cos\theta\) instead of the usual \((r,\theta,\phi)\). In this set of coordinates the GS operator reads:
\[\triangle_{\text{GS}}\equiv q^{2}\partial_{q}\left(q^{2}\partial_{q}\right)+\left(1-\mu^{2}\right)q^{2}\partial_{\mu\mu}. \tag{6}\]
To solve Eq. (4), we must also provide boundary conditions (BCs) and the functional form of the source term, that is, \(\mathcal{T}(\mathcal{P})\). The particular functional form is arbitrary and different choices are possible. In Akgun et al. (2016), they used
\[\mathcal{T}(\mathcal{P})=s\left(\mathcal{P}-\mathcal{P}_{c}\right)^{\sigma} \Theta\left(\mathcal{P}-\mathcal{P}_{c}\right), \tag{7}\]
where \(\Theta\) is the Heaviside function, and \(s\), \(\mathcal{P}_{c}\), and \(\sigma\) are parameters that control the relative strength of the toroidal and poloidal components, the region where the toroidal field is non-zero and the non-linearity of the model. However, this has the limitation that it assumes \(\mathcal{P}>0\). A possible generalisation overcoming this constraint is
\[\mathcal{T}(\mathcal{P})=s\left(|\mathcal{P}|-\mathcal{P}_{c}\right)^{\sigma} \Theta\left(|\mathcal{P}|-\mathcal{P}_{c}\right), \tag{8}\]
which allows for currents even for negative values of \(\mathcal{P}\). We have explored different options but, for simplicity and the purpose of this paper, we will use a quadratic function for the astrophysical application in Sec. 5.2, defined as follows:
\[\mathcal{T}(\mathcal{P})=s_{1}\mathcal{P}+s_{2}\mathcal{P}^{2}. \tag{9}\]
We impose BCs for \(\mathcal{P}\) at the surface of the star (\(q=1\)), at radial infinity (\(q=0\)) and at the axis (\(\mu=\pm 1\)). Regularity and symmetry of the problem lead to
\[\mathcal{P}(q,\mu=\pm 1)=\mathcal{P}(q=0,\mu)=0.\]
In particular, one advantage of compactifying the radial coordinate (going from \(r\) to \(q\)) is to make it easier to impose BCs at radial infinity: rather than imposing a specific decay rate at large \(r\), we can impose Dirichlet BCs at just one point (\(q=0\)). This reduces unwanted numerical noise from the external boundary.
At the surface, we must provide the function \(\mathcal{P}(\mu)\). Our implementation of BCs in a NN must be as general as possible but keep the number of parameters reasonably low. A reasonable and practical choice is to use some decomposition of the arbitrary function in terms of orthonormal polynomials. Considering the symmetry of our problem and that we are working with functions describing magnetic fields, the natural choice is to express \(\mathcal{P}(q=1)\) in terms of coefficients of a Legendre polynomial expansion. We use the following decomposition:
\[\mathcal{P}(q=1,\mu)=\left(1-\mu^{2}\right)\sum_{l=1}^{l_{\text{max}}}\frac{ bl}{l}P_{l}^{\prime}(\mu), \tag{10}\]
where \(l\) is the order of the multipole (\(l=1\) corresponds to a dipole), \(P_{l}\) are the Legendre polynomials (not to be confused with \(\mathcal{P}\), the poloidal flux function) and the prime denotes differentiation with respect to \(\mu\). Thus, the boundary condition at the surface is completely determined by prescribing the \(b_{l}\) coefficients\({}^{1}\).
Footnote 1: The \(1/l\) normalisation factor and the coefficients \(b_{l}\) in the expansion have been chosen to match the \(b_{l}\) coefficients used in Dehman et al. (2023).
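For illustration, the surface function of Eq. (10) can be evaluated from a given set of coefficients with a few lines of NumPy (the function name is ours):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def surface_P(mu, b):
    """Eq. (10): P(q=1, mu) from the multipole coefficients b = [b_1, ..., b_lmax].
    Legendre.basis(l).deriv() evaluates the derivative P_l'(mu)."""
    out = np.zeros_like(mu, dtype=float)
    for l, bl in enumerate(b, start=1):
        out += (bl / l) * Legendre.basis(l).deriv()(mu)
    return (1.0 - mu**2) * out

# Example: a dipole-dominated surface field with a small quadrupole.
mu = np.linspace(-1.0, 1.0, 5)
print(surface_P(mu, b=[1.0, 0.3]))
```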
## 3 Methodology
### Neural Networks
NNs are universal approximators of mathematical functions (Hornik et al., 1989). They are the result of compositions of simple but non-linear transformations at different layers. The way these layers are interconnected defines the NN _architecture_. There are many architectures available in the literature, such as Fully-Connected Neural Networks (FCNNs), Recurrent Neural Networks (RNNs), Convolutional Neural Networks (CNNs) and more. Each one is particularly suitable for specific tasks (see Alzubaidi et al. 2021 for a detailed review). In this paper, we have adopted two different types of architecture: FCNNs and Residual Neural Networks (ResNets). We briefly describe these architectures below.
#### 3.1.1 Fully-Connected Neural Networks
In a FCNN, each layer contains a number of _units_ (also called neurons) that transform the inputs received from the previous layer and then pass the result to the next layer. This transformation is done in two basic steps: a linear combination of the received inputs, followed by an evaluation through a non-linear _activation function_. Mathematically, this process can be expressed for each layer as follows:
\[\mathbf{a}^{j}=g(\mathbf{W}^{j}\mathbf{a}^{j-1}+\mathbf{b}^{j}) \tag{11}\]
where \(\mathbf{a}^{j}\) is a vector containing the values of the neurons in layer \(j\), \(\mathbf{W}^{j}\) and \(\mathbf{b}^{j}\) denote the _weight_ matrix and the _bias_ vector of the layer, and \(g\) is a non-linear function (the activation function). The weights and biases of all the layers constitute the set of trainable parameters of a FCNN, i.e., the parameters that are optimised to obtain the best approximation to the true solution of our problem. Fig. 1 shows a schematic representation of a FCNN.
#### 3.1.2 Residual Neural Networks
As the complexity of the problem grows, an increased number of layers is required in order to capture all the desired features of the solution. However, training deeper NNs is not a trivial task. Indeed, it has been demonstrated that increasing the depth leads to _degradation_ of the network's accuracy (Srivastava et al. 2015), i.e., the network performs worse.
In order to overcome this problem, He et al. (2015) introduced the concept of ResNets. The idea behind this architecture is based on the phenomenological fact that it is easier to optimise a layer to learn the residual \(\mathcal{F}(\mathbf{x})=\mathcal{H}(\mathbf{x})-\mathbf{x}\) of a function \(\mathcal{H}(\mathbf{x})\) with respect to the identity, than to learn the function \(\mathcal{H}(\mathbf{x})\) itself. This is implemented by replacing a usual layer with a _residual block_. A residual block is formed if, before the evaluation of the activation function at a certain layer \(j\), we add the output of the \(K\)-th previous layer (a _skip connection_). By adding skip connections, the output of a ResNet layer can be expressed as
\[\mathbf{a}^{j}=g\left(\mathbf{W}^{j}\mathbf{a}^{j-1}+\mathbf{b}^{j}+\mathbf{a}^{j-K}\right). \tag{12}\]
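As an illustration, a minimal Keras sketch of such a block (with \(K=2\)) could look as follows; the class name and the choice of \(\tanh\) activation are our assumptions:

```python
import tensorflow as tf

class ResidualBlock(tf.keras.layers.Layer):
    """Two dense layers with a skip connection added before the last
    activation, implementing Eq. (12) with K = 2. The block input is
    assumed to already have `units` features so the addition is valid."""

    def __init__(self, units, activation=tf.tanh):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(units)
        self.dense2 = tf.keras.layers.Dense(units)
        self.g = activation

    def call(self, a):
        h = self.g(self.dense1(a))
        return self.g(self.dense2(h) + a)   # skip connection adds a^{j-K}
```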
### Physics-Informed Neural Networks
The standard way of training a NN relies on data consisting of values of the true solution at a given discrete set of points in the input domain. However, this approach demands a large number of training examples to build a reliable relationship between the inputs and the outputs. In particular, for astrophysical systems, this method would rely on data obtained through a large set of observations. Training the network means optimising its parameters so that the difference between the NN prediction \(u(\mathbf{x})\) and the "exact" solution \(\tilde{u}(\mathbf{x})\) is minimised. This is done through the use of a _loss function_, which is usually some suitable norm of the quantity \(|u(\mathbf{x})-\tilde{u}(\mathbf{x})|\).
The key novelty in PINNs is the incorporation of information about the physical laws into the training process. This can be accomplished by minimising the residual of the PDEs that govern the system, instead of the difference between the prediction and real data or an exact solution. Using data in the loss function is optional and sometimes may facilitate the optimisation, but in practice it is not necessary. Consider a general PDE of the form
\[\mathcal{L}u(\mathbf{x})-G(\mathbf{x},u(\mathbf{x})) =0, \tag{13}\] \[u|_{\partial\mathcal{D}} =f_{b}(\mathbf{x}), \tag{14}\]
where \(\mathcal{L}\) is a general non-linear differential operator, \(G\) is a source term and \(\mathbf{x}\) is a vector of coordinates in some domain \(\mathcal{D}\). The PDE is subject to some BCs \(f_{b}\) at the boundary \(\partial\mathcal{D}\) of the domain. If \(u\) is an approximation to an exact solution given by a PINN, then Eq. (13) will have a residual, that is, the left-hand side will not be exactly zero. The smaller this residual is, the closer \(u\) is to an exact solution. Thus, the loss function is precisely a suitable norm of the residual \(|\mathcal{L}u(\mathbf{x})-G(\mathbf{x},u(\mathbf{x}))|\) of the PDE.
The function \(u\) should also satisfy the BCs (14). There are different approaches to implement them. The most commonly used is to add a term to the loss function consisting of a norm of the residual of Eq. (14), so that the residuals of both Eq. (13) and (14) are minimised simultaneously. We believe that this is not the optimal way to impose BCs, because the relative contribution of the two terms can differ significantly, thus hindering the training process. Ideas to overcome this problem include adjusting hyperparameters (such as the number of boundary points considered and the relative weight of the two terms) or calculating the _neural tangent kernel_ of the network (Wang et al. 2022), among others. We find that, overall, the increased complexity and the need to fine-tune additional hyperparameters make this method inconvenient. Instead, we opt for an approach inspired by Lagaris et al. (1997) (see also a similar one, based on distance functions, in Sukumar and Srivastava (2022)). In that approach, the approximate solution \(u\) is formulated as follows:
\[u(\mathbf{x})=f_{b}(\mathbf{x})+h_{b}(\mathbf{x})\,\mathcal{N}(\mathbf{x}), \tag{15}\]
where \(f_{b}\) is a smooth and (at least) twice differentiable function that satisfies the BCs (see Eq. (14)), \(h_{b}\) is a smooth and twice differentiable function that defines the boundary (\(h_{b}=0\) at \(\partial\mathcal{D}\)), and \(\mathcal{N}\) is the output of the network. By parametrizing the approximate solution in such a way, we ensure that the BCs are always satisfied by construction. The output of the network is not directly the approximate
Figure 1: A schematic representation of a deep fully-connected neural network.
solution \(u\) to our problem, but some function \(\mathcal{N}\) that, when inserted in Eq. (15), gives us an approximation that satisfies exactly the BCs.
In this work, we attempt to generalise the PINN approach to build a PDE solver valid for different and varied BCs (and possibly, source terms). We want our network to learn how to approximate any particular solution for a given operator \(\mathcal{L}\). This means that the information about the BCs should be part of the _input_ of the network (along with the coordinates) and not hardcoded in the loss function or in the parametrization (15). During training, the network needs to process a large number of points \(x\) and a large number of \(f_{b}\) functions so that it can generalise and provide solutions of (13) for any point in the domain and any BC. Of course, \(f_{b}\) could, in principle, be an arbitrary continuous function. For this reason, it should be intelligently encoded into the network's input to keep the number of parameters small and manageable.
In the following section, we present tests for our PINN solver for the Grad-Shafranov equation as described in Sec. 2.
## 4 Tests and Models
### Current-free Grad-Shafranov equation
As a first test, we consider the Grad-Shafranov equation (Eq. (4)) without current, that is \(G(\mathcal{P})=0\). We set up the various elements of the PINN solver as follows: The function \(h_{b}\) describing the boundary is given by
\[h_{b}(q,\mu)=q(1-q)(1-\mu^{2}). \tag{16}\]
The function \(f_{b}\) that satisfies the BCs, where \(h_{b}=0\), is given by
\[f_{b}\left(q,\mu\right)=q^{n}\left(1-\mu^{2}\right)\sum_{l=1}^{l_{\text{max}} }\frac{b_{l}}{l}P_{l}^{\prime}(\mu). \tag{17}\]
Notice that Eq. (17) differs from Eq. (10) by a factor \(q^{n}\) with \(n>0\). We include this factor to enforce BCs both at the surface (\(q=1\)) and infinity (\(q=0\)). If \(n=1\), this parametrization is the same as that used in Lagaris et al. 1997 considering essential BCs in a rectangle. However, we prefer to leave \(n\) as a free parameter that is used to give more or less weight to the solution close to the star or away from the surface. We performed a detailed study of the influence of the hyperparameters of the model, including \(n\), in the following section. We must remark that other parametrizations are possible and in principle can be tuned to improve the results of each specific problem.
The input layer of the NN consists of the coordinates of the point where the solution will be evaluated, \((q,\mu)\), and the coefficients \(b_{l}\) determining the BC at \(q=1\). For the physical applications in this paper, we expect the dipole (\(l=1\)) component of the magnetic field to be dominant. Therefore, we normalise all other multipole coefficients by dividing them by \(b_{1}\), and we restrict the range of possible values of \(b_{l>1}\) to between \(-1\) and \(1\). With this choice, we can omit the \(b_{1}\) coefficient because it is reabsorbed in the normalisation factor of the magnetic field strength.
Hereafter we limit ourselves to \(l_{\text{max}}=7\), which suffices for our purposes and is a good compromise between generality of solutions and ease of training. Increasing the number of multipoles adds complexity and it would require larger networks to achieve the desired accuracy. Thus, one training point is defined as
\[\left(q,\mu,b_{2},b_{3},b_{4},b_{5},b_{6},b_{7}\right),\]
In each forward pass providing \(\mathcal{N}\left(q,\mu,\{b_{l}\}\right)\), the solution of the differential equation at each training point \(i\) is given by
\[\mathcal{P}_{i}=\left(1-\mu^{2}\right)\left[\sum_{l=1}^{l_{\text{max}}}\frac{ b_{l}}{l}q^{n}P_{l}^{\prime}(\mu)+q\left(1-q\right)\mathcal{N}\left(q,\mu,\{b_{l} \}\right)\right]. \tag{18}\]
Notice that the output of the network \(\mathcal{N}\) depends both on the coordinates and on the BCs. This is the crux of our approach, as we want the network to be able to generalise for any BC (expressed in terms of the \(b_{l}\) coefficients).
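A compact TensorFlow sketch of this parametrization (Eq. (18)) is given below; the function names and the recurrence-based evaluation of the Legendre derivatives are our own choices, and `model` stands for any network mapping \((q,\mu,\{b_{l}\})\) to \(\mathcal{N}\):

```python
import tensorflow as tf

def legendre_prime_w(l, mu):
    """(1 - mu^2) P_l'(mu) = l (P_{l-1} - mu P_l), built with Bonnet's
    recurrence; returning the weighted derivative avoids any 1/(1 - mu^2)
    singularity at the axis."""
    P_prev, P_curr = tf.ones_like(mu), mu
    for k in range(1, l):
        P_prev, P_curr = P_curr, ((2*k + 1) * mu * P_curr - k * P_prev) / (k + 1)
    return l * (P_prev - mu * P_curr)

def parametrized_P(model, q, mu, bl, n=5, lmax=7):
    """Eq. (18): boundary conditions enforced by construction around the raw
    network output N(q, mu, {b_l}). bl holds (b_2, ..., b_7); b_1 = 1 by the
    normalisation adopted in the text. q, mu have shape (N, 1), bl (N, 6)."""
    coeffs = tf.concat([tf.ones_like(q), bl], axis=1)       # prepend b_1 = 1
    f_b = tf.zeros_like(q)
    for l in range(1, lmax + 1):
        f_b += coeffs[:, l-1:l] / l * q**n * legendre_prime_w(l, mu)
    N = model(tf.concat([q, mu, bl], axis=1))
    return f_b + (1.0 - mu**2) * q * (1.0 - q) * N
```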
The weights and biases of the network are optimised by minimising the loss function averaged over a large sample of training points \(i_{\text{max}}\)
\[\mathcal{J}=\frac{1}{i_{\text{max}}}\sum_{i=1}^{i_{\text{max}}}\left[\,\Delta_ {\text{GS}}\mathcal{P}_{i}\right]^{2}, \tag{19}\]
using the ADAM optimiser (Kingma & Ba, 2014) with an exponential learning rate decay. The derivatives in the Grad-Shafranov
Figure 2: (a) Evolution of the loss function with the training epochs. The periodic spikes correspond to renewals of the training set. (b) Colormap of the relative error (percentage) between \(\mathcal{P}\) and \(\mathcal{P}_{\text{ex}}\). Yellow dashed lines correspond to contours of \(\mathcal{P}\) while black solid lines correspond to \(\mathcal{P}_{\text{ex}}\). The multipole coefficients in \(f_{b}\) for this particular example are \(b_{1}=1\), \(b_{l\geq 2}=(-1)^{l+1}\,0.6\).
operator \(\Delta_{\text{GS}}\) are calculated with the automatic differentiation tools already implemented in the machine learning framework that we use (TensorFlow; Abadi et al. (2016)). We consider training sets of size \(i_{\text{max}}=10^{4}\). At first sight, this number might look large. However, we must stress that it is required by the high dimensionality of our problem. Our parameter space consists of two coordinates plus six coefficients describing the BC (fixing \(b_{1}=1\)). Covering this 8-dimensional parameter space with only \(3\) points in each dimension would already need \(3^{8}=6561\) points, which makes evident the crucial difference between training to solve a PDE with fixed BCs and training with arbitrary BCs (formally, an infinite number of additional parameters). The training set is changed periodically every few thousand epochs in order to feed the network with as many points as possible.
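To make this step concrete, a minimal sketch of the loss in Eq. (19) with nested `tf.GradientTape` contexts is shown below; it assumes the `parametrized_P` helper sketched earlier and batch tensors of shape \((N,1)\):

```python
import tensorflow as tf

def gs_loss(model, q, mu, bl):
    """Monte-Carlo estimate of Eq. (19) over a batch of training points.
    The Grad-Shafranov residual needs second derivatives, hence the two
    nested (persistent) tapes."""
    with tf.GradientTape(persistent=True) as outer:
        outer.watch([q, mu])
        with tf.GradientTape(persistent=True) as inner:
            inner.watch([q, mu])
            P = parametrized_P(model, q, mu, bl)
        P_q = inner.gradient(P, q)
        P_mu = inner.gradient(P, mu)
        flux = q**2 * P_q                     # inner term of q^2 d_q(q^2 d_q P)
    flux_q = outer.gradient(flux, q)
    P_mumu = outer.gradient(P_mu, mu)
    residual = q**2 * flux_q + q**2 * (1.0 - mu**2) * P_mumu
    return tf.reduce_mean(tf.square(residual))
```

In a training loop, this scalar would then be differentiated with respect to the network weights and passed to the ADAM optimiser, as described above.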
Fig. 2a shows the evolution of the loss function (19) with the number of training epochs. For this particular model, we have chosen a FCNN architecture with 4 hidden layers and 80 neurons at each layer, with \(n=5\) in \(f_{b}\) (see the next section for details on the choice of these hyperparameters). The periodic spikes correspond to renewals of the training set. They can be understood as a measure of the ability of the model to generalise to new, unseen points. Fig. 2b shows an example of the final result once the NN has been trained. The black solid lines show the _exact_ analytical solution, which is uniquely determined by the \(b_{l}\) coefficients
\[\mathcal{P}_{\text{ex}}(q,\mu)=\left(1-\mu^{2}\right)\sum_{l=1}^{l_{\text{max}}}\frac{b_{l}}{l}q^{l}P_{l}^{\prime}(\mu), \tag{20}\]
while the yellow dashed lines show the solution acquired by the PINN (\(\mathcal{P}\)). They are indistinguishable at the figure scale. The colourmap indicates the relative difference between \(\mathcal{P}\) and \(\mathcal{P}_{\text{ex}}\), at most \(\sim 0.5\%\) for this particular model.
The magnetic field components can be computed from \(\mathcal{P}\) via automatic differentiation. From Eq. (2) we have
\[B_{r} =-q^{2}\frac{\partial\mathcal{P}}{\partial\mu}, \tag{21}\] \[B_{\theta} =\frac{q^{3}}{\sqrt{1-\mu^{2}}}\frac{\partial\mathcal{P}}{ \partial q}. \tag{22}\]
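As a sketch (again assuming the `parametrized_P` helper from above), these components follow from a single first-order tape; note that the \(B_{\theta}\) prefactor is singular on the axis, so evaluation points should avoid \(\mu=\pm 1\):

```python
import tensorflow as tf

def field_components(model, q, mu, bl):
    """B_r and B_theta (Eqs. (21)-(22)) by automatic differentiation of the
    trained PINN solution."""
    with tf.GradientTape(persistent=True) as tape:
        tape.watch([q, mu])
        P = parametrized_P(model, q, mu, bl)
    B_r = -q**2 * tape.gradient(P, mu)
    B_theta = q**3 * tape.gradient(P, q) / tf.sqrt(1.0 - mu**2)
    return B_r, B_theta
```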
In finite difference schemes, one usually loses accuracy when taking numerical derivatives. To explore the performance of the PINN in this respect, we have computed relative error norms of different orders (\(p\)) for \(\mathcal{P}\), \(B_{r}\), \(B_{\theta}\) and for the magnetic field modulus \(B=\sqrt{B_{r}^{2}+B_{\theta}^{2}}\). Results are summarised in Tab. 1. These \(p\)-norms are calculated for every variable as in Eivazi et al. (2022)
\[E_{u}=\frac{\|u-u_{\text{ex}}\|_{p}}{\|u_{\text{ex}}\|_{p}}\times 100, \tag{23}\]
where \(u\) is the result of each variable returned by the PINN and \(u_{\text{ex}}\) is the corresponding exact solution. Interestingly, the errors are of the same order of magnitude for the function \(\mathcal{P}\) and its derivatives. This can be attributed to the fact that we train the PINN with a second-order PDE (the loss function involves second-order derivatives) and this includes additional information on the derivatives. Furthermore, using automatic differentiation is also an advantage over finite difference schemes, where accuracy depends on the resolution.
### Influence of the PINN hyperparameters
We have performed a detailed study to measure the influence of various hyperparameters of our model. In particular, we have considered the following:
* Changes of the parametrization of the boundary (\(q^{n}\) power).
* Number of neurons at each layer.
* Number of hidden layers.
* Resnet vs. FC architectures.
We changed one hyperparameter at a time while keeping the rest fixed to the reference values of the previous section. The results of this study are presented separately in the following subsections.
#### 4.2.1 Changes of the parametrization of the boundary conditions
We begin by considering different values of the exponent in the \(q^{n}\) term in the boundary function (17). Fig. 3 shows the evolution of the loss with the training epochs for three different values of the exponent \(n\), namely 1 (corresponding to the Lagaris parametrization), 3 and 5. As \(n\) increases, the impact of the surface BC becomes less important. In general, increasing \(n\) improves the convergence of the model and leads to more accurate solutions. Tab. 2 shows the relative error \(L_{2}\)-norms for the four quantities that we use to evaluate our results (\(\mathcal{P}\), \(B_{r}\), \(B_{\theta}\), \(B\)). All of them decrease with increasing \(n\).
#### 4.2.2 Number of neurons at each layer

Next, we vary the number of neurons \(N\) at each layer. Models with smaller \(N\) do not have enough free parameters to account for the complexity and variability of the solutions. Our results show that the loss reaches values that are two orders of magnitude smaller when doubling \(N\) from 20 to 40, and another order of magnitude when doubling from 40 to 80. This is reflected, as well, in Tab. 3, where all quantities show a significant improvement in accuracy as \(N\) increases.
#### 4.2.3 Number of hidden layers
Following the same line of argument, one could expect that increasing the number of hidden layers \(L\) would also lead to improved convergence and higher accuracy. However, our results show that adding more layers has a marginal impact on convergence and accuracy, or can even lead to worse results for large networks. In other words, deeper networks are more prone to overfitting. Evidence of overfitting can be seen in Fig. 5 for \(L=5\). The spikes that correspond to renewals of the set of training points are much higher than expected, even at the later stages of training. This is, indeed, reflected in Tab. 4, where the model with \(L=5\) performs worse in terms of accuracy than the model with \(L=4\), because it is overfitted to the training set and fails to generalise to unseen points. Considering, in addition, that training deeper networks is computationally expensive, we conclude that increasing the number of layers too much is not beneficial in terms of accuracy or convergence.
#### 4.2.4 Resnet vs Fully Connected
Lastly, we consider two different types of NN architecture: the FC and the ResNet architectures. Results are summarised in Fig. 6 and Tab. 5. No appreciable differences can be detected between the two models in convergence or overall accuracy.
### Force-free Grad-Shafranov equation

We now turn to the general case, in which the loss function must include the source term \(G(\mathcal{P})\), which accounts for the presence of currents, and is given by
\[\mathcal{J}=\frac{1}{i_{\rm max}}\sum_{i=1}^{i_{\rm max}}\left[\Delta_{\rm GS}\mathcal{P}_{i}+G(\mathcal{P}_{i})\right]^{2}. \tag{24}\]
The input must include information about the functional form of \(G(\mathcal{P})\) or equivalently \(\mathcal{T}(\mathcal{P})\). In Sec. 2 we modelled \(\mathcal{T}\) to be a quadratic function of \(\mathcal{P}\) using two parameters, \(s_{1}\) and \(s_{2}\) (see Eq. (9)). Therefore, the input of the PINN must be extended to include these parameters. The PINN is trained to provide solutions for any value of \(s_{1}\) and \(s_{2}\) in the same sense that it is trained to provide solutions for any value of the multipole coefficients defining the BC. The input for the general force-free case is
\[(q,\mu,b_{2},b_{3},b_{4},b_{5},b_{6},b_{7},s_{1},s_{2}).\]
We note that there can be regions of the input parameter space where mathematical solutions do not exist (see Akgun et al. (2018); Mahlmann et al. (2019) for a detailed discussion). Nevertheless, the PINN will return approximate solutions. It is up to the user to carefully evaluate the validity and accuracy of the results. Fig. 7 shows, for reference, a comparison between a current-free and a force-free magnetic field, where we observe notable differences in the structure of the field lines.
The lack of analytical solutions in the general case makes it difficult to estimate errors, which is a fundamental part of any scientific analysis. Despite the abundant literature on PINNs as PDE solvers in recent years, a systematic and consistent way for measuring errors and deciding on the quality of the provided approximate solutions is still lacking. Here, we adopt the following approach:
* After our NN is trained, we create a regular grid \((q_{i},\mu_{j})\), where \(i,j=0,...,N_{0}\), and evaluate the PINN solution \(\mathcal{P}\) at all the points with a forward pass.
* Then, we discretise Eq. (4) using a second-order finite difference scheme, which gives us a different evaluation of the residual \(\epsilon_{\rm FD}\) using the same function values evaluated in the previous step.
* If \(\mathcal{P}\) were the "exact" solution of the PDE, the second-order residual \(\epsilon_{\rm FD}\) would decrease with increasing resolution as \(\sim N_{0}^{-2}\), where \(N_{0}\) is the number of grid points (assuming both dimensions have the same resolution). In reality, \(\mathcal{P}\) is only an approximate solution with an intrinsic error \(\epsilon_{\rm NN}\) inherited from the quality and accuracy of the PINN. Therefore, \(\epsilon_{\rm FD}\) will follow this power law only up to the point where the PINN approximation error \(\epsilon_{\rm NN}\) starts to dominate the discretisation error \(\epsilon_{\rm FD}\); a sketch of this check is given below.
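A minimal NumPy sketch of the second step, the finite-difference residual of the current-free GS equation on a regular grid (the function name and grid layout are our assumptions):

```python
import numpy as np

def fd_residual_norm(P, q, mu):
    """Second-order finite-difference residual of Delta_GS P = 0 on a regular
    (q, mu) grid; P has shape (Nq, Nmu). Returns the L2 norm over interior
    points, used to locate where eps_NN starts to dominate eps_FD."""
    dq, dmu = q[1] - q[0], mu[1] - mu[0]
    Q, MU = np.meshgrid(q, mu, indexing="ij")
    P_q = (P[2:, 1:-1] - P[:-2, 1:-1]) / (2 * dq)                     # d_q P
    P_qq = (P[2:, 1:-1] - 2 * P[1:-1, 1:-1] + P[:-2, 1:-1]) / dq**2   # d^2_q P
    P_mm = (P[1:-1, 2:] - 2 * P[1:-1, 1:-1] + P[1:-1, :-2]) / dmu**2  # d^2_mu P
    Qi, MUi = Q[1:-1, 1:-1], MU[1:-1, 1:-1]
    # Delta_GS P = q^2 d_q(q^2 d_q P) + q^2 (1 - mu^2) d^2_mu P
    res = Qi**2 * (2 * Qi * P_q + Qi**2 * P_qq) + Qi**2 * (1 - MUi**2) * P_mm
    return np.sqrt(np.mean(res**2))
```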
Fig. 8 illustrates this behaviour. We plot the \(L_{2}\)-norm of the discretised GS equation for both the current-free (\(G(\mathcal{P})=0\)) and force-free cases (\(G(\mathcal{P})\neq 0\)) as a function of the number of grid points \(N_{0}\). In both cases the \(L_{2}\)-norm drops as \(\sim N_{0}^{-2}\) until it reaches a plateau which signals that \(\epsilon_{\rm NN}>\epsilon_{\rm FD}\). We expect that, at worst, \(\epsilon_{\rm NN}\) will be of the order of the square root of the loss function, because in Eqs. (19), (24) \(\mathcal{J}\) is precisely \(L_{2}^{2}\). In other words, when calculating \(\epsilon_{\rm NN}\) for a particular example, with fixed multipole coefficients and source terms, we expect the error to be of the order \(\sqrt{\mathcal{J}}\), within a factor of a few. We note that we obtain errors of the same order of magnitude for both cases, with a factor \(\sim 5\) less for the vacuum. We expect a slightly higher error when we introduce the current term \(G(\mathcal{P})\), because we increase the dimensionality of the problem, and we also introduce non-linear terms into the differential equation.
## 5 Application to the magnetothermal evolution of neutron stars
Our astrophysical scenario of interest is the long-term evolution of magnetic fields in NSs. The evolution of the system is governed by two coupled equations: the heat diffusion equation and the induction equation (see the review by Pons and Vigano (2019) for more details). They must be complemented with a detailed specification of the local microphysics (neutrino emissivity, heat capacity, thermal and electrical conductivity) and the structure of the star, usually assumed as fixed throughout the NS's life. Our NS background model
Figure 8: The \(L_{2}\)-norm of the discretised Grad–Shafranov equation for the current-free (red) and force-free (black) cases as a function of the resolution \(N_{0}\).
Figure 7: Field lines for the current-free (red) and force-free (black) cases. The multipole coefficients at the surface are \(b_{l>1}=0.5\) for both. For the force-free case, the coefficients in expression (9) for \(\mathcal{T}(\mathcal{P})\) are \(s_{1}=0.2\), \(s_{2}=0.4\).
is a \(1.4M_{\odot}\) NS built with the SLy4\({}^{2}\) equation of state (Douchin and Haensel, 2001). We use the 2D magneto-thermal code developed by our group (latest version in Vigano et al. (2021)), suitably modified to implement the external BCs using the PINN (trained as described in the previous section), in order to assess its performance and potential. In particular, this implementation allows us to quickly switch and compare between vacuum BCs and the barely explored force-free BCs. To our knowledge, only the work by Akgun et al. (2018) has presented results from simulations that included the effect of a magnetosphere threaded by currents. They had to implement a costly elliptic solver as a BC, which slowed down the code considerably. The PINN implementation should, in principle, be much easier to change, efficient, and generalisable.
Footnote 2: [https://compose.obspm.fr/](https://compose.obspm.fr/)
### Current-free magnetospheric boundary conditions
We begin by considering a crustal-confined magnetic field topology and vacuum BCs (no electrical currents circulating in the envelope and across the surface). We enforce BCs via multipole expansion of the radial magnetic field at the surface as described in Pons et al. (2009); Pons and Vigano (2019).
The coefficients of the multipole expansion can be computed from the radial component of the magnetic field at the surface of the star as follows:
\[b_{l}=\frac{2l+1}{2(l+1)}\int_{0}^{\pi}B_{r}(R,\theta)P_{l}(\cos\theta)\sin\theta\,d\theta. \tag{25}\]
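With the substitution \(\mu=\cos\theta\), the integral reduces to \(\int_{-1}^{1}B_{r}(R,\mu)P_{l}(\mu)\,d\mu\), which is conveniently evaluated by Gauss-Legendre quadrature; a small NumPy sketch follows (the interface to the surface field is our assumption):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

def multipole_coefficients(Br_surface, lmax, nquad=64):
    """Eq. (25) in the variable mu = cos(theta). Br_surface is a callable
    returning B_r at the stellar surface for an array of mu values."""
    nodes, weights = leggauss(nquad)            # Gauss-Legendre nodes/weights
    Br = Br_surface(nodes)
    b = np.empty(lmax)
    for l in range(1, lmax + 1):
        Pl = Legendre.basis(l)(nodes)           # P_l(mu) at the nodes
        b[l - 1] = (2 * l + 1) / (2 * (l + 1)) * np.sum(weights * Br * Pl)
    return b
```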
During the evolution, we calculate at each time step the \(b_{l}\) coefficients using Eq. (25). In the classical approach, we reconstruct the values of \(B_{r}\) and \(B_{\theta}\) in the external ghost cells, explicitly:
\[B_{r} =\sum_{l=1}^{l_{\rm max}}b_{l}\left(l+1\right)P_{l}\left(\cos\theta\right)\left(\frac{R}{r}\right)^{l+2}, \tag{26}\] \[B_{\theta} =-\sin\theta\sum_{l=1}^{l_{\rm max}}b_{l}P_{l}^{\prime}\left(\cos\theta\right)\left(\frac{R}{r}\right)^{l+2}, \tag{27}\]
where \(R\) is the radius of the NS. For conciseness, we refer to this procedure as OLD.
In the new PINNs approach, we use the \(b_{I}\) coefficients obtained from the Legendre decomposition as inputs to the PINN. The latter returns values of the poloidal flux function \(\mathcal{P}\), or any required component of the magnetic field by taking derivatives (Eqs. (21), (22)). Obviously, in this case (vacuum BCs), the PINN approach does not represent any advantage because we already know how to build the analytical solution. However, we want to ensure that the results of the simulations do not show any undesirable effects before moving on to a more complex case.
To compare the different techniques described above, we run axisymmetric crustal-confined magnetic field simulations using the 2D magneto-thermal code (Vigano et al., 2021) with a grid of 99 angular points (from pole to pole) and 200 radial points. The initial field has a poloidal component of \(10^{14}\) G (value at the pole) and consists of a sum of a dipole (\(b_{1}=1\)), a quadrupole (\(b_{2}=0.6\)) and an octupole (\(b_{3}=0.3\)). The initial toroidal quadrupolar component also has a maximum value of \(10^{14}\) G. The maximum number of multipoles is fixed to \(l_{max}=7\) for PINN and to \(l_{max}=50\) for OLD. The objective behind the use of different \(l_{max}\) is to assess the impact of truncating the multipole number when using the PINN.
The results of the comparison at \(t=10\) kyr are displayed in Fig. 9. On the left (right), we show the magnetic field profiles obtained with OLD (PINN) BCs. The overall evolution of the magnetic field and the electric current is very similar. Slight differences appear due to the multipolar truncation in the PINN case. We must note that, if the same maximum number of multipoles is set for both systems, we obtain almost identical results.
### Force-free magnetospheric boundary conditions
To couple the internal field evolution with a force-free magnetosphere PINN-solver, we must extend the vacuum case (Sec. 5.1) with additional steps. In our magneto-thermal evolution code, we impose the external BCs by providing the values of the magnetic field components in two radial ghost cells for every angular cell. In the vacuum case, once the multipolar decomposition of the radial field over the surface is known, the solution in the ghost cells can be built analytically. However, in the general case, one must solve an elliptic equation on a different grid, which must be extended far from the surface to properly capture the asymptotic behaviour at long distances.
Figure 9: A snapshot of the magnetic field evolution and the electric current at 10 kyr, obtained using OLD (left panel) and PINN (right panel). In the left hemisphere, we show the meridional projection of the magnetic field lines (white lines) and the toroidal field (colors). In the right hemisphere, we display the square of the modulus of the electric current, i.e., \(|\,\mathcal{J}\,|^{2}\) (note the log scale). The crust has been enlarged by a factor of 8 for visualization purposes.
This process is very costly because it must be repeated over tens of thousands of time steps as the interior field evolves. In this situation, having a trained PINN allows us to use it as a fast tool to provide the required values of the solution in the ghost cells. We proceed as follows: First, at each evolution time step, we must know the toroidal function \(\mathcal{T}(\mathcal{P})\). For simplicity, in this application we use a quadratic function. At each time step we fit the values obtained from the internal evolution one cell below the surface. The fit provides the coefficients of the quadratic interpolation, \(s_{1}\) and \(s_{2}\), defined in Eq. (9). Next, as described in Sec. 4.3, \(s_{1}\) and \(s_{2}\) are provided as additional input parameters to the forward pass. The PINN returns the poloidal flux function \(\mathcal{P}\) and the components of the magnetic field needed at the ghost cells of the magneto-thermal evolution code. With this information, the internal evolution can proceed to the next time step.
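A single coupling step can be sketched as follows. We assume the quadratic form \(\mathcal{T}(\mathcal{P})=s_{1}\mathcal{P}+s_{2}\mathcal{P}^{2}\) for the fit (the precise form is fixed by Eq. (9)), and `pinn` is a placeholder for the trained network's forward pass, not an actual API.

```python
import numpy as np

def force_free_bc_step(P_below, T_below, b_l, ghost_points, pinn):
    """One time step of the interior/magnetosphere coupling (sketch)."""
    # Fit T(P) ~ s1*P + s2*P**2 one cell below the surface; no constant
    # term, so that the toroidal function vanishes where P does.
    A = np.stack([P_below, P_below**2], axis=1)
    (s1, s2), *_ = np.linalg.lstsq(A, T_below, rcond=None)
    # The PINN takes the ghost-cell coordinates, the surface multipoles b_l,
    # and (s1, s2) as inputs, and returns P and the field at the ghost cells.
    return pinn(ghost_points, b_l, s1, s2)
```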
We assume an initial force-free magnetic field with a poloidal component of \(3\times 10^{14}\) G at the polar surface and a maximum toroidal field of \(3\times 10^{14}\) G. To understand the impact of the different BCs, we consider in one case force-free BCs (left panel of Fig. 10) and in the other case vacuum BCs (right panel of Fig. 10). The results of the comparison are illustrated by two snapshots at \(t=80\) kyr of the evolution with the same initial model. We note that the initial force-free magnetic field allows current sheets to thread the star's surface. A distinct magnetic field evolution is clearly observed if we apply one type of BCs or the other. The force-free BCs (left) result in a stronger toroidal dipole close to the surface and slightly displaced towards the north. The stronger toroidal component compresses the poloidal field lines closer to the poles. In contrast, for vacuum BCs, the poloidal field lines retain a certain symmetry with respect to the equator, and the dominant toroidal component is now quadrupolar and concentrated at the crust/core interface, as shown in the right panel of Fig. 10. The distribution of the electric current in the stellar crust is also different. Enforced by the vacuum BCs, the current tends to vanish around the poles and close to the surface. This is similar to what was observed in Fig. 9, although the initial field topology is different. Instead, with force-free BCs, currents near the surface are not forced to vanish. In the left panel, the slightly more yellowish region in the northern hemisphere and mid-latitudes indicates that significant current flows into the magnetosphere. This difference in current configurations would have important implications for the observed temperature distribution, as discussed in Akgün et al. (2018). We will address a more detailed exploration of the effect of BCs in future works, since our purpose here is to illustrate the potential of our approach with a few examples.
## 6 Conclusions
Using PINNs to obtain a solution of a particular boundary value problem is, to date, far more computationally expensive and arguably less accurate than using classical methods. The drawbacks are related to the training process, which involves the minimization of a high-dimensional loss function. Once a PINN is trained for a given boundary value problem, its utility is limited, because it would be necessary to re-train it to generate new solutions with different BCs.
The usefulness of PINNs for real physical problems would increase significantly if their flexibility and adaptability could be improved. In this work, we explore this idea by training our PINN for general BCs and source terms, expressed through appropriate coefficients (a limited number of them) that enter as additional inputs to the network. Of course, this makes the training process computationally more expensive, but the evaluation of new generic solutions is very fast. In our study, the coverage of the parameter space is not exhaustive because we have used very limited computational resources (a personal computer), and our purpose is to show that this proof of concept works and can already be applied to some physical problems, even by non-experts in computer science. If necessary, it is straightforward to adapt our implementation to the specific needs of other applications (for example, adding more multipoles in the boundary if we need to capture smaller scales, or extending the parametrization of the BCs). We are aware that it is possible to drastically improve the efficiency of our computations by employing GPU clusters or enriching our algorithms with advanced machine learning techniques; that will be required, for example, in the extension of this work to 3D.
We have also explored various configurations for our network through a basic hyperparameter space study. We conclude that the most impactful element is the number of neurons per layer. On the contrary, making the network deeper by adding more layers is not beneficial beyond a reasonable minimum. Furthermore, the way the solution is parametrized to always satisfy the BCs is important, indicating that human guidance in some choices (as opposed to a zero-knowledge approach) is still critical for physics problems. We also found that ResNet architectures do not offer any advantage for the kind of applications that we are dealing with. As is often the case in research related to NNs, all the above conclusions are empirical and should be taken with a grain of salt, because it is hard to find a rigorous theoretical foundation to support them.
Our results show that the PINN solutions are relatively accurate, reliable, and well-behaved. For the current-free Grad-Shafranov equation, where comparisons with an analytical solution can be made, we found relative differences of typically less than 1%. For the force-free Grad-Shafranov equation, we propose a method for estimating the error through the use of a finite difference discretisation scheme. Our analysis shows that solutions are accurate up to a point that we associate with the intrinsic approximation error of the PINN. This approach is straightforward to implement, self-consistent, and could become the standard procedure for estimating errors of PINN-based PDE solvers in general. Even if PINNs fall short in terms of precision compared to classical PDE solvers, they can still be used in conjunction with them in various cases. For example, many iterative solvers rely on good initial guesses to converge. A PINN can provide such an initial guess to be subsequently refined by a classical method.
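As an illustration, the error estimate amounts to inserting the PINN output into a finite-difference discretisation of the operator and measuring the residual. The sketch below assumes the spherical-coordinate Grad-Shafranov operator \(\partial_{r}^{2}\mathcal{P}+\frac{1-\mu^{2}}{r^{2}}\partial_{\mu}^{2}\mathcal{P}\) with \(\mu=\cos\theta\); the grid layout and names are illustrative.

```python
import numpy as np

def fd_residual(P, rhs, r, theta):
    """|Delta_GS(P) - rhs| on an (r, theta) grid, where P is the PINN output
    sampled on the grid and rhs is the source term (zero when current-free)."""
    mu = np.cos(theta)
    d2P_dr2 = np.gradient(np.gradient(P, r, axis=0), r, axis=0)
    d2P_dmu2 = np.gradient(np.gradient(P, mu, axis=1), mu, axis=1)
    gs = d2P_dr2 + (1.0 - mu[None, :] ** 2) / r[:, None] ** 2 * d2P_dmu2
    return np.abs(gs - rhs)
```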
The most interesting cases where PINNs overcome the capabilities of classical methods are physical systems with two or more domains that are governed by vastly different physical conditions and time-scales. An example of such a case is the magneto-thermal evolution in the interior of a NS that is connected to a force-free magnetosphere. Solving this problem through a global simulation in the entire domain is very costly due to the requirements of the elliptic solver for the exterior solution. On the contrary, PINNs provide a very effective way of imposing BCs at the interface of the two domains. Once a PINN is trained, accurate enough magnetospheric solutions in a few points (ghost cells of the interior evolution code) can be swiftly computed at each time step without adding an excessive amount of computational cost. We have shown that for the well-tested case of vacuum BCs, the PINN is accurate enough to satisfactorily reproduce the reference results obtained by the exact spectral decomposition approach. Nevertheless, it is about 2 times slower. As a proof of concept, to demonstrate the potential of our approach, we also presented results for the less explored force-free magnetospheric BCs. In this case, the computational cost was more than an order of magnitude smaller than for the similar problem solved with classical methods. Indeed, we obtain solutions that allow currents that thread the NS's
surface and flow into the magnetosphere, giving rise to a new family of internal field configurations. We reserve a more detailed analysis of the physical properties of these solutions in 2D, and the more physically relevant extension to 3D, for future works.
## Acknowledgements
We acknowledge the support through the grant PID2021-127495NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by the European Union, and the Astrophysics and High Energy Physics programme of the Generalitat Valenciana ASFAE/2022/026 funded by MCIN and the European Union NextGenerationEU (PRTRC-C17.11). IFU is supported by the predoctoral fellowship UAFU21-103 funded by the University of Alicante. CD is supported by the ERC Consolidator Grant "MAGNESIA" No. 817661 (P.I. N. Rea) and this work has been carried out within the framework of the doctoral program in Physics of the Universitat Autonoma de Barcelona and it is partially supported by the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M.
## Data Availability
All data produced in this work will be shared on reasonable request to the corresponding author.
|
2305.12889 | Accelerated Bayesian inference of plasma profiles with self-consistent
MHD equilibria at W7-X via neural networks | High-$\langle \beta \rangle$ operations require a fast and robust inference
of plasma parameters with a self-consistent MHD equilibrium. Precalculated MHD
equilibria are usually employed at W7-X due to the high computational cost. To
address this, we couple a physics-regularized NN model that approximates the
ideal-MHD equilibrium with the Bayesian modeling framework Minerva. We show the
fast and robust inference of plasma profiles (electron temperature and density)
with a self-consistent MHD equilibrium approximated by the NN model. We
investigate the robustness of the inference across diverse synthetic W7-X
plasma scenarios. The inferred plasma parameters and their uncertainties are
compatible with the parameters inferred using the VMEC, and the inference time
is reduced by more than two orders of magnitude. This work suggests that MHD
self-consistent inferences of plasma parameters can be performed between shots. | Andrea Merlo, Andrea Pavone, Daniel Böckenhoff, Ekkehard Pasch, Golo Fuchert, Kai Jakob Brunner, Kian Rahbarnia, Jonathan Schilling, Udo Höfel, Sehyun Kwak, Jakob Svensson, Thomas Sunn Pedersen, the W7-X team | 2023-05-22T10:15:34Z | http://arxiv.org/abs/2305.12889v1 | Accelerated Bayesian inference of plasma profiles with self-consistent MHD equilibria at W7-X via neural networks
###### Abstract
High-\(\beta\) operations require a fast and robust inference of plasma parameters with a self-consistent magnetohydrodynamic (MHD) equilibrium. Precalculated MHD equilibria are usually employed at Wendelstein 7-X (W7-X) due to the high computational cost. To address this, we couple a physics-regularized artificial neural network (NN) model that approximates the ideal-MHD equilibrium with the Bayesian modeling framework Minerva. We show the fast and robust inference of plasma profiles (electron temperature and density) with a self-consistent MHD equilibrium approximated by the NN model. We investigate the robustness of the inference across diverse synthetic W7-X plasma scenarios. The inferred plasma parameters and their uncertainties are compatible with the parameters inferred using the variational moments equilibrium code (VMEC), and the inference time is reduced by more than two orders of magnitude. This work suggests that MHD self-consistent inferences of plasma parameters can be performed between shots.
## 1 Introduction
Bayesian inference is the statistical identification of model parameters that are consistent with given observed data using Bayes' theorem. In a magnetic confinement experiment, the model parameters describe the plasma state, which often includes, but is not limited to, the plasma profiles (e. g., temperature and density) and an ideal-magnetohydrodynamic (MHD) equilibrium. Raw diagnostic measurements provide the observed data.
In the case of a high-\(\beta\) (the ratio between the plasma kinetic and the magnetic pressure) scenario or a large internal plasma current, the equilibrium may differ from the vacuum equilibrium. To correctly infer the plasma state, a self-consistent, finite-beta ideal-MHD equilibrium is required. Wendelstein 7-X (W7-X) is an optimized stellarator that features a stiff equilibrium and a low bootstrap current [1, 2] (the expected bootstrap current at high-\(\beta\) in the standard configuration is \(\simeq 80\,\)kA [3]); therefore, equilibrium changes caused by the plasma internal current are predicted to be small with respect to an equivalent tokamak. Finite-beta effects, however, have been experimentally observed, and they can still modify the vacuum equilibrium [4].
In tokamaks, data analysis workflows regularly include the reconstruction of a self-consistent ideal-MHD equilibrium [5, 6, 7]. However, this is not the case with stellarators. The inference of 3D ideal-MHD equilibria has already been demonstrated in several non-axisymmetric experiments, namely the Compact
Toroidal Hybrid (CTH) [8; 9], the National Compact Stellarator Experiment (NCSX) [10], the Helically Symmetric Experiment (HSX) [11; 12; 13], Wendelstein 7-AS (W7-AS) [14], and W7-X [4; 15; 16]. Moreover, multiple frameworks exist that allow the reconstruction of 3D ideal-MHD equilibria: V3FIT [17; 18], STELLOPT [19], and Minerva [20; 52]. However, because of the computational cost of calculating 3D ideal-MHD equilibria, equilibrium reconstruction is not routinely used in Bayesian inference procedures and data analysis pipelines. Vacuum fields, or precalculated finite-beta approximations are used instead [16].
Data-driven approaches have been proposed to reduce the computational cost of constructing stellarator equilibria. Function parametrization was used at W7-AS [14; 21] and at W7-X [22; 23; 24]. More recently, [25] proposed a physics-regularized artificial neural network (NN) to approximate the ideal-MHD solution operator in W7-X configurations. Adopting machine learning (ML) models to approximate experimental equilibria can drastically accelerate sample-intensive applications (e. g., Bayesian inference) that require the computation of a MHD equilibrium.
However, how does the adoption of MHD equilibria approximated by NN models affect the inferred plasma parameters? In this paper, we show the fast (\(\mathcal{O}(10^{2}\,\mathrm{s})\)) and robust inference of plasma profiles (electron temperature and density) based on the Thomson scattering (TS) [26; 27] and single channel dispersion interferometer (DI) [28] diagnostics with a self-consistent MHD equilibrium. The self-consistent MHD equilibrium is approximated by the physics-regularized NN model proposed in [25]. The NN model allows the efficient exploration of the full posterior distribution of plasma profiles with self-consistent MHD equilibria, yielding an estimation of the profile uncertainties. Specifically, we:
* couple the physics-regularized NN model from [25] in the Bayesian framework Minerva (section 2);
* investigate the robustness of the inferred plasma parameters across diverse W7-X synthetic plasma scenarios (sections 3.1 and 3.2);
* compare the reconstructed experimental plasma parameters and their uncertainties (estimated with Markov chain Monte Carlo (MCMC)) with three different MHD equilibrium models (section 3.3): a fixed, finite-beta variational moments equilibrium code (VMEC) equilibrium; a self-consistent VMEC equilibrium; and a self-consistent NN equilibrium.
## 2 Methods
### Bayesian Analysis
In a model-based simulation, a mechanistic model, the model parameters \(H\), and observed data \(D\) characterize the phenomenon under study. In Bayesian analysis, the model parameters and observations are fully captured by their joint probability distribution:
\[P(H,D)=P(D|H)P(H)\, \tag{1}\]
where the _prior_ distribution \(P(H)\) describes the prior knowledge on the model parameters before taking any observations into account (e. g., plasma density should be positive), and the conditional distribution \(P(D|H)\) denotes the probability of the observed data \(D\) given the model parameters \(H\) (when seen as a function of \(H\) for given \(D\), \(P(D|H)\) is also known as the _likelihood_ function).
A parametric distribution function (e. g., Gaussian) usually represents \(P(D|H)\). The likelihood function captures the physical model under consideration and is often referred to as the _forward_ model. Experimentally derived uncertainties inform the distribution variance.
Bayesian _inference_ updates the model parameters prior distribution to their posterior distribution given the observed data using Bayes formula:
\[P(H|D)=\frac{P(H,D)}{P(D)}=\frac{P(D|H)P(H)}{P(D)}\, \tag{2}\]
where the _posterior_ distribution \(P(H|D)\) describes the probability of the model parameters given the observed data, and \(P(D)\) is the _evidence_, a normalization constant that describes the probability of the observed data given all possible values of the model parameters:
\[P(D)=\int P(D|H)P(H)dH. \tag{3}\]
The posterior distribution of the model parameters describes the degree of belief we have in the parameter values, which is based on prior knowledge and updated by the observed data. It encapsulates all the information and uncertainties we have on the model parameters.
### Model Graph
The Bayesian modeling framework Minerva [20, 52] is used in this work. In Minerva, a graph describes the relationship between model parameters and observed data, which are both represented as the graph's nodes. The graph's edges represent the probabilistic relationships between the modeled quantities. At W7-X, several diagnostics are already implemented in Minerva, making it the obvious choice to validate the accelerated inference of plasma profiles with the NN model.
Figure 1 shows a simplified picture of the Minerva graph used in this work. Blue denotes the model free parameters, and orange denotes the observed quantities. A MHD equilibrium maps the 1D electron temperature and density profiles to a 3D position of magnetic flux surfaces in space. The TS [26, 27] and the single channel DI [28] diagnostics are used to constrain the plasma state: the TS provides information on the electron density and temperature, while the DI measures the line-integrated electron density along a single line of sight (LOS).
#### 2.2.1 Prior distributions and forward models
Modified _two power_ profiles parametrize the electron density and temperature:
\[n_{e}(\rho) =h_{n_{e},0}(1-(\rho/h_{n_{e},3})^{h_{n_{e},1}+2})^{h_{n_{e},2}}, \tag{4}\] \[T_{e}(\rho) =h_{T_{e},0}(1-(\rho/h_{T_{e},3})^{h_{T_{e},1}+2})^{h_{T_{e},2}}, \tag{5}\]
where \(\rho=\sqrt{\Phi/\Phi_{\rm edge}}\) is the square root of the normalized toroidal flux. The additional parameters \(h_{T_{e},3}\) and \(h_{n_{e},3}\) (with respect to the standard two-power parametrization) allow modeling non-zero temperatures and densities beyond the last closed flux surface (LCFS), which are usually observed at W7-X [29].
Figure 1: A simplified sketch of the Minerva graph. Blue ellipses represent free parameters, orange ellipses represent observed quantities, white ellipses represent model fixed parameters or model internal quantities, and white boxes represent physical forward models.
Uniform distributions represent the prior on the plasma profile parameters (see section 7.1).
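For illustration, Eqs. (4)–(5) reduce to a single helper; the parameter values below are examples loosely based on Table 3, and the units (keV, \(10^{19}\,\mathrm{m}^{-3}\)) are our assumption.

```python
import numpy as np

def two_power(rho, h0, h1, h2, h3):
    """Modified two-power profile, Eqs. (4)-(5)."""
    return h0 * (1.0 - (rho / h3) ** (h1 + 2.0)) ** h2

rho = np.linspace(0.0, 1.0, 101)
T_e = two_power(rho, h0=3.4, h1=1.0, h2=4.7, h3=1.13)  # keV; h3 > 1 keeps T_e(1) > 0
n_e = two_power(rho, h0=8.3, h1=0.2, h2=2.8, h3=1.33)  # 1e19 m^-3
```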
Flux surfaces \(\rho(x,y,z)\) computed from a free-boundary ideal-MHD equilibrium map the 1D profiles (e. g., \(n_{e}(\rho)\)) to the 3D real space geometry (\(n_{e}(x,y,z)\)). The set of W7-X coil currents fixed to their setpoint values, the net toroidal magnetic flux \(\Phi_{\rm edge}\) represented by a uniform prior distribution, and the 1D pressure and toroidal current profiles parametrize the free-boundary ideal-MHD equilibrium.
In this work, we assume \(T_{i}\simeq T_{e}\), \(Z_{\rm eff}\simeq 1\), and \(n_{i}\simeq n_{e}=n\). As a result, the pressure is simply given by \(p\simeq 2n_{e}T_{e}\). The purpose of this paper is to investigate the use of NN equilibria in a Bayesian inference framework, rather than to provide a highly accurate description of W7-X (e. g., by including an X-ray imaging crystal spectroscopy (XICS) system to infer the ion temperature, or spectrometers to infer the plasma effective ion charge \(Z_{\rm eff}\)). Therefore, the relaxation of these assumptions is not within the scope of this paper.
The NN model sees only the total pressure \(p\) and not \(T_{e}\) or \(T_{i}\). As a result, for the scope of this work, it is not critical how the pressure profile is constructed. The assumptions only limit the set of experimental scenarios that can be used to test the inference procedure. Indeed, the experimental scenarios considered in section 3.3 feature \(T_{i}\simeq T_{e}\) within experimental uncertainties. However, in section 3.2, we use synthetic data to investigate the robustness of the inference procedure across diverse, randomly sampled (within the model parameter prior distributions) W7-X scenarios.
The toroidal current profile is parametrized as:
\[I_{\rm tor}(s)=\frac{2}{\pi}h_{I_{\rm tor},1}{\rm atan}(\frac{h_{I_{\rm tor}, 2}s^{h_{I_{\rm tor},3}}}{(1-s)^{h_{I_{\rm tor},4}}}), \tag{6}\]
where \(s=\rho^{2}\). However, given the lack of diagnostics to constrain its profile [24], the profile shape is fixed: \(h_{I_{\rm tor},2}=h_{I_{\rm tor},3}=h_{I_{\rm tor},4}=1\). Moreover, given that \(I_{\rm tor}(s=1)=h_{I_{\rm tor},1}\), \(h_{I_{\rm tor},1}\) is held fixed to the net toroidal current as measured by the Rogowski coil [30].
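A sketch of Eq. (6) under these choices (note that the argument of the arctangent diverges at \(s=1\), so \(I_{\rm tor}(1)=h_{I_{\rm tor},1}\) by construction):

```python
import numpy as np

def toroidal_current(s, h1, h2=1.0, h3=1.0, h4=1.0):
    """Eq. (6); shape parameters fixed to 1, scale h1 from the Rogowski coil."""
    with np.errstate(divide="ignore"):
        arg = h2 * s**h3 / (1.0 - s) ** h4
    return (2.0 / np.pi) * h1 * np.arctan(arg)
```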
This work considers two MHD equilibrium models: VMEC [31] and a physics-regularized NN [25]. The NN model has been integrated into Minerva with the ONNX Runtime framework [32]. In the proposed Minerva graph, the equilibrium model is employed only to provide the self-consistent flux surface mapping. In this regard, [25] reports that the NN model achieves an average flux surface error of \(\simeq 0.6\,\)mm. The NN model was trained on a large set of W7-X magnetic configurations, including the reference W7-X configurations [33], pressure profiles with \(\langle\beta\rangle\) up to \(5\,\%\), and toroidal current profiles with \(I_{\rm tor}\) up to \(20\,\)kA. W7-X experimental conditions in previous operational campaigns are well within the training data set [34]. For further information on the model architecture, training, and evaluation, please refer to [25].
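Once exported, querying the NN equilibrium inside the graph reduces to an ONNX Runtime call. The sketch below uses the standard onnxruntime API, but the file name, input shapes, and output interpretation are hypothetical placeholders for the actual exported model.

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("w7x_equilibrium_nn.onnx")  # hypothetical file name
p_params = np.zeros((1, 8), dtype=np.float32)  # pressure-profile parametrisation (assumed shape)
i_params = np.zeros((1, 4), dtype=np.float32)  # toroidal-current parametrisation (assumed shape)
names = [inp.name for inp in sess.get_inputs()]
flux = sess.run(None, dict(zip(names, (p_params, i_params))))[0]  # e.g. rho on a fixed grid
```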
The TS observations constrain the electron density and temperature. The TS system's forward model maps \(n_{e}(x,y,z)\) and \(T_{e}(x,y,z)\) to the number of scattered photons in a set of _volumes_ along the TS laser beam path. Polychromator filters are used to measure the scattered spectrum by selecting distinct spectral intervals. The time integral of the signal of each channel over the length of the scattered laser pulse corresponds to the number of scattered photons [27]. Thomson scattering scatters laser photons off electrons, and the Doppler effect broadens the Thomson scattered spectra due to the thermal motion of the electrons. The electron temperature determines the width of the Thomson scattered spectrum, and the electron density determines its amplitude. The predicted TS signal of the \(i\)-th channel is [27]:
\[D^{i}_{\rm TS}=C_{\rm TS}g_{0}E_{\rm laser}r_{e}^{2}n_{e}\int\frac{\lambda}{ \lambda_{\rm laser}}\frac{g_{i}(\lambda)}{g_{0}}S(\lambda,\theta,T_{\rm e})d \lambda\, \tag{7}\]
where the integration is over the scattered spectrum wavelength \(\lambda\). \(g_{0}\) is the absolute sensitivity of a reference channel, \(E_{\rm laser}\) is the laser pulse energy, \(r_{e}\) is the electron radius, \(\lambda_{\rm laser}\) is the laser wavelength, \(g_{i}(\lambda)\) is the absolute sensitivity of the \(i\)-th channel, \(\theta\) is the scattering angle, and \(S(\lambda,\theta,T_{\rm e})\) is the spectral density function. The \(\frac{g_{i}(\lambda)}{g_{0}}\) factor represents the relative calibration of each channel, and the \(C_{\rm TS}\) is the global calibration factor. For a more detailed description of the TS system, please refer to [26, 27].
The forward model of the DI system maps the 3D electron density to the line-integrated value along the DI LOS. The predicted DI signal is:
\[D_{\rm DI}=\int n_{e}(x,y,z)dl, \tag{8}\]
where the integration path \(\int dl\) is along the DI LOS. For a more detailed description of the DI system, please refer to [28].
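Numerically, Eq. (8) is a quadrature along the chord; a minimal sketch, assuming the LOS is given as an ordered array of points:

```python
import numpy as np

def di_signal(n_e, los_points):
    """Line-integrated density along the DI line of sight, Eq. (8).
    n_e maps (x, y, z) arrays to densities; los_points has shape (N, 3)."""
    ne_los = n_e(los_points[:, 0], los_points[:, 1], los_points[:, 2])
    steps = np.linalg.norm(np.diff(los_points, axis=0), axis=1)
    arc = np.concatenate(([0.0], np.cumsum(steps)))
    return np.trapz(ne_los, arc)
```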
The misalignment of the laser system is a common issue that affects TS systems. In a TS system, observation optics with LOSs that cross the laser beam path capture the scattered light. A misaligned laser system can cause the laser beam path to be outside the nominal scattering volumes; in such a case, the inferred density will be lower than the actual plasma density.
At W7-X, the TS system is believed to be affected by a systematic shift of the laser beam [35, 36]. Laser misalignment is expected to be the dominant source of error in the electron density profiles, affecting both the overall scale and shape of the profile.
In this work, we introduce two additional parameters to account for laser misalignment errors: the global TS calibration factor \(C_{\mathrm{TS}}\), and a systematic uncertainty \(\sigma_{\mathrm{laser}}\) that is added to the nominal TS uncertainty. \(C_{\mathrm{TS}}\) is a model free parameter, and it is constrained during inference by the line-integrated density of the DI (i. e., the TS is cross-calibrated during inference). The systematic uncertainty due to laser misalignment is held fixed, and the standard deviation of the \(i\)-th TS channel is modeled as:
\[(\sigma_{\mathrm{TS}}^{i})^{2}=(\hat{\sigma}_{\mathrm{TS}}^{i})^ {2}+(\sigma_{\mathrm{laser}}^{i})^{2}, \tag{9}\] \[\sigma_{\mathrm{laser}}^{i}=\max(\sigma_{0},\alpha D_{\mathrm{ TS}}^{i})\, \tag{10}\]
where \(\hat{\sigma}_{\mathrm{TS}}^{i}\) is the nominal TS uncertainty. \(\alpha\) and \(\sigma_{0}\) are experimentally estimated (see section 7.2). Ongoing efforts at W7-X are in place to account for the TS laser misalignment. A comprehensive modeling and exhaustive analysis of this error is beyond the scope of this paper.
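The error model of Eqs. (9)–(10) simply adds the misalignment term in quadrature:

```python
import numpy as np

def ts_sigma(sigma_nominal, D_ts, sigma0, alpha):
    """Per-channel TS standard deviation, Eqs. (9)-(10)."""
    sigma_laser = np.maximum(sigma0, alpha * D_ts)
    return np.sqrt(sigma_nominal**2 + sigma_laser**2)
```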
### Inference
The inference procedure is divided into two stages. Firstly, in the so-called maximum a posteriori (MAP) optimization, the model free parameters that maximize the left-hand side of equation (2), i. e., the mode of the posterior distribution, are derived:
\[H_{\mathrm{MAP}}=\operatorname*{arg\,max}_{H}P(H|D). \tag{11}\]
The Hooke and Jeeves optimizer [37] drives the optimization. Secondly, the MCMC Metropolis-Hastings algorithm (MHA) [38, 39] numerically approximates the full posterior distribution. The model parameter uncertainties can be derived from the MCMC samples. The inference is carried out on a single core of an Intel(R) Xeon(R) Gold 6136 CPU @ 3 GHz.
We are interested in generating samples from the posterior distribution of the model parameters because they allow propagating their uncertainties into any subsequent quantities. The computation of the samples is not straightforward: either a closed form of the posterior is derived (either analytically, possible only in very seldom cases, or by approximating it with a parametric form as it is done in variational inference), or a sampling algorithm, such as an MCMC method, is used. MCMC methods have desirable mathematical convergence properties, and they do not require assumptions on the parametric form of the posterior distribution. However, they do not scale well to problems where the computation of the likelihood is inefficient because they require several thousands of evaluations of the likelihood function. Replacing the VMEC code with the NN model results in orders of magnitude faster likelihood evaluations, therefore, it brings the Bayesian inference approach to a scale that was not possible before.
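The two-stage procedure can be sketched as follows. Here `log_post` is a stand-in for the unnormalised log-posterior evaluated through the Minerva graph, and Nelder-Mead replaces the Hooke and Jeeves direct search for brevity; burn-in and thinning follow the settings of section 3.3.

```python
import numpy as np
from scipy.optimize import minimize

def log_post(h):
    # Placeholder: in practice, evaluate the priors and forward models on the graph.
    return -0.5 * np.sum(h**2)

def metropolis_hastings(log_post, h0, n_samples, step=0.02):
    """Random-walk Metropolis-Hastings (stage two)."""
    h, lp = np.asarray(h0, dtype=float), log_post(h0)
    chain = np.empty((n_samples, h.size))
    for i in range(n_samples):
        prop = h + step * np.random.randn(h.size)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:  # accept/reject
            h, lp = prop, lp_prop
        chain[i] = h
    return chain

h_map = minimize(lambda h: -log_post(h), np.zeros(9), method="Nelder-Mead").x  # stage one
samples = metropolis_hastings(log_post, h_map, 30_000)[10_000::50]  # burn-in + thinning
```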
## 3 Results
This section investigates the inference of plasma profiles for both synthetic and experimental W7-X data: section 3.1 visualizes the simple reconstruction of two parameters on simulated diagnostic data; section 3.2 investigates the robustness of the inference procedure across diverse W7-X plasma scenarios; section 3.3 compares the reconstruction of plasma profiles and their uncertainties with three different MHD equilibrium models: a fixed, finite-beta VMEC equilibrium; a self-consistent VMEC equilibrium; and a self-consistent NN equilibrium.
### \(2\)D inference on synthetic data
To introduce the inference procedure, and to visualize the free parameter values during inference, a simplified 2D inference on synthetic data is reported. Apart from the electron density scale factor \(h_{n_{e},0}\) and the total toroidal flux enclosed by the plasma \(\Phi_{\rm edge}\), all other model parameters are fixed.
In this section, diagnostic data are simulated with this Minerva graph rather than taken from the experiment. Observations are modeled using normal or truncated normal distributions. The predicted data from the forward model determine the distribution means, and the experimentally informed uncertainties determine the distribution standard deviations. The synthetic diagnostic data are then generated by setting the free parameter values to their target one (see paragraph below), running the forward model with VMEC as the equilibrium solver, setting the observation means to the predicted diagnostic data, and finally drawing samples from the observation distributions. We would like to make two important remarks: the generated synthetic data are obtained using an accurate equilibrium provided by VMEC, while the inference process uses equilibria obtained from the NN model; the synthetic data are sampled from distributions whose sigma is informed by experimental uncertainties, ensuring that they are not noise-free.
A W7-X plasma scenario resembling the post-pellet phase with an enhanced confinement [4] is considered (_W7X20181016.037_ at \(t=1.7\,\)s): central density \(n_{e}(\rho=0)=8.0\times 10^{19}\,\)m\({}^{-3}\), density peaking factor \(n_{e}(\rho=0)/n_{e}(\rho=0.8)\simeq 2.77\), central temperature \(T_{e}(\rho=0)=3.0\,\)keV, flat temperature profile till \(\rho\simeq 0.2\), net toroidal current \(I_{\rm tor}=1.3\,\)kA, and \(\Phi_{\rm edge}=1.945\,\)V s as from the vacuum equilibrium. For simplicity, \(h_{n_{e},3}=h_{T_{e},3}=1.0\). The resulting volume average plasma beta is \(\langle\beta\rangle=1.31\,\)%.
In this simplified 2D inference, the MAP optimization quickly converges to the true values (figure 2): after five iterations, the inferred values (black dots) are already close to the target ones (red star). The inference initial guess (black star) is sampled from the parameter prior distributions. The relative difference between the inferred and true values is less than \(1\,\)% (table 1).
Figure 2 also shows the contours of the full posterior distribution normalized to the posterior distribution of the MAP values in the \((h_{n_{e},0},\Phi_{\rm edge})\) space: a dark color indicates that a region of parameter space is less likely than the MAP values; on the other hand, a light color indicates that a region of parameter space is equally likely as the MAP values. \(10\,000\) random samples are used to interpolate the posterior distribution. Figure 2 also highlights the advantage of a fast MHD equilibrium model: sampling \(10\,000\) parameter values, which requires computing \(10\,000\) self-consistent MHD equilibria, took only about \(20\,\)min (in a serial fashion on a single core). Hence, it is now possible to quickly explore the entire posterior distribution.
### Inference robustness on synthetic data
Is the NN model's approximation of the ideal-MHD equilibrium robust across diverse plasma scenarios? The NN model was trained on a large set of W7-X equilibria for various vacuum magnetic configurations, plasma pressure and toroidal current profiles. In this section, we investigate if the inference procedure is robust across plasma scenarios (e. g., high- and low-\(\beta\), flat or peaked pressure profiles).
Using synthetic data provides the advantage of being able to conveniently scan the parameter space in different trials: for each inference trial, the _true_ state and the _initial guess_ are sampled from the model parameter prior distributions. The plasma state is represented by the vector \(\vec{h}=(h_{T_{e},0},h_{T_{e},1},h_{T_{e},2},h_{n_{e},0},h_{n_{e},1},h_{n_{e},2})^{T}\), while \(h_{T_{e},3}=h_{n_{e},3}=1\) are fixed. Similarly to section 3.1, synthetic data are generated by injecting the true plasma state into the graph, and by setting the observed data to samples drawn from the observation distributions, using VMEC as equilibrium model. The free parameters are then initialized to their initial guesses, and the posterior distribution is maximized for \(100\) iterations. The NN model approximates the MHD equilibrium during inference.
The inference procedure is robust across plasma states and initial guesses (figures 3(a) to 3(f)). Figures 3(a) to 3(f) show the trajectory of the free parameters (normalized to their true values) as a function of the MAP iterations.
Table 1: Reconstructed free parameter values for the 2D inference on synthetic data. See section 2.2.1 for the description of each parameter.

| Parameter | true | initial | inferred | difference [\%] |
| --- | --- | --- | --- | --- |
| \(h_{n_{e},0}\) | 8.00 | 10.30 | 7.98 | 0.19 |
| \(\Phi_{\rm edge}\) | -1.95 | -1.88 | -1.94 | 0.04 |
Because the free parameters are normalized to their true values, a final value of 1 (red dashed line) represents a successfully converged inference. The trajectory means (solid line) and \(5\,\%-95\,\%\) quantiles (dashed lines) are estimated with 25 independent inference trials. The profile scale factors (\(h_{n_{e},0}\) and \(h_{T_{e},0}\)) are accurately inferred, whereas the distributions of the profile shape parameters show a noticeable spread in their estimated values. Yet, the inferred values are on average close to the target values. As a representative case, table 2 shows the true, initial, and inferred parameter values for the median trial (i. e., the trial that has the median average percentage inference error across all trials).
### Inference of improved confinement scenarios at W7-X
In this section, we replace the synthetic diagnostic data with experimental data. We address two questions: how does the approximation of the equilibria by the NN model affect the inferred plasma parameters? What are the implications of a self-consistent MHD equilibrium for plasma parameter estimation?
Table 2: Target, initial guess, and inferred parameter values for the median trial. See section 2.2.1 for the description of each parameter.

| Parameter | true | initial | inferred | difference [\%] |
| --- | --- | --- | --- | --- |
| \(h_{T_{e},0}\) | 3.87 | 4.37 | 3.87 | 0.20 |
| \(h_{T_{e},1}\) | 2.70 | 2.17 | 2.87 | 6.44 |
| \(h_{T_{e},2}\) | 3.19 | 0.20 | 3.38 | 5.93 |
| \(h_{n_{e},0}\) | 13.92 | 12.20 | 13.92 | 0.03 |
| \(h_{n_{e},1}\) | 4.93 | 2.95 | 4.96 | 0.62 |
| \(h_{n_{e},2}\) | 6.73 | 2.90 | 6.72 | 0.22 |
Figure 2: Trajectory of the free parameters during inference. In this case, the free parameters are the electron density scale factor \(h_{n_{e},0}\) and the net toroidal flux enclosed by the plasma \(\Phi_{\rm edge}\). The reconstruction process starts from the black star, which represents the initial guess, and converges to the red star, which represents the true state. The dashed black line indicates the order of parameter values encountered during inference. The synthetic observations are generated using the Minerva graph and VMEC as equilibrium model, while the inference is performed using the NN as equilibrium model. The logarithm of the ratio of the sample posterior probability to the MAP posterior probability is also shown. The small dots represent the \(10\,000\) random samples used to interpolate the posterior distribution.
To address both questions, two plasma scenarios with \(T_{\rm i}\simeq T_{\rm e}\) at different \(\langle\beta\rangle\) and magnetic configurations are considered:
* **W7X20181016.37, \(t=1.7\,\)s:** a \(\langle\beta\rangle>1\,\%\), improved confinement scenario following pellet injections in the _standard_ configuration [4].
* **W7X20180808.5, \(t=20.0\,\)s:** a low-power, improved confinement scenario in the _high-iota_ configuration [40].
In this section, we compare three different ideal-MHD equilibrium models: a reference, fixed, finite-beta VMEC equilibrium (see description below); a self-consistent VMEC equilibrium; and a self-consistent NN equilibrium. Comparing the two self-consistent equilibrium models allows the investigation of the approximations provided by the NN model, while comparing the fixed and self-consistent equilibrium models enables the analysis of the implications of a self-consistent equilibrium on the inferred plasma profiles.
Figure 3: Convergence of the model parameters during inference for 25 independent trials. Each trial yields a randomly sampled set of true and initial guess values. Each figure shows the mean (solid line) and \(5\,\%-95\,\%\) quantiles (dashed lines) of the normalized (to their true value) free parameters as a function of the MAP optimization iterations. A red dashed line guides the eye to indicate the normalized target value (i. e., 1). See section 2.2.1 for the description of each parameter.
The reference VMEC equilibrium is taken from the set of precalculated finite-beta equilibria commonly used at W7-X [16]. The pressure profile is assumed to be \(p_{0}(1-s)^{a_{2}}\), where \(p_{0}\) is the pressure scale, and \(a_{2}\) determines the pressure peaking factor. Estimating \(a_{2}\) from the experimental profiles, and assuming \(p_{0}\) so that \(W_{\rm kin}\) matches the measured \(W_{\rm dia}\) value (with a plasma volume of \(V_{p}=30\,\)m\({}^{3}\)), the closest precalculated equilibrium in terms of Euclidean distance in the \((p_{0},a_{2})\) space is selected. The net toroidal current is less than 2 kA in these experiments and can be neglected. \(\Phi_{\rm edge}\) is set to the toroidal flux enclosed by the LCFS of the vacuum field obtained from field line tracing.
In the case of the self-consistent MHD equilibria, the equilibrium is not fixed, but evaluated consistently with the proposed density and temperature profiles. Nine free parameters represent the plasma state: four profile parameters each for \(n_{e}\) and \(T_{\rm e}\), and a global calibration factor for the TS (see section 2.2.1). \(\Phi_{\rm edge}\) is held fixed. As in section 3.2, the toroidal current profile shape is fixed, but the net toroidal current is set to the value measured by the Rogowski coil.
To improve the inference's convergence, diagnostics raw data inform the initial guess: \(h_{n_{e},0}\) is set to be consistent with the line average integrated density from the DI (assuming a flat density profile and a LOS length of 1.1 m), and \(h_{T_{e},0}\) is chosen so that the plasma's kinetic energy matches the measured diamagnetic energy from the diamagnetic loop (assuming a plasma volume of \(V_{p}=30\,\)m\({}^{3}\), and a linear pressure profile in normalized toroidal flux).
For both considered experiments, the inferred profile parameters with the three MHD equilibrium models are compatible (tables 3 and 4): after 100 MAP iterations, the parameter values are within one standard deviation away from each other. The standard deviations are estimated using 30 000 MCMC samples, of which the first 10 000 are discarded, and only 1 in every 50 are kept to reduce the sample autocorrelation.
It is worth highlighting the time needed to perform each inference (100 MAP iterations on a single core, given here in case of the _W7X20181016.37_ shot): using the self-consistent VMEC equilibrium took 1108 min and 45 s, using the fixed VMEC equilibrium took 9 min and 35 s, and using the NN model took only 3 min and 31 s. The inference time is reduced by more than two orders of magnitude.
As depicted in tables 3 and 4, the three ideal-MHD models yield compatible plasma profiles. This is true not only in terms of MAP profiles, but also for their posterior distributions estimated with MCMC (figures 4a to 4d).
The inferred profiles are also consistent with independently inferred electron temperatures and densities per TS volume (figures 4a to 4d). For the reference Minerva inference, the density and temperature values at each TS volume are independently reconstructed using the fixed, finite-beta VMEC equilibrium as the MHD model, and the density is further scaled to match the line-integrated value from the DI. See [27, Section 5] for a description of the inference procedure.
Tables 5 and 6 show selected quantities of interest of the resulting equilibria. In case of the inference performed with the NN model, the mean and standard deviation (in brackets) of the MCMC samples are reported.
Table 3: Comparison of the inferred free parameters in case of the _W7X20181016.37_ shot at \(t=1.7\) s with the three ideal-MHD models: a fixed, finite-beta precalculated VMEC equilibrium; a self-consistent VMEC equilibrium; and the NN model described in section 2.2.1. The free parameter values are obtained via a MAP optimization, and the standard deviation (in brackets) is estimated with MCMC. Because of the computational time required to run VMEC for each MCMC sample, the MCMC procedure has not been performed in case of the self-consistent VMEC equilibrium.

| Parameter | Fixed VMEC | Self-consistent VMEC | Self-consistent NN |
| --- | --- | --- | --- |
| \(h_{T_{e},0}\) | 3.39 (0.16) | 3.403 | 3.40 (0.17) |
| \(h_{T_{e},1}\) | 1.04 (0.36) | 1.038 | 1.03 (0.38) |
| \(h_{T_{e},2}\) | 4.7 (1.9) | 4.834 | 4.7 (1.9) |
| \(h_{T_{e},3}\) | 1.129 (0.081) | 1.138 | 1.130 (0.080) |
| \(h_{n_{e},0}\) | 8.27 (0.51) | 8.341 | 8.35 (0.50) |
| \(h_{n_{e},1}\) | 0.16 (0.32) | 0.140 | 0.14 (0.33) |
| \(h_{n_{e},2}\) | 2.77 (0.74) | 2.726 | 2.71 (0.89) |
| \(h_{n_{e},3}\) | 1.330 (0.072) | 1.330 | 1.328 (0.079) |
Figure 4: Comparison of the inferred electron density (figures 4a and 4c) and temperature (figures 4b and 4d) profiles with the following MHD models: a fixed, finite-beta precalculated VMEC equilibrium; a self-consistent VMEC equilibrium; and the self-consistent NN model described in section 2.2.1. The solid lines represent the mean profile across all MCMC samples (in case of the self-consistent VMEC equilibrium, the solid line is the MAP profile), and dashed lines represent the \(5\,\%-95\,\%\) quantiles. In addition, the values (as well as their standard deviation) of the electron density and temperature at each TS volume location, from a reference Minerva inference, are also shown (red stars). The shot id and time of each reconstructed slice are shown in each figure title.
The inferred MHD equilibria obtained using the self-consistent VMEC and NN models feature compatible properties (tables 5 and 6): the inferred values with the self-consistent NN model are within one standard deviation with respect to the values inferred with the self-consistent VMEC equilibrium. It is worth noting that, for the two experimental scenarios investigated, the assumed pressure profile in the precalculated fixed equilibrium (first column in the tables) differs from the inferred pressure profiles (second and third columns in the tables).
Table 6: Comparison of selected quantities of the inferred equilibria of experiment _W7X20180808.5_ at \(t=20.0\) s with the three MHD models, analogous to table 5. \(W_{\rm dia}=232.188\) kJ.

| Quantity | Fixed VMEC | Self-consistent VMEC | Self-consistent NN |
| --- | --- | --- | --- |
| \(\langle\beta\rangle\) [\%] | 0.253 | 0.273 | 0.268 (0.034) |
| \(\iota_{\rm axis}\) | 1.012 | 1.003 | 1.00253 (0.00026) |
| \(\iota_{\rm LCFS}\) | 1.193 | 1.172 | 1.17385 (0.00012) |
| \(W_{\rm kin}\) [kJ] | 218 | 236 | 228 (29) |
| \(R_{\rm axis}(\varphi=0)\) [m] | 5.976 | 5.977 | 5.97641 (0.00092) |
| \(p_{0}\) [kPa] | 20.0 | 26.3 | 25.3 (3.2) |
| pressure peaking factor | 3.47 | 4.22 | 4.13 (0.31) |
Table 4: Comparison of the inferred free parameters in case of the _W7X20180808.5_ shot at \(t=20.0\) s with the three ideal-MHD models. See table 3 for a description of each row.

| Parameter | Fixed VMEC | Self-consistent VMEC | Self-consistent NN |
| --- | --- | --- | --- |
| \(h_{T_{e},0}\) | 1.930 (0.089) | 1.928 | 1.915 (0.088) |
| \(h_{T_{e},1}\) | 0.001 (0.323) | 0.001 | 0.002 (0.344) |
| \(h_{T_{e},2}\) | 4.1 (1.4) | 4.126 | 4.4 (1.4) |
| \(h_{T_{e},3}\) | 1.330 (0.087) | 1.330 | 1.330 (0.086) |
| \(h_{n_{e},0}\) | 4.32 (0.49) | 4.258 | 4.21 (0.50) |
| \(h_{n_{e},1}\) | 2.52 (0.65) | 2.524 | 2.35 (0.68) |
| \(h_{n_{e},2}\) | 8.2 (1.9) | 8.535 | 6.4 (1.9) |
| \(h_{n_{e},3}\) | 1.079 (0.075) | 1.090 | 1.051 (0.072) |
Table 5: Comparison of selected quantities of the inferred equilibria of experiment _W7X20181016.37_ at \(t=1.7\) s with three MHD models: a fixed, finite-beta precalculated VMEC equilibrium; a self-consistent VMEC equilibrium; and the self-consistent NN model as described in section 2.2.1. In case of the inference performed with the self-consistent NN model, the mean and standard deviation (in brackets) of the MCMC samples are reported. In these experimental conditions, the measured diamagnetic energy is \(W_{\rm dia}=1137.661\) kJ.

| Quantity | Fixed VMEC | Self-consistent VMEC | Self-consistent NN |
| --- | --- | --- | --- |
| \(\langle\beta\rangle\) [\%] | 1.176 | 1.067 | 1.083 (0.064) |
| \(\iota_{\rm axis}\) | 0.862 | 0.857 | 0.8597 (0.0012) |
| \(\iota_{\rm LCFS}\) | 0.971 | 0.974 | 0.97414 (0.00014) |
| \(W_{\rm kin}\) [kJ] | 1198 | 1089 | 1090 (64) |
| \(R_{\rm axis}(\varphi=0)\) [m] | 5.983 | 5.987 | 5.9876 (0.0028) |
| \(p_{0}\) [kPa] | 80.0 | 91.1 | 90.6 (6.7) |
| pressure peaking factor | 2.98 | 3.74 | 3.65 (0.19) |
## 4 Conclusions
In this work, we found that employing the NN model from [25] as ideal-MHD equilibrium model in the Minerva Bayesian framework yields inferred plasma parameters compatible with the parameters obtained using VMEC. We have also investigated the robustness of the NN accelerated inference using synthetic data, and we observed that it is robust across diverse W7-X plasma scenarios.
The inference time can be reduced by more than two orders of magnitude when using the NN model. Moreover, given the fast computational time, plasma parameter posterior distributions can be approximated using MCMC samples with self-consistent ideal-MHD equilibria.
This work has two major limitations: no magnetic diagnostics (except for the Rogowski coil) are included in the Minerva graph, owing to the lack of accuracy of the NN model in faithfully computing the plasma current density\({}^{1}\), which is required to predict the magnetic diagnostics observations (the plasma contribution to the magnetic field at a location outside the plasma volume depends on a volume integral of the plasma current density [41]); and \(T_{i}\simeq T_{\mathrm{e}}\) and \(Z_{\mathrm{eff}}\simeq 1\) have been assumed, so the approach can be tested only on a limited set of experimental conditions where these assumptions are close to being satisfied. To infer additional plasma parameters, more diagnostics can be included in the Minerva graph. A XICS can be used to constrain the ion temperature [42], and a spectrometer can be used to infer the effective charge \(Z_{\mathrm{eff}}\) [43].
Footnote 1: the plasma current density depends on derivatives of the equilibrium solution up to second order, and the equilibrium solution provided by the NN model does not have smooth second-order radial derivatives [25]
Using NN ideal-MHD models, it is possible to quickly infer plasma parameters while taking into account finite-beta effects in the equilibrium. If diagnostic data are available, the MHD self-consistent inference of plasma parameters can theoretically be performed between shots, providing valuable insights for the conduct of operational campaigns.
## 5 Author Statement
The contributions to this paper are described using the CRediT taxonomy [44]:
**Andrea Merlo**: Conceptualization, Data Curation, Formal Analysis, Investigation, Methodology, Software, Visualization, Writing - original draft, Writing - review & editing.
**Andrea Pavone**: Conceptualization, Data Curation, Formal Analysis, Investigation, Methodology, Software, Writing - review & editing.
**Daniel Böckenhoff**: Methodology, Software, Supervision, Validation, Writing - review & editing.
**Ekkehard Pasch**: Data Curation.
**Kai Jakob Brunner**: Data Curation.
**Kian Rahbarnia**: Data Curation.
**Jonathan Schilling**: Software.
**Udo Höfel**: Software.
**Sehyun Kwak**: Software.
**Jakob Svensson**: Software.
**Thomas Sunn Pedersen**: Funding acquisition, Supervision.
## 6 Acknowledgement
We are indebted to the communities behind the multiple open-source software packages on which this work depends: hydra [45], matplotlib [46], numpy [47], pymc3 [48], pytorch [49], pytorch lightning [50], scipy [51].
Financial support by the European Social Fund (ID: ESF/14-BM-A55-0007/19) and the Ministry of Education, Science and Culture of Mecklenburg-Vorpommern, Germany via the project "NEISS" is gratefully acknowledged. This work has been carried out within the framework of the EUROfusion
Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them.
|
2310.16695 | From Pointwise to Powerhouse: Initialising Neural Networks with
Generative Models | Traditional initialisation methods, e.g. He and Xavier, have been effective
in avoiding the problem of vanishing or exploding gradients in neural networks.
However, they only use simple pointwise distributions, which model
one-dimensional variables. Moreover, they ignore most information about the
architecture and disregard past training experiences. These limitations can be
overcome by employing generative models for initialisation. In this paper, we
introduce two groups of new initialisation methods. First, we locally
initialise weight groups by employing variational autoencoders. Secondly, we
globally initialise full weight sets by employing graph hypernetworks. We
thoroughly evaluate the impact of the employed generative models on
state-of-the-art neural networks in terms of accuracy, convergence speed and
ensembling. Our results show that global initialisations result in higher
accuracy and faster initial convergence speed. However, the implementation
through graph hypernetworks leads to diminished ensemble performance on out of
distribution data. To counteract, we propose a modification called noise graph
hypernetwork, which encourages diversity in the produced ensemble members.
Furthermore, our approach might be able to transfer learned knowledge to
different image distributions. Our work provides insights into the potential,
the trade-offs and possible modifications of these new initialisation methods. | Christian Harder, Moritz Fuchs, Yuri Tolkach, Anirban Mukhopadhyay | 2023-10-25T15:06:32Z | http://arxiv.org/abs/2310.16695v1 | # From Pointwise to Powerhouse: Initialising Neural Networks with Generative Models
###### Abstract
Traditional initialisation methods, e.g. He and Xavier, have been effective in avoiding the problem of vanishing or exploding gradients in neural networks. However, they only use simple pointwise distributions, which model one-dimensional variables. Moreover, they ignore most information about the architecture and disregard past training experiences. These limitations can be overcome by employing generative models for initialisation.
In this paper, we introduce two groups of new initialisation methods. First, we _locally_ initialise weight groups by employing variational autoencoders. Secondly, we _globally_ initialise full weight sets by employing graph hypernetworks. We thoroughly evaluate the impact of the employed generative models on state-of-the-art neural networks in terms of accuracy, convergence speed and ensembling. Our results show that _global_ initialisations result in higher accuracy and faster initial convergence speed. However, the implementation through graph hypernetworks leads to diminished ensemble performance on out of distribution data. To counteract, we propose a modification called _noise graph hypernetwork_, which encourages diversity in the produced ensemble members. Furthermore, our approach might be able to transfer learned knowledge to different image distributions. Our work provides insights into the potential, the trade-offs and possible modifications of these new initialisation methods.
## 1 Introduction
Neural networks have shown remarkable success in various computer vision tasks, such as image classification [5, 15, 35, 47], segmentation [4, 18, 41], and detection [10, 34]. Their performance depends on several factors, including their architecture, quality of data and available computing resources. Among these factors, the **choice of weight initialisation** plays a crucial role in the network's performance [43]. A proper initialisation greatly enhances training convergence, whereas a poor one can hinder it [15]. These findings have sparked entire fields of research dedicated to exploring innovative approaches. Techniques such as self-supervised learning [19], knowledge distillation [12], and transfer learning
[44] have emerged as notable approaches. In contrast, our work focuses on initialisations without training during the weight generation.
Traditional initialisation methods such as He [15] and Xavier [11] initialisation were designed to avoid the problem of vanishing or exploding gradients. Despite their effectiveness, these approaches have **two significant weaknesses**. Firstly, they rely on simple pointwise distributions, which model one-dimensional variables. Doing so overlooks the direct connections between neural network weights and results in suboptimal initialisations. Secondly, they disregard important architectural information and knowledge from past trainings of similar architectures. This practice amplifies the already substantial financial and environmental costs associated with training networks on large datasets [38].
In Bayesian Neural Networks (BNN) [2, 9, 13, 20, 24, 37, 48], where neural networks are combined with stochastic modelling, researchers assume arbitrary Gaussian distributions over the weights [39]. However, the deep-weight-prior [1] stands out as an exception. It employs generative models to learn a _local_ distribution of trained weights, which is then utilised for initialising small BNNs. Their findings indicate that leveraging generative models for neural network initialisation leads to improved convergence.
Building upon the concept of leveraging generative models for initialisation, we present **two groups of new initialisation methods**: _local_ and _global_ initialisations, which address the limitations of traditional methods. They employ complex weight distributions, incorporate comprehensive architectural information, and leverage past training experiences. While both share the common goal of improving initialisation, they differ in the scope of initialisation, the architectural information considered, and the specific generative models used to learn weight distributions.
Figure 1: Traditional initialisation methods only consider layer dimensions, while being simple pointwise distributions. Utilising generative models, we consider significantly more architecture information and model complete weight sets by employing graph hypernetworks (GHN).
With _local_ **initialisations**, we sample small groups of weights from learned distributions using variational autoencoders (VAE) [21, 23, 45]. In contrast, _global_ **initialisations** obtain full weight sets conditioned on the network's architecture, accomplished through graph hypernetworks (GHN) [25].
We evaluate these methods on state-of-the-art deep convolutional neural networks (CNN), focusing on convergence speed, accuracy and ensembling. Our findings reveal that _global_ initialisations achieve **faster initial convergence and higher accuracy**. However, these benefits come with a potential trade-off: **reduced generalisation ability** of ensembles.
To deepen our understanding of this trade-off, we explore the diversity of the different initialisation methods. We identify missing diversity in GHN initialisations as the cause for its reduced generalisation ability. Motivated by this, **we propose a modification called the Noise GHN**, which introduces diversity into GHNs by modelling a non-deterministic distribution. This is achieved by injecting noise into the GHN decoder and using a modified loss function to encourage the production of diverse weight sets. The noise injection also increases model robustness, enabling better learning of essential features and preventing overfitting. Consequently, the Noise GHN might be able to transfer learned knowledge to different image distributions.
## 2 Related Work and Preliminaries
Variational autoencoders and graph hypernetworks are the backbone of our approach. We offer a concise overview of these frameworks, highlighting their essential aspects and functionalities relevant to our work.
#### 2.0.1 Variational Autoencoders
VAEs are generative neural networks designed to learn and capture the underlying distribution of a given dataset. Once trained, these networks can be effectively utilised to generate diverse forms of data, such as images [42, 45] or natural language [7, 33].
A VAE consists of two main components: an encoder model \(Q_{\boldsymbol{\Phi}}(\mathbf{Z}|\mathbf{X})\) and a decoder model \(P_{\boldsymbol{\theta}}(\mathbf{X}|\mathbf{Z})\). The encoder maps input data to a latent space, while the decoder reconstructs the input data from the latent space. VAEs are optimised by maximising a lower bound on the log-likelihood of the data, called the evidence lower bound (ELBO). In doing so, they learn a meaningful latent space representation that captures the underlying data structure.
For a given data point \(x\), a prior \(P(\mathbf{Z})\) on the latent variables \(\mathbf{Z}\) and the parameters \(\boldsymbol{\Phi}\) and \(\boldsymbol{\theta}\) of the VAE, the ELBO can be expressed as:
\[\mathcal{L}_{\boldsymbol{\theta},\boldsymbol{\Phi}}(\mathbf{x})=\mathbb{E}_{Q_{\boldsymbol{\Phi}}(\mathbf{z}|\mathbf{x})}\left[\log P_{\boldsymbol{\theta}}(\mathbf{x}|\mathbf{z})\right]-\mathrm{KL}[Q_{\boldsymbol{\Phi}}(\mathbf{Z}|\mathbf{x})\,||\,P(\mathbf{Z})]. \tag{1}\]
The VAE's optimisation is driven by the expected value of the log density. It promotes the generation of outputs similar to the input. Simultaneously, the KL Divergence between the encoder model and the prior acts as a regularisation
term for the latent space. By default, we assume simple Gaussian distributions for the prior, encoder, and decoder models conditioned on the input. This choice offers the advantage of closed-form KL Divergence computation for equation 1, without significantly limiting the model's expressive capacity.
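Under these Gaussian assumptions, equation 1 admits a particularly compact implementation. The following is a minimal sketch (not the authors' code) of the resulting training loss in PyTorch, where `mu` and `log_var` are assumed to be the encoder outputs and the decoder is taken to be Gaussian with unit variance, so the reconstruction term reduces to a squared error up to constants:

```python
import torch

def reparameterise(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (reparameterisation trick)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def negative_elbo(x, x_recon, mu, log_var):
    """Negative ELBO of equation 1 for a Gaussian encoder N(mu, diag(exp(log_var)))
    and a standard-normal prior N(0, I)."""
    # E_q[-log p(x|z)] up to additive constants: squared reconstruction error
    recon = 0.5 * torch.sum((x_recon - x) ** 2)
    # Closed-form KL[N(mu, sigma^2) || N(0, I)], summed over latent dimensions
    kl = -0.5 * torch.sum(1.0 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl  # minimising this maximises the ELBO
```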
Introducing a discrete latent space can encourage the model to learn a more structured representation, as it has to navigate the limited capacity of such a space. To achieve this, Vector Quantisation (VQ) is employed. VAEs utilising VQ are called Vector Quantised Variational Autoencoders (VQVAE) [45]. There, continuous outputs from the encoder network are mapped to discrete points in the latent space using a codebook. To enhance convergence, a codebook loss is incorporated. The authors also suggest employing an autoregressive model called PixelCNN [29] alongside the VQVAE, which we denote as VQVAE*.
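The codebook lookup at the heart of the VQVAE can be sketched as follows. This is an illustrative reimplementation under our reading of [45], not the original code; the straight-through gradient copy is the standard trick used there:

```python
import torch

def vector_quantise(z_e, codebook):
    """Map continuous encoder outputs z_e (N, D) to their nearest entries of a
    codebook (K, D). Returns the quantised latents and the selected indices."""
    dists = torch.cdist(z_e, codebook) ** 2   # (N, K) squared distances
    idx = dists.argmin(dim=1)                 # nearest codebook entry per encoding
    z_q = codebook[idx]                       # (N, D) quantised latents
    # Straight-through estimator: gradients bypass the non-differentiable argmin
    z_q = z_e + (z_q - z_e).detach()
    return z_q, idx
```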
Although there are various VAE variations based on different encoder and decoder assumptions [3, 21, 27, 40], this work does not delve into exploring them. We study the VQVAE's ability to initialise neural networks.
#### 2.0.2 Graph Hypernetworks
The GHN [25, 49] is a generative model that produces complete sets of network weights. Unlike VAEs, which are limited to fixed-size data, a single GHN can be used with various network architectures. The resulting weights are immediately effective, achieving an impressive 58.6% accuracy on CIFAR-10 for a ResNet-50 architecture that it has never seen before. The GHN combines two key concepts: the hypernetwork [14] and the graph network [36].
A hypernetwork is a neural network that takes an input and produces weights for another neural network. The inputs can include various information, such as the network architecture or data points. The hypernetwork is trained by directly backpropagating the loss of the predicted parameters into the hypernetwork.
The second key component is the graph network, which is specifically designed to process graph-based inputs with varying shapes. During the forward pass, a graph network updates the states of the nodes by propagating information along the edges of the graph. By representing network architectures as computational graphs, the GHN leverages the graph network to obtain states for each node. These states are then used to generate weights for each layer by feeding them into the hypernetwork. Finally, the produced weights are normalised and adjusted to match the dimensions of each layer using slicing and tiling techniques.
To optimise the GHN \(H_{\mathbf{\theta}}\) with parameters \(\mathbf{\theta}\), we employ mini-batch optimisation over two sets of data: batches \(b\) of images and batches \(b_{m}\) of architectures:
\[\mathcal{L}=\sum_{i=1}^{b}\sum_{j=1}^{b_{m}}L\left(f\left(\mathbf{x}_{i},a_{j},H_{\mathbf{\theta}}(a_{j})\right),\mathbf{y}_{i}\right), \tag{2}\]
where \(f(\mathbf{x},a,\mathbf{w})\) represents the forward pass of input \(\mathbf{x}\) through the network architecture \(a\) with weights \(\mathbf{w}\). The function \(L\) denotes the loss function used in the optimisation process.
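A single mini-batch step of equation 2 can be sketched as follows. This is an illustrative outline rather than the reference implementation; in particular, `forward_with_weights` is a hypothetical helper that runs architecture `a` on a batch of images using externally supplied weights:

```python
import torch

def ghn_training_step(ghn, architectures, images, labels, loss_fn, optimiser):
    """One step of equation 2: predict a full weight set per sampled architecture,
    evaluate the task loss of those weights, and backpropagate into the GHN."""
    optimiser.zero_grad()
    total = torch.zeros(())
    for a in architectures:                                # batch b_m of architectures
        weights = ghn(a)                                   # H_theta(a)
        logits = forward_with_weights(a, images, weights)  # f(x, a, w), hypothetical helper
        total = total + loss_fn(logits, labels)
    total.backward()                                       # gradients flow into theta
    optimiser.step()
    return total.item()
```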
Our approach differs from the work of the original authors [25], as they focus on generating deterministic weights with high performance in a single forward pass.
## 3 Methods
VAEs and GHNs capture expressive distributions, enabling us to leverage them for initialising state-of-the-art convolutional classification networks. Their key strength lies in their ability to incorporate architecture information and leverage knowledge gained from past trainings of similar architectures. We now explain further how we utilise these generative models.
### Local Initialisations
Our focus on _local_ initialisations revolves around capturing patterns within the weights of CNNs. As convolutional networks progress, they transition from extracting edges and colour blobs in early layers to capturing higher-level features in later layers [32]. Motivated by the specific attributes of filters like edge detectors, we assume that the weights within a CNN layer follow an unknown distribution.
Given a neural network architecture, we train a set of variational autoencoders, one for each layer, to learn the unknown local distributions. Once trained, these generative models enable the production of weights that adhere to the learned distributions.
We evaluate our _local_ initialisations on the well-known ResNet-20 architecture. For every layer, we learn the distribution of the \(3\times 3\) weight slices that constitute the convolutional kernels. To this end, we separately train 100 ResNet-20s on the training datasets. Afterwards, we take the parameters that perform best on the corresponding validation sets. As advised in [1], we remove slices with a low \(l_{2}\) norm by disregarding the slices whose \(l_{2}\) norm is in the lowest 5% for every layer. We call these datasets of weight slices the _Weight-Datasets_. Finally, we train VAEs and VQVAEs on them to capture the underlying distributions.
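Construction of one layer's Weight-Dataset can be sketched as follows; `trained_kernels` is a hypothetical array collecting all \(3\times 3\) slices of that layer across the 100 trained networks:

```python
import numpy as np

def build_weight_dataset(trained_kernels, drop_frac=0.05):
    """Keep only the 3x3 weight slices whose l2 norm exceeds the lowest
    `drop_frac` quantile, following the filtering advised in [1].
    trained_kernels: float array of shape (num_slices, 3, 3)."""
    flat = trained_kernels.reshape(len(trained_kernels), -1)
    norms = np.linalg.norm(flat, axis=1)        # l2 norm of each slice
    threshold = np.quantile(norms, drop_frac)   # lowest-5% cutoff by default
    return trained_kernels[norms > threshold]
```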
### The Global Approach
We introduce the Noise GHN to transfer the ability to produce instantly performing network weights to ensembles. Furthermore, we adapt the training routine of the GHNs to reduce training resources.
#### 3.2.1 Reducing GHN Training Resources
The GHN is trained on the DEEPNETS1M [25] dataset, which features a million network architectures as well as validation/test sets of 500/500 architectures. Every iteration samples 64 images and eight architectures. This training setup is conducted over 300 epochs of the image dataset, which exceeds our computational resources.
To conserve resources, we reduce the number of training architectures from eight to three and train for only 30 instead of 300 image dataset epochs. To ensure the effectiveness of training, we select training architectures similar to the evaluation architecture of a ResNet-20. Specifically, we choose a ResNet-32, a ResNet-44, and a ResNet-56 and use all three architectures in every iteration.
To compensate for the reduced amount of training and training network diversity, we initialise the graph networks of the GHNs by using already trained weights, which are provided by Knyazev [25]. As we modify the hypernetwork for the Noise GHN, we initialise all GHN hypernetworks from scratch in order to ensure a fair comparison.
#### 3.2.2 Modified GHN
Since the GHN is a deterministic model when given a fixed architecture, its employment results in more similar weights in the trained ensemble members. As ensembles benefit from the diversity of their members, this lack of diversity poses a potential problem.
To achieve different weights in every forward pass, we integrate noise into the GHN, as shown in Figure 2. Specifically, we sample and append a noise vector to every final hidden state of the graph network. This design allows the hypernetwork to generate varied outputs while learning to produce effective weights based on architecture encodings.
Additionally, we encourage diversity in the loss function by including a similarity loss into the overall loss function. To measure similarity, we first perform two forward passes of the Noise GHN for the same architecture. Then we calculate the similarity of predictions on a batch of images. Doing so, we ensure that the two produced weight sets correspond to two different functions.
Given a Noise GHN \(H_{\mathbf{\theta}}\), with parameters \(\mathbf{\theta}\), a network architecture \(a\), a sample \((\mathbf{x},\mathbf{y})\) from the dataset \(D=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\) and samples \(\xi_{1},\xi_{2}\) from a noise distribution, the similarity loss \(\mathcal{L}_{S}\) is calculated as:
\[\mathcal{L}_{S}\big(\mathbf{x},a,H_{\boldsymbol{\theta}},\xi_{1},\xi_{2}\big)=\text{CosSim}\Big(f^{(1)}\big(\mathbf{x},a,H_{\boldsymbol{\theta}}(a,\xi_{1})\big),f^{(2)}\big(\mathbf{x},a,H_{\boldsymbol{\theta}}(a,\xi_{2})\big)\Big) \tag{3}\]
Figure 2: Functioning principle of the Noise GHN. The input, a network architecture, is expressed by a computational graph together with initial hidden states \(H^{0}\) for every node. The graph network propagates information through the computational graph resulting in final hidden states \(H^{T}\), encoding the function of each node. Every final hidden state \(h_{i}^{T}\) is then fed into a hypernetwork, together with a sampled noise vector. The hypernetwork is encouraged by the loss function to produce performant and diverse weights for every node. In a final step, the produced weights are normalised and fitted to the layer dimensions.
where \(f^{(k)}(\mathbf{x},a,\mathbf{w})\) represents the \(k\)-th forward pass with input \(\mathbf{x}\) into the network architecture \(a\) with weights \(\mathbf{w}\), and CosSim denotes the _cosine similarity_.
The overall loss function \(\mathcal{L}\) for a mini-batch \(b\) of images and the three training architectures can then be expressed as:
\[\mathcal{L}=\sum_{i=1}^{b}\sum_{j=1}^{3}\Bigg[\sum_{k=1}^{2}L\Big(f^{(k)}\big(\mathbf{x}_{i},a_{j},H_{\boldsymbol{\theta}}(a_{j},\xi_{k})\big),\mathbf{y}_{i}\Big)+\mathcal{L}_{S}\big(\mathbf{x}_{i},a_{j},H_{\boldsymbol{\theta}},\xi_{1},\xi_{2}\big)\Bigg]. \tag{4}\]
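For a single architecture and image batch, the Noise GHN objective of equations 3 and 4 can be sketched as below; again, `forward_with_weights` is the hypothetical helper from the earlier sketch, and the noise dimension is an assumed hyperparameter:

```python
import torch
import torch.nn.functional as F

def noise_ghn_loss(ghn, arch, images, labels, loss_fn, noise_dim=32):
    """Two forward passes with independent noise vectors yield two weight sets;
    their task losses are summed, and a cosine-similarity penalty on the two
    prediction batches (L_S) encourages the weight sets to realise different
    functions."""
    xi1, xi2 = torch.randn(noise_dim), torch.randn(noise_dim)
    w1, w2 = ghn(arch, xi1), ghn(arch, xi2)
    logits1 = forward_with_weights(arch, images, w1)   # f^(1)(x, a, H(a, xi1))
    logits2 = forward_with_weights(arch, images, w2)   # f^(2)(x, a, H(a, xi2))
    task = loss_fn(logits1, labels) + loss_fn(logits2, labels)
    l_s = F.cosine_similarity(logits1, logits2, dim=1).mean()
    return task + l_s
```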
The conceptual differences between the _local_ and the _global_ initialisations can be explained along two dimensions, as seen in Figure 1. First, _global_ initialisations utilise a distribution that enables initialising all weights in a meaningfully connected way, as opposed to initialising all weights independently of one another. Since the GHN is trained to produce already working weights, the produced weights are synchronised from the start. In contrast, the _local_ initialisations focus on small weight groups, which are initialised independently of one another. Secondly, the _global_ initialisations condition the weights on the whole architecture, while the _local_ initialisations confine themselves to information about the layer position.
## 4 Experiments and Results
We evaluate various network initialisations on CIFAR-10 [26] and the medical PatchCamelyon [46] (PCam) dataset to assess accuracy and convergence. Additionally, we evaluate ensemble accuracy and calibration on the out-of-distribution (OOD) CIFAR-C [17] dataset. Finally, we investigate the Noise GHN's generalisation ability from natural images to the medical domain. For more detailed information on the datasets and experimental setups, please refer to the supplementary material.
#### 4.0.1 Convergence Speed
We study convergence speed resulting from different initialisations by evaluating every 300 batches on PCam and every epoch on the CIFAR-10 dataset. Results are averaged over 25 trainings per initialisation. Comparing the convergence speed, we measure the steps to reach specified accuracy thresholds. PCam's thresholds are 0.80 and 0.85, while CIFAR-10's are 0.65 and 0.75. The visualisations can be found in Figure 3, while more detailed results, along with an explanation of our threshold selection, are available in Table 1 and Figure 1 in the supplementary material.
Our _global_ initialisation consistently outperforms the other methods in early training. On the PCam dataset, GHN-initialised ResNet-20s reach the first threshold after the initial evaluation step, while other initialisations require five or more steps. The GHN initialisation also reaches the second threshold almost twice as fast as the alternatives.
**The _global_ scope** for information and initialisation enhances the GHN's initial convergence, reaching thresholds **up to 5 times faster**. However, there is no
noticeable difference between pointwise standard initialisations and _local_ initialisations. Information about the target layer and initialising small weight groups shows no training improvement.
Since the learning rate scheduling is the same for all initialisations, we cannot assess how the accelerated initial convergence affects the overall training time. Despite not reaching state-of-the-art accuracies during training due to our training routine, this experiment yields promising results. Therefore, further research in this direction holds great potential.
#### 4.0.2 Accuracy
We assess test set accuracies of ResNet-20s with different initialisations, training 25 networks for each. Figure 4 shows the resulting accuracies. The _global_ initialisation on PCam outperforms the others, with **all GHN initialisations** achieving **higher accuracy**. On CIFAR-10, GHN initialisation achieves a higher median accuracy, but by a smaller margin. However, no notable differences appear between traditional and _local_ initialisations.
The _global_ scope for information and initialisation leads to faster convergence and higher test set accuracy. Conversely, there are no significant performance differences between _local_ and standard initialisations. The payoff does not increase proportionally with expanded scope and information. This can be explained by a key advantage of our global GHN initialisation, which produces instantly performing and synchronised weight sets. The importance of this **synchronicity for fast initial convergence and higher accuracy** is a crucial insight, highlighting the potential benefits of employing generative models for initialisations.
#### 4.0.3 OOD Ensembling
Accurate initialisations and fast convergence are crucial in the context of ensembling. Ensembles provide higher accuracy and improved uncertainty estimation. These advantages are particularly important for OOD data. To examine the effect of different initialisations on ensembles, we evaluate their expected calibration error (ECE) and accuracy. Note that a lower ECE indicates better calibration. We provide a detailed explanation of its calculation in the supplemental material.
Figure 3: Trajectories of validation accuracy during the initial training phase. GHN initialisations outperform the other initialisations with a faster initial convergence.
The ensembles consist of the 25 trained ResNet-20 networks per initialisation. We sample 20 ensembles with 5 members each and calculate the ECE across all 5 corruption levels of the CIFAR-C dataset. We present the results in Figure 5. The corresponding accuracy results can be found in Figure 2 in the supplementary material, showing similar trends.
The VAE and VQVAE* based initialisations yield the lowest ECE values. Surprisingly, the _global_ GHN initialisation shows the highest ECE value, despite its faster convergence and higher accuracy. High accuracy does not necessarily guarantee good OOD performance; nevertheless, the magnitude of the performance shift is striking. This discrepancy is further investigated in the next section.
#### 4.0.4 Prediction Similarities
Ensemble uncertainty estimation and accuracy rely on member diversity [8]. We analyse this diversity by examining prediction similarity and cosine similarity of logits on the CIFAR-10 test set. Figure 6 illustrates pairwise similarities and average scores, computed over the strictly upper-triangular part of the similarity matrix. Lower similarity scores are preferable, as they indicate higher diversity.
**Standard** and _local_ initialisations exhibit similar diversities in both measures, while **GHN** initialisations show significantly lower diversity. The GHN produces the same set of weights for a fixed architecture, which results in more similar trained weights, reducing diversity.
To address this, our Noise GHN introduces diversification. We input a sampled vector into the Noise GHN decoder, and the modified loss function encourages low cosine similarity logits, ensuring diverse weights in each forward pass.
The effect is evident on the right-hand side of Figure 6. The Noise GHN initialisation yields the lowest cosine similarity and slightly improves on the GHN in prediction similarity. Notably, only the Noise GHN initialisation achieves cosine similarity scores below 0.8.
Figure 4: Boxplots displaying the resulting test set accuracies of 25 ResNet-20s per initialisation. The GHN yields the highest median accuracy on both datasets, especially on the PCam dataset, where all 25 GHN initialisations outperform every other initialisation.
The Noise GHN significantly enhances diversity, leading to a clear impact on the OOD ECE, as shown in Figure 7. It consistently outperforms the GHN across all five corruption levels.
Furthermore, the Noise GHN retains the advantages of global initialisation, surpassing the convergence speed achieved by GHN initialisation, as seen in Table 1 in the supplementary material. This improvement is due to the sampled noise, which enhances weight robustness, akin to techniques like Monte-Carlo dropout.
#### 4.0.5 Knowledge Transfer
We explore the ability of our methods to transfer knowledge between image distributions, particularly between CIFAR-10's natural images and PCam's medical data. Such knowledge transfer can be especially beneficial for low-data tasks. To this end, we consider a dataset of 1,000 training patches from PCam, with an additional 400 validation patches for model selection, and evaluate on a separate test dataset of 10,000 patches. We call this downscaled dataset PCam Small. All splits maintain a strict 50%/50% split of tumorous and benign patches. After initialisation, the ResNet-20s are trained for 40 epochs using our standard procedure. To account for the different dataset sizes, we adapt the learning rate schedule: after 20 epochs, the learning rate is halved, and after 30 epochs, it is reduced by a factor of five.
We compare the results to two types of baselines: He initialisation of a ResNet-20 and pretraining on CIFAR-10 for different numbers of epochs. We compare the performance of these baselines to a GHN and a Noise GHN, both of which have been trained on the CIFAR-10 dataset.
Figure 5: Boxplots of the resulting ECE of 20 ensembles, each consisting of five members, on the OOD CIFAR-C dataset.
Figure 7 displays the results, showing overall weaker performance compared to training on the full PCam dataset due to the smaller dataset and the distribution shift. The Noise GHN outperforms the other initialisations, supported by a significant t-test result (p\(<\) 0.0001) against all other initialisations. This result can be attributed to the noise injection, which makes the model more robust and prevents overfitting. This experiment indicates the Noise GHN's potential for knowledge transfer between different image distributions. For a comparison of all initialisation methods in this work, please refer to Figure 3 in the supplementary material.
Our results align with Knyazev's findings [25], showing GHN parameters can be fine-tuned on small, different datasets. However, their experiments include a less severe distribution shift. Our experiment's weaker GHN performance might be attributed to the changes we made in the training setup. Nevertheless, further research is required to fully understand the reasons behind it.
#### 4.0.6 Limitations
Our work focuses on evaluating different generative models with regard to their potential for the initialisation of networks. As a first step, we evaluate these new initialisation methods on in-distribution data. Compared to training a single network, this setup of training and deploying on the same dataset is very costly. However, the potential savings when applying our methods to various datasets would justify these costs.
Due to the immense computational power required for the training of the GHN by Knyazev [25], we downscaled the training setup. Thus, a meaningful comparison to the original network is not possible.
We train and evaluate our methods on two in-distribution datasets, one OOD dataset and one architecture. Additional experiments are needed to verify whether the advantages extend to different datasets and network architectures.
Currently, the Noise GHN falls short of the CIFAR-C ensemble performance of _local_ and standard initialisations. Nevertheless, we believe future large-scale experiments can further unlock its potential.
Figure 6: Pairwise similarities of trained weights for different initialisations. A lower similarity score (dark blue) is better, as it implies higher diversity. The GHN obtains the highest similarity measures, indicating missing diversity. The Noise GHN outperforms the GHN on both similarity measures and all other initialisations in terms of cosine similarity.
## 5 Conclusion
We explore incorporating additional knowledge into initialisations. Our study demonstrates that initialisations based on learned weight distributions with a _global_ **scope can offer significant advantages**. We show these advantages in terms of convergence speed and accuracy for feed-forward convolutional neural networks. We identify the key factor behind the benefits of the global initialisations to be the **synchronicity of the produced weights**. In contrast, _local_ initialisations lack synchronously produced weights.
This work also reveals that deterministic _global_ initialisations result in weaker OOD ensembling accuracy and calibration. However, the **introduction of the Noise GHN improves diversity and performance**. Further generalisation of our approach is especially intriguing due to the potential environmental and economic cost savings. We provide the groundwork for utilising _global_ **initialisations in the area of ensembling**, showcase their strengths on single networks, introduce non-deterministic global initialisations, and show that the Noise GHN might be able to transfer learned knowledge to different image distributions.
Figure 7: Comparison of the GHN and Noise GHN. Left: Due to the diversity of the Noise GHN, it improves upon the GHN over all levels of corruption. Right: The Noise GHN is able to transfer relevant knowledge from CIFAR-10 to the PCam Small dataset and outperform all other initialisations. We abbreviate pretraining on CIFAR-10 for five epochs as "PT 5", and similarly for other epoch counts.
2309.01063 | Semi-supervised 3D Video Information Retrieval with Deep Neural Network
and Bi-directional Dynamic-time Warping Algorithm | This paper presents a novel semi-supervised deep learning algorithm for
retrieving similar 2D and 3D videos based on visual content. The proposed
approach combines the power of deep convolutional and recurrent neural networks
with dynamic time warping as a similarity measure. The proposed algorithm is
designed to handle large video datasets and retrieve the most related videos to
a given inquiry video clip based on its graphical frames and contents. We split
both the candidate and the inquiry videos into a sequence of clips and convert
each clip to a representation vector using an autoencoder-backed deep neural
network. We then calculate a similarity measure between the sequences of
embedding vectors using a bi-directional dynamic time-warping method. This
approach is tested on multiple public datasets, including CC\_WEB\_VIDEO,
Youtube-8m, S3DIS, and Synthia, and showed good results compared to
state-of-the-art. The algorithm effectively solves video retrieval tasks and
outperforms the benchmarked state-of-the-art deep learning model. | Yintai Ma, Diego Klabjan | 2023-09-03T03:10:18Z | http://arxiv.org/abs/2309.01063v1 | # Semi-supervised 3D Video Information Retrieval with Deep Neural Network and Bi-directional Dynamic-time Warping Algorithm
###### Abstract
This paper presents a novel semi-supervised deep learning algorithm for retrieving similar 2D and 3D videos based on visual content. The proposed approach combines the power of deep convolutional and recurrent neural networks with dynamic time warping as a similarity measure. The proposed algorithm is designed to handle large video datasets and retrieve the most related videos to a given inquiry video clip based on its graphical frames and contents. We split both the candidate and the inquiry videos into a sequence of clips and convert each clip to a representation vector using an autoencoder-backed deep neural network. We then calculate a similarity measure between the sequences of embedding vectors using a bi-directional dynamic time-warping method. This approach is tested on multiple public datasets, including CC_WEB_VIDEO, Youtube-8m, S3DIS, and Synthia, and shows good results compared to the state-of-the-art. The algorithm effectively solves video retrieval tasks and outperforms the benchmarked state-of-the-art deep learning model.
3D Video Information Retrieval, Video Similarity Search, Unsupervised Learning, Semi-supervised Learning, Convolutional and Recurrent Neural Networks, End-to-end auto-encoder
## I Introduction
In recent years, the exponential growth in online video data has made efficient retrieval of visually similar videos increasingly challenging. To address this, we propose a novel semi-supervised deep learning framework for retrieving similar 2D and 3D videos.
Our video retrieval approach represents videos as sequences of embedding vectors generated by a deep neural network encoder. It begins by splitting both candidate and query videos into fixed-length consecutive clips. Each clip is fed into the encoder to produce a representation vector. The sequence of embedding vectors for the full video is then compared to candidate videos using a bidirectional dynamic time warping similarity measure. By transforming videos into informative embedding sequences, and leveraging deep neural networks with dynamic time warping, our framework can effectively retrieve visually similar 2D and 3D videos.
A video embedding is created through the use of deep convolutional and recurrent neural networks, which are designed to extract rich and discriminative features from video clips. To optimize the video retrieval performance, we adopt a two-stage training approach. First, we pre-train the model unsupervised on a larger and relevant video dataset. Second, we fine-tune the model with a triplet loss function in a supervised manner, further enhancing its ability to perform video retrieval.
Finally, we compute the similarity between the sequences of embedding vectors between the query video and candidate videos using a variant of the dynamic time warping method. The bi-directional Dynamic Time Warping (Bi-DTW) method has been employed as a means to address the limitations of the standard Dynamic Time Warping (DTW) in video embedding matching. The standard DTW algorithm, although effective at time series alignment, operates only in a single, forward direction. This unidirectional characteristic can potentially limit the accuracy of its matching results, particularly in applications such as video retrieval where the temporal structure of the data is complex and multidimensional. To mitigate these limitations, Bi-DTW was developed, with a distinguishing feature being its capacity to facilitate matching from both forward and backward directions. The rationale for this approach is rooted in the intuition that different portions of a video may align more effectively when approached from various temporal perspectives. Therefore, by integrating both forward and backward alignments, Bi-DTW improves the accuracy of video retrieval by ensuring a more robust, comprehensive temporal match. It presents a significant improvement over the conventional DTW, especially in complex, time-structured applications like video retrieval where the objective is to maximize the accuracy of the matching results. This allows us to determine the degree of similarity between the two videos, and effectively rank the
Fig. 1: The proposed video representation learning pipeline first splits videos into short clips at a fixed number of frames. Then the proposed method converts clips to embedded feature vectors by a deep neural network encoder. Finally, we store all feature vectors in a database and use a bi-directional dynamic time-warping method to retrieve a list of candidates.
candidate videos based on their relevance to the query video.
We evaluate the proposed method on several publicly available datasets, including CC_WEB_VIDEO, Youtube-8m, S3DIS, and Synthia, and show good results in comparison to state-of-the-art.
### _Contributions_
We present a cutting-edge approach to video information retrieval by introducing a bi-directional dynamic time-warping method for determining the similarity between video inquiries. This innovative technique effectively tackles the temporal dimension of videos, resulting in improved accuracy and efficiency of the retrieval process.
Furthermore, this research encompasses the handling of 3D video inquiry, introducing both a novel 3D network architecture that expands upon 2D video information retrieval models and a method to incorporate 3D video data as an additional depth layer. This comprehensive framework offers a more robust solution for retrieving 3D video data.
We also introduce a sample retraining method that effectively addresses the challenge of handling difficult video pairs during the training phase. This method up-samples previously under-studied data points, ultimately leading to improved performance and accuracy.
The proposed method in this paper demonstrates a remarkable advancement in video information retrieval, offering the potential to significantly enhance video search and retrieval systems across a wide range of applications. Our findings make video search more accessible and valuable to a broad range of users.
## II Related Works
The field of video information retrieval has been the subject of extensive research in recent years, due to the growing demand for effective and efficient methods for searching and retrieving video content. A wide range of techniques and approaches have been proposed to address the challenges associated with video retrieval, including traditional methods based on feature extraction and machine learning, as well as more recent deep learning-based methods. The literature in this area is vast, encompassing a wide range of topics, including the representation and encoding of video data, the development of effective similarity measures, and the handling of temporal and 3D information in video data. In this literature review, we provide an overview of the current state-of-the-art in video information retrieval, highlighting key contributions and developments in the field, and pointing to similarities and differences with our work.
Seq2seq models are built on simple recurrent neural network encoders and decoders, such as LSTMs [1] and GRUs [2]. Studies on the hierarchy of these encoders show that better encoder networks lead to better results [3]. The proposed model builds upon this foundation, further advancing the encoder networks to enhance the overall performance. Consequently, in this work, we improve the state-of-the-art in this area by developing a deep convolutional and recurrent model using recent developments in the related areas.
**Transformer** architecture has been increasingly popular in recent years. It was applied to understand video data and perform video retrieval. The transformer, initially proposed for language understanding tasks, is a powerful architecture capable of capturing long-range dependencies and handling sequential data. One of the key features of the transformer is the self-attention mechanism, which allows the model to weigh the importance of different parts of the input data when making predictions. Several studies have applied transformer-based architectures to video understanding tasks such as action recognition, video captioning, and video retrieval. For example, Girdhar proposes a transformer-based architecture for video action recognition [4]. Similarly, Im and Choi propose a transformer-based model for video captioning [5]. The transformer has proven to be particularly useful in video understanding tasks, where the model needs to understand the relationships between different frames in a video. Our proposed method builds on the strengths of the transformer and further improves upon it by utilizing bi-directional dynamic time-warping to enhance the accuracy of video retrieval.
**Video Retrieval.** There is a variety of retrieval tasks and definitions in the multimedia community concerning the video retrieval problem. These vary with respect to the degree of similarity that determines whether a pair of videos are considered related and range from Near-Duplicate Video Retrieval (NDVR) with a very narrow scope where only almost identical videos are considered positive pairs [6], to very broad where videos from the same event [7] in Event Video Retrieval (EVR) or with the same semantics [8] are labeled as related. In the copy detection problem, given a query video, only videos containing nearly identical copies of it should be retrieved. Similar videos from the same incident should be considered irrelevant in such a scenario. On the other hand, problems such as news-oriented retrieval have radically different needs. Deep learning methods have been applied to certain applications such as face video retrieval [9] or as a general method [10]. As an extension to the 3D case, Deng has proposed a pipeline to handle the 3D video data for retrieval purposes [11]. However, there does not seem to be a strong consensus among researchers about the cases where the videos are all unlabeled, and the task is to retrieve videos sharing scenarios and semantic meanings. While there are a variety of retrieval tasks and definitions in the multimedia community, the proposed approach is a unified framework that can handle both 2D and 3D cases, therefore we focus on maximizing accuracy in video retrieval tasks where the temporal structure of the data is complex and multidimensional.
**Convolutional Neural Networks (CNNs).** CNNs have been successfully applied to many tasks related to similarity video search [12, 13, 14, 15]. Unlike plain deep neural networks (DNNs), CNNs effectively exploit the structural locality in the spectral feature space. CNNs use filters with shared weights and pooling to give the model better spectral and temporal invariance properties, and they typically generate more stable features than DNNs. Recent deep CNNs also show their superiority in image-related tasks compared to previous CNNs. Deep CNNs have been used for many tasks relating to video, such as predicting the next frame [16], learning invariant features from video [1, 17], or video classification [12]. Many training and modeling tricks, such as residual networks [18], have been developed to enable training of such deep networks. While CNNs have been applied to many tasks related to similarity video search,
the proposed model takes advantage of bi-directional dynamic time-warping and recurrent neural networks to better capture the temporal nature of videos and improve the retrieval process.
**Traditional Image Retrieval** has been extensively explored as query-by-example or near-duplicate detection, with high potential for the medical community [19, 20]. In the literature, many research publications in image retrieval are based on non-deep-learning methods. A competition for image-based retrieval was organized between 2004 and 2013. That setting differs from the one addressed in this work because its tasks were defined with 1-7 sample images accompanied by text. In the 2013 edition [21], the best textual run by Herrera [22] achieved the same performance as the best technique using both textual and visual features. As in previous years, visual-only approaches achieved much lower performance than textual and multimodal techniques. The best visual-based solution is based on the Color and Edge Directivity Descriptor (CEDD), a fuzzy color and texture histogram, and a Color Layout Descriptor [23]. Content-based image retrieval in the medical domain has been addressed from low-level wavelet-based visual signatures [24] to high-level concept detectors [25]. However, these methods are explicitly designed with specific domain knowledge. The proposed model extends these techniques to the video retrieval problem and further enhances them with bi-directional dynamic time-warping.
**Auto-encoder** has a long history [26] in pre-training artificial neural networks and is widely used in recent models [27]. Although this concept is rarely used by other deep learning models in this area, we found it to be fundamentally important for our semi-supervised learning purpose. In the proposed approach, we leverage this concept for semi-supervised learning in video retrieval tasks.
**Representation Learning.** Much of computer vision is about learning representations, such as high-level image classification [28], object relationships [29], or point-wise correspondences [30, 31, 32]. However, there has been relatively little work on learning representations for aligning the content of different videos. In this work, we identify similar videos in the dataset, which essentially aligns the content of two videos in a self-supervised manner, performing automatic alignment of the visual data without any additional supervision. We extend representation learning to this alignment task, thanks to the proposed bi-directional dynamic time-warping method.
**ConvLSTM.** Shi introduces the convolutional LSTM (ConvLSTM) as an extension of the original LSTM, where the inner product is replaced by a convolution operation in both input-to-state and state-to-state transitions [33]. ConvLSTM effectively maintains structural representations in the output and cell state. It has been shown to be a better tool than a fully connected LSTM layer for maintaining structural locality, and it is less prone to overfitting. Besides, it reduces the number of parameters within the layer and enables potentially more computation for better generalization. The ConvLSTM has been a substantial tool for maintaining structural locality and preventing overfitting. In our research, we additionally utilize the bi-directional dynamic time-warping approach, offering a more comprehensive temporal match and enhancing the video retrieval accuracy.
## III Model
This paper unveils a new deep learning model tailored for both 2D and 3D similar video retrieval, marking advancements in the field. Our model is founded on a customized autoencoder structure, uniquely incorporating Convolutional LSTM (ConvLSTM), residual connections, and transformer blocks to form an efficient 2D sequence-to-sequence autoencoder.
Within the presented architecture for 2D video retrieval, it is imperative to discern between the established methodologies and the novel introductions that are part of our research contribution. Starting off, Block R, which utilizes the ConvLSTM layer, is a traditional approach and is not new. In contrast, we are the first to propose the LRBP block. The rationale behind adopting a bi-directional version of the ConvLSTM in the LRBP block is to adeptly capture the temporal dynamics in video data, especially those that might possess a reversible nature. Our empirical findings corroborate that the LRBP block manifests superior performance relative to its non-bi-directional counterpart, which is epitomized by the URB block. While the URB and UQB blocks remain a typical technique in employing ConvLSTM for video data, we introduce the amalgamation of the Quasi 4D CNN within an autoencoder structure through the UQB block. This innovation holds promising potential in transforming how visual temporal structures are perceived. Meanwhile, the UTB block, which leverages the transformer for embedding processing, does not offer a fresh perspective within our framework. We are the first to combine the blocks LRBP and UQB in this form.
### _2D Seq2seq Autoencoder_
This paper proposes a 2D sequence-to-sequence autoencoder for similar video retrieval. We first define several basic neural network modules as blocks to simplify the explanations. Block R, shown in Figure 2(a), consists of a convolutional long short-term memory (ConvLSTM) layer with a residual connection and a LeakyReLU activation function. The LRBP block, defined in Figure 2(b), connects a block R with a pooling layer and a batch normalization layer. The encoder uses this block to compress the input into a lower dimension. The URB block, defined in Figure 2(c), connects an up-sampling layer with block R and a batch normalization layer. The decoder uses this block to restore the embedding to a higher dimension. There is no residual connection between the layers of the encoder and decoder, which ensures the embedding vector after the encoder is a representation of the input video. To improve the generalization of the proposed model and enhance training speed, we include a residual connection within each block of LRBP and URB.
In addition to the URB block, we propose two other types of blocks to support the decoder: the UQB block and the UTB block. These blocks are used to handle the extra dimensionality in the 3D case. The UQB block replaces the ConvLSTM layer in the URB block with a Quasi 4D CNN layer, and the UTB block replaces the ConvLSTM layer with a transformer layer.
We trained the autoencoder to recover the original video and used the following loss function during training. In the autoencoder architecture, we have used a combination of the LRBP block in the encoder and the URB block in the decoder,
as shown in Figure 3. Moreover, Figure 4 illustrates the comprehensive function of the proposed autoencoder architecture:
\[L_{\text{autoencoder}}=\mathcal{L}\left(v,h_{\tau}(f_{\theta}(v))\right) \tag{1}\]
where \(v\) is an input video, \(f_{\theta}(\cdot)\) is the encoder, \(h_{\tau}(\cdot)\) is the decoder and \(\mathcal{L}(\cdot,\cdot)\) represents a loss function. Hence \(h_{\tau}(f_{\theta}(v))\) is the recovered video generated by the autoencoder.
In our 2D video retrieval architecture, the LRBP and UQB blocks stand out as our primary contributions. The LRBP block employs an advanced bi-directional ConvLSTM, adeptly capturing intricate video dynamics. In contrast, the UQB block, incorporating its Quasi 4D CNN, brings forward a unique representation technique. This UQB innovation is not limited to 2D but will be expanded upon in 3D contexts. Collectively, these innovations significantly enhance video embedding, optimizing the retrieval process.
### _3D Seq2seq Autoencoder_
We also propose a 3D sequence-to-sequence autoencoder, an extension of the 2D sequence-to-sequence autoencoder model, which can be trained unsupervised for pretraining and fine-tuning with a supervised dataset. We are going to define three different variants for the proposed 3D model, namely M1-3D, M2-3D, and M3-3D.
In Table I, we describe the 2D and 3D architectural differences among the six proposed 2D and 3D models. This block-based framework allows flexibility in the architecture and enables us to experiment with different combinations of blocks to achieve the best performance for similar video retrieval. The major differences between these models are as follows: M1 is the 2D baseline model, M2 adds a Quasi 4D CNN layer in the decoder, and M3 adds a transformer unit in the decoder. M1-3D is the 3D baseline model, using a 3D ConvLSTM cell in the autoencoder's encoder and decoder parts. This allows the model to effectively extract the important features from the 3D video data and convert them into a compact and informative embedding for similar video retrieval. The second variant, M2-3D, replaces the 3D ConvLSTM cell with a Quasi 4D CNN layer for handling the 3D video reconstruction. The last variant, M3-3D, uses a transformer unit in the decoder.
For the 3D models, a series of unique building blocks are employed: L3RBP, R3BP, and U4DB. The L3RBP is our signature contribution, introducing a bi-directional connection to the 3D ConvLSTM. This innovative structure is tailored explicitly for video inquiry, and it stands as a natural 3D progression of the LRBP block that we pioneered for 2D scenarios within this paper. On the other hand, the R3BP block harnesses the 3D ConvLSTM layer in a manner that aligns more with conventional techniques of video data processing. The U4DB block, conceptualized as the 3D iteration of the UQB block, primarily hinges on the 4D CNN layer. While the underpinning 4D CNN mechanism has been explored by others in the realm of video data, our approach offers a unique blend that synergizes seamlessly with our overarching model architecture. We are the first to employ the L3RBP block in such a form.
Fig. 2: These figures show the model components for the 2D cases. (a) The structure of block R, containing a residual-connected ConvLSTM layer and a LeakyReLU activation function. (b) The structure of block LRBP. (c) The structure of block URB. (d) The structure of block UQB. (e) The structure of block UTB.
Fig. 3: (a) The encoder architecture. (b) The decoder architecture. The input video has three frames and the representation has three vectors.
Fig. 4: Illustration of how the encoder and decoder work together as an autoencoder, using LRBP and URB blocks as examples. The number of LRBP and URB blocks can be adjusted for better performance.
Fig. 5: These figures show the model components for the 3D cases. (a) The structure of block L3RBP. (b) The structure of block R3BP. (c) The structure of block U4DB.
In the realm of 3D video retrieval, the new models introduce innovative adaptations that advance the field in two significant ways. Firstly, compared to 2D models, the 3D models harness the added depth dimension of videos to provide richer and more accurate representations of their content. This is achieved via a novel incorporation of 3D Convolutional LSTM cells, 3D CNN and Quasi 4D CNN layers into the autoencoder's encoder and decoder structure. The adaptation captures temporal correlations across frames more efficiently, leading to a more informative embedding for video similarity retrieval.
The second innovation is a distinct approach in dealing with the challenges posed by other state-of-the-art 3D video retrieval methods. While many existing techniques struggle to maintain performance with increasing video dimensionality, the proposed models, especially M2-3D and M3-3D, demonstrate robustness in handling higher-dimensional data. M2-3D substitutes the 3D ConvLSTM cell with a Quasi 4D CNN layer, introducing a novel way to manage 3D video reconstruction. The M3-3D model introduces a transformer unit in the decoder, a significant leap that helps model long-range temporal dependencies, offering superior performance in similarity retrieval tasks.
Conclusively, these enhancements furnish our 3D models with the capability to set pioneering standards for similar video retrieval. Their robust architecture and performance not only outshine the 2D models but also firmly position them at the forefront, rivaling other contemporary methods in the domain.
## IV Algorithm
This section presents the problem setting of our video embedding learning problem. We define a distance metric for video embeddings that a triplet loss can train. We then propose the bi-directional dynamic time warping algorithm to convert the embedding into a distance metric.
### _Problem Setting_
We begin by addressing the problem of learning a pairwise similarity function for similar video retrieval from the relative information of pair/triplet-wise video relations. For a given query video and a set of candidate videos, the goal is to compute the similarity between the query and every candidate video and use it for ranking the entire set of candidates, in the hope that similar videos are retrieved at the top ranks. To formalize this process, we define the similarity between two arbitrary video clips \(q\) and \(p\) as the squared Euclidean distance in the video embedding space.
\[D(f_{\theta}(q),f_{\theta}(p))=\|f_{\theta}(q)-f_{\theta}(p)\|_{2}^{2} \tag{2}\]
where \(f_{\theta}(\cdot)\) is the embedding function that maps a video to a point in a Euclidean space, and \(\theta\) are the system parameters. Additionally, we define a pairwise indicator function \(I(\cdot,\cdot)\), specifying whether a pair of videos is near-duplicated. Formally, \(I(q,p)=1\) if \(q,p\) are NDVs (near-duplicate videos) and 0 otherwise.
The proposed model's objective is to learn an embedding function \(f_{\theta}(\cdot)\) that assigns smaller distances to similar video pairs than to non-similar ones. Given a video with feature vector \(v\), a similar video \(v^{+}\) and a dissimilar video \(v^{-}\), the embedding function \(f_{\theta}(\cdot)\) should map video representations to a common space \(\mathbb{R}^{d}\), where \(d\) is the dimension of the feature embedding, in which the distance between query \(v\) and positive \(v^{+}\) is always smaller than the distance between the query \(v\) and negative \(v^{-}\):
\[D(f_{\theta}(v),f_{\theta}(v^{+}))<D(f_{\theta}(v),f_{\theta}(v^{-})), \tag{3}\]
where \(I(v,v^{+})=1\) and \(I(v,v^{-})=0\) for all \(v,v^{+}\) and \(v^{-}\).
### _Triplet Loss_
We use triplet loss to train the model and learn the above distance mapping function as a neural network. We define a collection of \(N\) training instances in the form of triplets \(T=\{(v_{i},v_{i}^{+},v_{i}^{-}),i=1,...,N\}\) where \(v_{i},v_{i}^{+},v_{i}^{-}\) are feature vectors of a video, a similar positive video clip, and a negative video clip. A triplet expresses a relative similarity order among the three videos. We define the following hinge loss function for a given triplet:
\[L_{\theta}(v_{i},v_{i}^{+},v_{i}^{-})=\max\{0,D(f_{\theta}(v_{i}),f_{\theta}( v_{i}^{+}))-D(f_{\theta}(v_{i}),f_{\theta}(v_{i}^{-}))+\tau\} \tag{4}\]
where \(\tau\) is a margin parameter to ensure a sufficiently large difference between the positive and negative distances. This margin parameter also affects how the model is penalized if there is a violation for the desired triplet distance property. Finally, we use batch gradient descent to optimize the objective function described as triplet loss
\[\min_{\theta}\sum_{i=1}^{N}L_{\theta}(v_{i},v_{i}^{+},v_{i}^{-})+\lambda\| \theta\|_{2}^{2} \tag{5}\]
where \(\lambda\) is a regularization parameter to prevent over-fitting of the model, and \(N\) is the total size of a triplet mini-batch. This triplet loss is also visualized in Figure 6. Minimizing this loss should narrow the query-positive distance while widening the query-negative distance, and thus lead to a representation satisfying the desirable ranking order. With an appropriate triplet generation strategy in place, the model should eventually
learn a video representation that improves the effectiveness of the relevant video retrieval solution.
\begin{table}
\begin{tabular}{l l l l l} \hline Model No. & 2D/3D & Encoder & Decoder & Diff \\ \hline M1 & 2D & 3 LRBP & 3 URB & 2D Baseline \\ M2 & 2D & 3 LRBP & 3 UQB & + Quasi 4D CNN \\ M3 & 2D & 3 LRBP & 3 UTB & + Transformer \\ \hline M1-3D & 3D & 3 L3RBP & 3 R3BP & 3D Baseline \\ M2-3D & 3D & 3 L3RBP & 3 U4DB & + Quasi 4D CNN \\ M3-3D & 3D & 3 L3RBP & 3 UTB & + Transformer \\ \hline \end{tabular}
\end{table} TABLE I: Comparison of encoder and decoder blocks in all models
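A minimal sketch of the per-batch triplet objective of equations 4 and 5 is given below; the L2 penalty of equation 5 is assumed to be handled by the optimizer's weight decay:

```python
import torch

def triplet_hinge_loss(z_q, z_pos, z_neg, tau=1.0):
    """Hinge loss of equation 4 on batches of embedded clips.
    z_q, z_pos, z_neg: (N, d) embeddings of query, positive and negative videos."""
    d_pos = torch.sum((z_q - z_pos) ** 2, dim=1)   # D(f(v), f(v+))
    d_neg = torch.sum((z_q - z_neg) ** 2, dim=1)   # D(f(v), f(v-))
    return torch.clamp(d_pos - d_neg + tau, min=0.0).sum()
```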
### _Training Scheme_
We assume we have a training set of videos \(\{v_{i}|i\in X\}\) together with object classification already performed on a subset \(X_{c}\subset X\). To this end, each \(\{v_{i}|i\in X_{c}\}\) has a corresponding class label. As described in Algorithm 1, we use a four-step training scheme to optimize the performance of the proposed autoencoder-based video retrieval model. In step 2, a single sample \(i\in X_{c}\) can yield multiple triplets, hence the need for \(\tilde{X}_{c}\). Step 4 intentionally increases the sample count of under-studied data to encourage the model to learn better on the difficult part of the dataset. The output of the algorithm is the semi-supervised trained encoder.
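Since the listing of Algorithm 1 is not reproduced here, the following sketch reconstructs the four-step scheme from the surrounding description; the pre-training and fine-tuning steps follow the two-stage strategy described in the introduction, and all helper functions are placeholders rather than the authors' implementation:

```python
def train_video_encoder(videos, labelled_ids, labels):
    """Step 1: unsupervised autoencoder pre-training on all videos.
    Step 2: triplet generation from the classified subset X_c (a single
            sample can yield multiple triplets).
    Step 3: supervised fine-tuning of the encoder with the triplet loss.
    Step 4: up-sample hard (under-studied) triplets and re-train."""
    encoder = pretrain_autoencoder(videos)                          # step 1
    triplets = build_triplets(labelled_ids, labels)                 # step 2
    encoder = finetune_with_triplet_loss(encoder, triplets)         # step 3
    hard = [t for t in triplets if triplet_violation(encoder, t)]   # non-zero hinge loss
    encoder = finetune_with_triplet_loss(encoder, triplets + hard)  # step 4
    return encoder
```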
### _Video Similarity Search_
Note that each video is represented as a sequence of frames or clips. We denote a video as \(v\), and each frame as \(z_{i}\), where \(i\in M\) and \(M\) is the number of frames in the video. Alternatively, we can represent a video as a sequence of clips, denoted as \(\textbf{p}_{i}\), where \(i\in N_{x}\) and \(N_{x}=\lfloor\frac{M}{K}\rfloor\). The clips can be formed in two ways: disjoint consecutive frames or overlapping consecutive frames.
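A minimal sketch of the two clip-formation strategies (disjoint versus overlapping consecutive frames) is shown below; the stride parameter is our illustrative way of switching between them:

```python
def split_into_clips(frames, k, stride=None):
    """Split a video (a list of M frames) into clips of K consecutive frames.
    stride == k gives disjoint clips; stride < k gives overlapping clips."""
    stride = stride or k
    return [frames[i:i + k] for i in range(0, len(frames) - k + 1, stride)]
```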
We then compute an embedded representation vector, \(v^{p}\), for each clip of video \(v\) by mapping it to a vector using the encoder. We use a dynamic time-warping (DTW) method to measure similarity between two videos. This method is commonly used in time series analysis to measure the similarity between two temporal sequences. The DTW distance between video clips \(v_{1}\) and \(v_{2}\) is solved as a dynamic programming problem, with the core state transfer formula being
\[D_{i,j}:=\|v_{1}^{i}-v_{2}^{j}\|_{2}^{2}+\min(D_{i-1,j},D_{i,j-1},D_{i-1,j-1}). \tag{6}\]
In order to provide a comprehensive solution to the video embedding matching problem, we extend the traditional dynamic time warping (DTW) algorithm to a bi-directional dynamic time warping (Bi-DTW) algorithm. Bi-DTW is distinguished by its ability to perform matching in both the forward and backward directions, improving the accuracy of the video retrieval system. The Bi-DTW algorithm computes both \(DTW(v_{1},v_{2})\) and \(DTW(\text{reverse}(v_{1}),\text{reverse}(v_{2}))\), where \(\text{reverse}(v)\) is the sequence of clips in reverse order, and outputs \(\min(DTW(v_{1},v_{2}),DTW(\text{reverse}(v_{1}),\text{reverse}(v_{2})))\). Hence Bi-DTW, with its dual-directional operation, not only determines the degree of similarity between two videos more effectively, but also enhances the ranking of candidate videos. By comparing relevance from both the original and reversed sequences of the query video, it ensures a more comprehensive matching, thereby providing a superior and more versatile solution for the video retrieval task.
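A minimal NumPy sketch of the DTW recursion of Eq. (6) and the Bi-DTW wrapper is given below, assuming each video is a sequence of clip-embedding vectors.

```python
import numpy as np

def dtw(seq1, seq2):
    """DTW distance between two clip-embedding sequences, Eq. (6)."""
    n, m = len(seq1), len(seq2)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sum((seq1[i - 1] - seq2[j - 1]) ** 2)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def bi_dtw(seq1, seq2):
    """Bi-DTW: minimum over the forward and reversed alignments."""
    return min(dtw(seq1, seq2), dtw(seq1[::-1], seq2[::-1]))
```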
## V Numerical Experiments
In this section, we present experimental results, which demonstrate the effectiveness of the proposed method in terms of retrieval accuracy and computational efficiency when compared to the state-of-the-art methods.
### _Datasets_
We have conducted numerical experiments on the CCWebVideo, YouTube-8M-sub, S3DIS, and Synthia-SF public datasets to evaluate the performance of the proposed model. The CCWebVideo dataset includes 12,790 videos, of which 27% are near-duplicates. The YouTube-8M dataset is a large-scale video dataset with more than 7 million videos in 4,716 classes; the YouTube-8M-sub dataset we use is a random subset of YouTube-8M with around 100 videos from each class. The Stanford 3D Indoor Scene (S3DIS) dataset contains 6 large-scale indoor areas with 271 rooms, and each point in the scene point cloud is annotated with one of 13 semantic classes. The Synthia dataset is a synthetic dataset consisting of 9,400 multi-viewpoint photo-realistic frames rendered from a virtual city, with pixel-level semantic annotations for 13 classes; Synthia-SF is the subset that covers San Francisco. Table II summarizes the number of samples in each dataset. For the 3D datasets, we concatenate the depth information as an additional input dimension to the proposed 3D video auto-encoder.
Fig. 6: Illustration of a triplet loss in the proposed framework.
### _Implementation_
We implemented the model using the TensorFlow 1.15 framework and trained it on NVIDIA 3070 GPUs or equivalents. The model was trained with mini-batch sizes of 32 and 8 clips for the 2D and 3D datasets, respectively, using 4 GPUs in parallel. We employed the softmax loss function for the auto-encoder and applied L2 regularization with a penalty coefficient of 0.001 to the model's trainable parameters, which were initialized using Xavier initialization.
The model was optimized using stochastic gradient descent (SGD) with momentum 0.9 and an initial learning rate of 0.001, decayed by a factor of 10 every 10 epochs. Training ran for 50 epochs, with early stopping after 5 epochs without improvement.
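For reference, this optimizer configuration can be sketched in TensorFlow 1.15 as follows; the steps-per-epoch value is an assumption, since it depends on the dataset and batch size.

```python
import tensorflow as tf  # TensorFlow 1.15, as used in the paper

steps_per_epoch = 1000   # assumption: depends on dataset and batch size
global_step = tf.train.get_or_create_global_step()
# Decay the learning rate by a factor of 10 every 10 epochs.
learning_rate = tf.train.exponential_decay(
    0.001, global_step,
    decay_steps=10 * steps_per_epoch,
    decay_rate=0.1, staircase=True)
optimizer = tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
```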
For the seq2seq auto-encoder, sequence-wise normalization was applied across the video sequences within each mini-batch: for each output channel, we computed the mean and variance statistics across all timesteps in the mini-batch. All activation functions are ReLU.
The encoder contains 3 LRBP layers and a dense layer. The first LRBP block took an input tensor of size 256×256×3. The BidirectionalConvLSTM layer employed a 3×3 kernel, a stride of 1, and had 64 hidden states. The subsequent ResConvLSTM used the same kernel size, padding, and stride but contained 32 hidden states. The output tensor after the residual connection was of size 256×256×96, which was reduced to 128×128×16 after pooling and further to 32×32×16 after the third LRBP block. This tensor was mapped to a 4000-entry embedding vector by a dense layer. In the Quasi 4D CNN, we used a 3×3×3 kernel. The transformer was applied to the embedding vectors and consisted of 5 self-attention layers, each with 3 attention heads; its hidden size was 512 and its intermediate size was 2048.
The URB decoder contains three URB blocks. The embedding vector was first transformed by a dense layer to 32×32×16 before entering the first URB block. Each URB block's ResConvLSTM utilized a 3×3 kernel, a stride of 1, and had 32 hidden states. The final ConvLSTM layer in the decoder had 3 filters, producing an output tensor of size 256×256×3.
The raw videos varied in length and resolution. We normalized them to 30 FPS, resized them to a height of 256 pixels while keeping the aspect ratio, cropped the central 256×256 region to obtain square RGB frames, and standardized the values to zero mean and unit variance across all videos. This standardized the data in terms of frame size, frame rate, and value range, allowing deep learning models to be trained effectively on the dataset.
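A sketch of this per-frame preprocessing is given below; the use of OpenCV for resizing is an assumption, as the paper does not name a library.

```python
import numpy as np
import cv2  # assumption: the paper does not name a decoding/resizing library

def preprocess_frame(frame):
    """Resize to height 256 keeping aspect ratio, then centre-crop a
    256x256 region (assumes landscape input, as in the datasets used)."""
    h, w = frame.shape[:2]
    new_w = int(round(w * 256 / h))
    frame = cv2.resize(frame, (new_w, 256))
    x0 = (new_w - 256) // 2
    return frame[:, x0:x0 + 256]

def standardize(frames, mean, std):
    """Zero-mean, unit-variance scaling; mean/std are computed once
    across all videos in the dataset."""
    return (np.asarray(frames, dtype=np.float32) - mean) / std
```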
### _Benchmarks_
For the 2D video retrieval experiments, we compare the proposed models with a non-deep-learning baseline, which uses the Scale Invariant Feature Transform (SIFT) to extract features from each frame in a clip and measures similarity between query and candidate clips using a bag-of-words method. Additionally, we compare the proposed models to the state-of-the-art deep-learning benchmark algorithm, NDVR-DML [10], which utilizes a convolutional neural network and a deep metric learning (DML) framework to generate discriminative global video representations and to approximate an embedding function for accurate distance calculation between near-duplicate videos.
For the 3D video retrieval experiments, we use an autoencoder as the benchmark model, with 3D CNN layers as the encoder and decoder. This 3D autoencoder is a deep learning-based model that effectively extracts important features from the 3D video data and converts it into a compact and informative embedding for similar video retrieval. The architecture of this 3D autoencoder is similar to the proposed 3D models, with the main difference being the use of 3D CNN layers as opposed to the 3D ConvLSTM cell, Quasi 4D CNN layer, or transformer unit used in the proposed models.
### _Evaluation Metrics_
For evaluation, we use Mean Average Precision (mAP) to measure the quality of the retrieval results. mAP is a commonly used evaluation metric in information retrieval: it averages the precision of the retrieved results for a given query and is based on a binary relevance scale of 0 (non-relevant) or 1 (relevant).
The mean average precision is formally stated as \(mAP=\frac{1}{n}\sum_{i=1}^{n}\frac{i}{r_{i}}\), where \(n\) is the number of relevant videos to the query video, and \(r_{i}\) is the rank of the \(i\)th retrieved relevant video [6].
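The metric can be computed directly from the ranks of the retrieved relevant videos, as in the following sketch.

```python
def mean_average_precision(all_ranks):
    """mAP over queries; all_ranks[q] lists the 1-based ranks r_i of the
    relevant videos retrieved for query q."""
    def average_precision(ranks):
        n = len(ranks)
        # (1/n) * sum_i i / r_i, with ranks sorted in ascending order.
        return sum((i + 1) / r for i, r in enumerate(sorted(ranks))) / n
    return sum(average_precision(r) for r in all_ranks) / len(all_ranks)
```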
In order to evaluate the effectiveness of the proposed video retrieval methods, we carefully constructed a test set by randomly cropping the original training videos and concatenating the new clips. The test queries were formed by randomly selecting a set of frames from a video in the training set, with the original video serving as the query's ground truth. To ensure that the test queries differed from the database records, the test set generation process includes both consecutive frames and gaps between frames in the queries. This simulates real-world scenarios where the input query may be similar but not identical to the database records. The evaluation criteria include successful retrieval by clip and by class, with clip-level retrieval being the more stringent measure of performance.
### _Process_
Training consists of four steps, as described in Algorithm 1, which optimizes the performance of the autoencoder-based video retrieval model. This scheme includes pre-training the autoencoder on a large video dataset, training with a triplet loss on a multiset derived from the training set, computing the similarity measure, and finally fine-tuning with a second round of triplet-loss training on a mix of challenging and general training samples.
For inference, the dataset is split 70/15/15 into train, validation and test sets. Model efficacy is evaluated under two scenarios: by class and by clip. For class-level evaluation, validation clips are encoded into embeddings and compared
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Dataset & Type & No. of classes & No. of videos & Avg. video length \\
\hline
CCWebVideo & 2D & 24 & 533 & 265s \\
YouTube-8M-sub & 2D & 1000 & 100 & 3000s \\
S3DIS & 3D & 6 & 1600 & 251s \\
Synthia-SF & 3D & 6 & 417 & 83s \\
\hline \hline
\end{tabular}
\end{table} TABLE II: Descriptions for all four datasets.
against the training-set embeddings. If the retrieved training videos match the class of the query, precision is 1. For clip-level evaluation, precision is 1 only if the matched video contains the exact query clip. This matching process is repeated for all validation videos and compared to the training set to compute metrics such as mean average precision.
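The class- and clip-level checks described above can be sketched as follows; the metadata field names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def nearest_neighbour_precision(query_emb, query_info, train_embs,
                                train_info, level="class"):
    """Precision of the top-1 retrieved training video for one query."""
    dists = np.linalg.norm(np.asarray(train_embs) - query_emb, axis=1)
    best = int(np.argmin(dists))
    if level == "class":
        # Class-level: the retrieved video must share the query's class.
        return float(train_info[best]["class"] == query_info["class"])
    # Clip-level: the retrieved video must contain the exact query clip.
    return float(query_info["clip_id"] in train_info[best]["clip_ids"])
```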
## VI Results
In this section, we present the experimental results of our proposed models on 2D and 3D datasets, comparing their performance against various benchmarks. We use mean average precision to assess the models' effectiveness in video clip retrieval, both by clip and by class. Furthermore, a series of ablation studies helps us understand the impact of different training techniques, loss functions, and the dynamic time warping method on the performance of our proposed models.
### _Results for 2D Datasets_
We report the numerical results as mAP for the 2D datasets in Table III. The proposed models show superior performance on the 2D datasets in terms of video clip retrieval accuracy, both by clip and by class.
From Table III, it can be seen that the proposed models M1, M2, and M3 all perform better than the non-deep-learning SIFT baseline, with M3 achieving the highest mean average precision on nearly all dataset and aggregation combinations. Additionally, the models are competitive with the deep-learning NDVR-DML benchmark, with M3 achieving comparable or better results on all metrics. Overall, these results demonstrate the effectiveness of the proposed video retrieval models and their ability to achieve high levels of performance across datasets and evaluation metrics.
For the CCWebVideo dataset using model M3, 14.8% of validation samples failed class-level retrieval. Upon inspection, around 32% of these failures retrieved videos of the wrong class but with high visual similarity. For example, Fig. 7 shows a comedy clip incorrectly matched to the music class; as evident in Fig. 8, comedy and music clips can appear visually similar despite belonging to different semantic classes. This implies the encoder failed to sufficiently differentiate some inter-class nuances. Introducing the bi-directional dynamic time warping (Bi-DTW) method significantly boosted retrieval accuracy for certain classes: animation clips improved from 92% to 98% class-level mAP, owing to Bi-DTW better handling the symmetric motions of characters, as exemplified in Fig. 9.
Overall, the analysis indicates room for improvement in handling inter-class visual similarities. Bi-DTW conferred notable gains for select classes by leveraging bidirectional temporal matching. Further inspection of embedding projections and retrieval results could provide additional insight into model limitations. Targeted sampling and training techniques may help differentiate challenging inter-class pairs.
### _Results for 3D Datasets_
We report the numerical results as mAP for the 3D datasets in Table IV. The proposed models show strong performance on both 3D datasets in terms of video clip retrieval accuracy, both by clip and by class.
According to Table IV, the performance of the proposed models varies across the two datasets and the different levels of aggregation. For example, on the S3DIS dataset, M1-3D and M3-3D perform similarly well in class-level mAP, with 61.4% and 61.1%, respectively, while M3-3D performs best in clip-level mAP with 78.1%. On the Synthia-SF dataset, M1-3D achieves the best class-level mAP (55.6%), while M3-3D achieves the best clip-level mAP (83.2%).
The proposed models do not consistently outperform the 3D autoencoder, although M3-3D is always better than or on par with the baseline. In addition, the table shows that the performance of the models differs when evaluated at the class level versus the clip level. Figure 10 shows the training loss when we train the model with the triplet loss on the Synthia dataset. Based on these results, we conclude that the models have great potential to improve video retrieval on 3D datasets and that it is important to consider the level of aggregation when evaluating performance.
### _Ablation study: Effects of pretraining and challenging sample retraining_
We investigate the impact of two key training techniques on the performance of the proposed deep-learning architecture.
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Model & \multicolumn{2}{c}{S3DIS} & \multicolumn{2}{c}{Synthia-SF} \\
 & by class & by clip & by class & by clip \\
\hline
3D Autoencoder & 58.9\% & 77.9\% & 54.5\% & 82.5\% \\
M1-3D & 61.4\% & 75.4\% & **55.6\%** & 82.1\% \\
M2-3D & 62.3\% & 73.3\% & 54.8\% & 81.9\% \\
M3-3D & 61.1\% & **78.1\%** & 54.5\% & **83.2\%** \\
\hline \hline
\end{tabular}
\end{table} TABLE IV: mAP results for 3D datasets
\begin{table}
\begin{tabular}{l c c c c}
\hline \hline
Model & \multicolumn{2}{c}{CCWebVideo} & \multicolumn{2}{c}{YouTube-8M-sub} \\
 & by class & by clip & by class & by clip \\
\hline
SIFT & 58.8\% & 64.7\% & 27.9\% & 38.9\% \\
NDVR-DML & 84.4\% & 91.7\% & **64.8\%** & 75.2\% \\
M1 & 79.6\% & 85.3\% & 62.3\% & 71.8\% \\
M2 & 81.7\% & 90.4\% & 65.4\% & 74.9\% \\
M3 & **85.2\%** & **92.1\%** & **64.8\%** & **76.8\%** \\
\hline \hline
\end{tabular}
\end{table} TABLE III: mAP results for 2D datasets
Fig. 8: Typical clip from CCWebVideo Music class.
Fig. 7: Clip of the Comedy class from CCWebVideo that is incorrectly matched to the Music class.
Fig. 9: Example clip of Animation class from CCWebVideo.
Specifically, we study the effects of pretraining on a large, diverse dataset and of challenging-sample retraining on the accuracy of similar video retrieval. Firstly, we found that pretraining on a diverse dataset such as YouTube-8M-sub is crucial for improving the model's generalization ability and for learning effectively from the given training set despite the complexity and intricacy of the video data. This required a significant investment of computational resources, with an approximately 24-hour pretraining period using 4 GPUs.
To validate the effectiveness of these techniques, we present an ablation study in Table V, where we evaluate different training variations of the proposed model M3 on the CC_WEB_VIDEO dataset. The results show that the overall performance of M3 improves significantly when incorporating pretraining and training with challenging samples. Specifically, the model trained with pretraining and challenging samples in the second round achieved the best performance, with 85.2% mAP by class and 92.1% mAP by clip. This suggests that pretraining and focusing on difficult samples during training can greatly enhance the model's performance for similar video retrieval.
Secondly, for the Synthia 3D dataset, Fig. 11 shows an example query not improved by Bi-DTW, which is incorrectly matched to the clip of a different class shown in Fig. 12. Although many Synthia classes depend on filming time or weather, this failure stems from the visual similarity of the street scenes. The Bi-DTW method may incorrectly relate clips that appear to drive in reverse on the same road despite belonging to different classes: visually analogous roads filmed in opposing directions can be erroneously aligned by Bi-DTW. Further inspection of geometrically similar backgrounds causing inter-class confusion is needed. Additional constraints or adjustments to Bi-DTW's matching function may help mitigate false alignments in symmetric or reversed scenarios, and targeted data augmentation and sampling techniques could also help differentiate such challenging cases.
### _Ablation study: Effectiveness of triplet loss_
In this ablation study, shown in Table VI, we evaluated the performance of different variations of the proposed model, M3, on the CC_WEB_VIDEO dataset. The results indicate that the incorporation of the triplet loss significantly improves the model's performance, achieving 54.1% mAP by class and 65.7% mAP by clip. Moreover, when pretraining is added, the performance improves further, to 84.5% mAP by class and 89.2% mAP by clip. These results suggest that the triplet loss is crucial for the similar video retrieval task, and that pretraining enhances the model's generalization ability.
### _Ablation study: Effectiveness of bi-directional DTW_
In this ablation study, we evaluated the performance of the proposed model, M3, with two different dynamic time warping (DTW) methods: vanilla DTW and bi-directional DTW (Bi-DTW). The results show that the overall performance of M3 improves significantly with the Bi-DTW method. Specifically, the model using Bi-DTW achieves 84.5% mAP by class and 89.2% mAP by clip, whereas the model using vanilla DTW achieves only 72.1% mAP by class and 85.4% mAP by clip. This suggests that the Bi-DTW method markedly enhances the model's performance for similar video retrieval.
Reflecting on the findings of our ablation studies, it becomes evident that the choice of methods hinges upon the specific scenario. For tasks necessitating precision and generalization, particularly in complex and intricate video data, it is recommended to incorporate pretraining and challenging sample retraining. These techniques significantly boost performance
\begin{table}
\begin{tabular}{l c c}
\hline \hline
Model & \multicolumn{2}{c}{CC\_WEB\_VIDEO} \\
 & by class & by clip \\
\hline
M3 & 54.1\% & 65.7\% \\
M3 + 2nd pass & 65.5\% & 71.3\% \\
M3 + challenging 2nd pass & 69.3\% & 78.5\% \\
M3 + pre-train & 84.5\% & 89.2\% \\
M3 + pre-train and challenging 2nd pass & **85.2\%** & **92.1\%** \\
\hline \hline
\end{tabular}
\end{table} TABLE V: Effectiveness of pretraining and challenging sample retraining on the CC_WEB_VIDEO dataset.
Fig. 11: An example clip from Synthia.
\begin{table}
\begin{tabular}{l c c}
\hline \hline
Model & \multicolumn{2}{c}{CC\_WEB\_VIDEO} \\
 & by class & by clip \\
\hline
M3 autoencoder & 17.6\% & 32.2\% \\
M3 autoencoder + triplet loss & 54.1\% & 65.7\% \\
M3 autoencoder + pre-train & 29.6\% & 35.1\% \\
M3 autoencoder + triplet loss + pre-train & **84.5\%** & **89.2\%** \\
\hline \hline
\end{tabular}
\end{table} TABLE VI: Effectiveness of the triplet loss.
Fig. 12: A clip of the same location under different weather from Synthia.
Fig. 10: Training loss curve for the proposed models.
but require substantial computational resources, indicating their appropriateness in scenarios where such resources are available and accuracy is paramount. Meanwhile, the use of triplet loss has been proven to be effective across tasks, delivering substantial performance improvements. However, for tasks where even more enhanced results are sought, it is beneficial to couple the triplet loss technique with pretraining. When dealing with dynamic time warping, Bi-DTW stands out as a superior choice over the vanilla DTW, especially when striving for accuracy in similar video retrieval, making it the go-to method in such contexts.
## VII Conclusion
In this study, we introduced a bi-directional dynamic time-warping technique for video information retrieval and innovative strategies for 3D video queries, including a novel 3D network architecture and a method to handle 3D video data with added depth. Our models, especially M3, demonstrated superior performance on both 2D and 3D datasets, outperforming both non-deep-learning and deep-learning baselines. When evaluating performance, it is essential to consider the aggregation level and the nature of the dataset. Moreover, incorporating pretraining, a challenging second round of retraining, and our proposed time-warping technique is crucial for optimal performance. Our methods not only offer advanced solutions for video retrieval but also pave the way for improved 3D video data processing, presenting a promising direction for future research.
|
2308.13028 | Training Neural Networks with Universal Adiabatic Quantum Computing | The training of neural networks (NNs) is a computationally intensive task
requiring significant time and resources. This paper presents a novel approach
to NN training using Adiabatic Quantum Computing (AQC), a paradigm that
leverages the principles of adiabatic evolution to solve optimisation problems.
We propose a universal AQC method that can be implemented on gate quantum
computers, allowing for a broad range of Hamiltonians and thus enabling the
training of expressive neural networks. We apply this approach to various
neural networks with continuous, discrete, and binary weights. Our results
indicate that AQC can very efficiently find the global minimum of the loss
function, offering a promising alternative to classical training methods. | Steve Abel, Juan Carlos Criado, Michael Spannowsky | 2023-08-24T18:51:50Z | http://arxiv.org/abs/2308.13028v1 | # Training Neural Networks with Universal Adiabatic Quantum Computing
###### Abstract
The training of neural networks (NNs) is a computationally intensive task requiring significant time and resources. This paper presents a novel approach to NN training using Adiabatic Quantum Computing (AQC), a paradigm that leverages the principles of adiabatic evolution to solve optimization problems. We propose a universal AQC method that can be implemented on gate quantum computers, allowing for a broad range of Hamiltonians and thus enabling the training of expressive neural networks. We apply this approach to various neural networks with continuous, discrete, and binary weights. Our results indicate that AQC can very efficiently find the global minimum of the loss function, offering a promising alternative to classical training methods.
IPPP/23/46, CERN-TH-2023-162
## 1 Introduction
Adiabatic quantum computing (AQC) is a paradigm of quantum computation that harnesses the principle of adiabatic evolution to solve computational problems [1, 2, 3]. In this approach, the quantum system is initialized in the ground state of a simple Hamiltonian and is then evolved adiabatically towards a final Hamiltonian that encodes the solution to the problem, ensuring that it remains in its ground state throughout. The adiabatic theorem guarantees that if the evolution is sufficiently slow, the system will remain in the ground state during the process. The computational prowess of AQC is equivalent to that of the conventional quantum computation model, implying that the two models are polynomially equivalent [4]; in other words, AQC is considered a universal quantum computing paradigm. Moreover, AQC has been realized experimentally in various systems, including solid-state single-spin systems under ambient conditions [5, 6]. The purpose of this paper is to demonstrate how AQC can be used to greatly enhance the training of neural networks.
Generally, NNs, like all self-adaptive optimisation algorithms, consist of three core parts:
1. A system that encodes a complex function,
2. An output layer's loss function that dictates the NN's task,
3. A training method to minimize the loss function.
It is the last of these three, namely the training of NNs, that typically demands the greatest time, effort and resources, and that poses the greatest challenge to their development and deployment.
In previous exploratory work [7, 8, 9] we showed that a NN can be trained by encoding it in a transverse Ising model on a quantum annealer [10]. That work demonstrated that such an approach, utilising quantum tunnelling, can train a NN optimally, reliably and quickly. Furthermore, the trained parameters can be extracted and used in a classical network for deployment. However, the restriction to a transverse Ising model as the Hamiltonian for quantum annealing greatly limits the expressivity of the NN.
Thus, to address these obstacles, this paper proposes a universal AQC approach that can be
used to train a neural network and extract the optimally trained network weights. The much wider variety of Hamiltonians available within the universal AQC paradigm allows us to include correlations and non-linearities in our models, enabling adiabatic quantum training to be applied to larger and more expressive networks.
We will present two techniques for performing the AQC, the "matrix method" in which the system is expressed in terms of truncated Hilbert space components, and the "Pauli-spin method" in which it is expressed directly with Pauli-spin matrices. We apply these methods to simulated quantum-gate computers, showing the applicability of AQC training on near-term devices. Furthermore, we apply the "Pauli-spin method" to the training of both continuous neural networks, and networks with discrete and binary weights. The latter usually rely on non-gradient-based optimisation algorithms and are classically very difficult to treat.
In the burgeoning domain of computational intelligence, neural networks are heralded as the cornerstone of machine learning, particularly excelling in classification and regression tasks. Their influence permeates both everyday applications and advanced scientific research. Hence, being able to enhance their capabilities and streamline their training through the innovative lens of quantum computing is of considerable significance.
## 2 Challenges in Training Neural Networks
It is useful to begin our discussion with a brief appraisal of the difficulties one may encounter when training a neural network. In the training phase, the goal is to reach the global minimum of the so-called cost or loss function. However, the optimization landscape of neural networks often contains multiple local minima, and this problem is only exacerbated if the network is a small one. Broadly speaking, in a space of high dimensionality most critical points are likely to be saddle points, so a gradient descent method is usually effective. Conversely, on small neural networks, finding the global minimum can be much more difficult. Indeed, several other issues can arise during training, even with a correct algorithm implementation. Here, we briefly list these challenges and the typical approaches employed to deal with them in classical training:
* Slow Progress, Fluctuations or Instability: tackled by tuning the learning rate to either speed up or slow down convergence.
* Badly Conditioned Curvature: "ravines" in the landscape imply that different directions need different learning rates to be optimal. The Adam algorithm can address this by adapting learning rates individually for each parameter.
* Local Optima: addressed by using random restarting points to explore every basin of attraction.
* Weight degeneracy: addressed by a random initialization of the weights and biases, which breaks the symmetry.
* Dead and Saturated Units: activations at the ends of their range cause plateaus in the loss-function landscape. Initializing biases with positive values can help avoid the problem, although it can also signal a redundancy in the network that one would like to reduce by pruning out redundant weights.
It is worth emphasising that the paradigm of neural networks uses a set of _continuous_ weights and biases on which a gradient descent can be performed. However, arguably this causes great redundancy because, in many situations, a reasonable solution to the optimisation of the network is, in principle, achievable with weights that are discrete or even binary (i.e. just "on" or "off") if only we can find the correct discrete values.
To appreciate the redundancy that is inherent in continuous weights and biases consider the example of a classification task when there are only two features. In principle, the classification curve can be written as the Taylor expansion of the level-curve of some function \(z(x_{1},x_{2})\) of the features \(x_{1}\) and \(x_{2}\). However, if this classification curve happens to be well approximated by a quadratic function for example, then it would require only six continuous coefficients. In contrast, the neural network would typically have many more continuous weights and biases. Conversely, if we accept that these six continuous
Taylor coefficients are well approximated if we know them to four binary places (i.e. to one part in 16) then only 24 binary weights taking values of 0 or 1 should _in principle_ be able to describe the same classification curve. Because of this redundancy, there is indeed quite some interest in training discretely weighted networks, and networks where both the weights and activation functions are binary [11, 12, 13, 14, 15].
However, we can immediately appreciate that such a system of discrete weights is classically problematic precisely because it runs into both the "weight degeneracy" and the "dead and saturated units" problems mentioned in our list of challenges. Moreover, the "local optima" problem is generic. Indeed, it is for these reasons that the classical training of discretely weighted and binary systems requires special treatment [12, 13].
## 3 Adiabatic Quantum Computing on Gate Quantum Computers
Let us now therefore turn to AQC, beginning with a discussion of its general implementation on gate quantum computers.
Although it is interesting for the reasons outlined above to allow our eventual systems of interest to be relatively discrete in nature, it will be useful in establishing the basic principles to first consider functions of continuous variables. Thus in this section, we shall focus on the specific task of finding all the global minima of a function \(V(w)\) of one variable in the interval \(w\in[0,1]\). (To remind ourselves that ultimately we will be concerned with weights and biases, we call the variable \(w\).) AQC is equivalent in this context to solving for the ground state in one-dimensional quantum mechanics, with \(w\) corresponding to the single space dimension. Studying such familiar cases will allow us to confirm that our system behaves as expected.
The first example we will look at is the following cosine potential which has two degenerate minima in the interval \(w\in[0,1]\):
\[V(w)\ =\ 1+\cos(4\pi w)\, \tag{1}\]
which appears as the dashed line in Fig. 1.
There are various ways in which one might wish to encode the problem of minimising this potential. It is necessary to ensure that the chosen method is both effective and yields an advantage (in the sense that the difficulty does not scale exponentially with the problem size). Here we shall consider two encoding methods, the "matrix method" and the "Pauli-spin method".
### The Matrix Method
The matrix method is the most direct: it entails evolving the wavefunction from some starting state using the Schrödinger Hamiltonian,
\[\hat{H}\ =\ \frac{\hat{p}^{2}}{2m}+V(\hat{w})\ \, \tag{2}\]
in its matrix form in a truncated Hilbert space. We proceed as follows. We adopt periodic boundary conditions and define a basis of eigenstates of the kinetic piece of the Hamiltonian, working in the \(w\)-basis:
\[\langle w|n\rangle\ =\ e^{2\pi inw}. \tag{3}\]
The Hamiltonian matrix is then given by
\[H_{n\ell} = \int_{0}^{1}\langle n|w\rangle\langle w|\hat{H}|\ell\rangle dw \tag{4}\] \[= \frac{4\pi^{2}n^{2}}{2m}+\widetilde{V}(n-\ell)\ \,\]
where \(\widetilde{V}(n)=\int_{0}^{1}V(w)e^{-2\pi inw}dw\) is the Fourier transform of \(V(w)\), which we can easily calculate for any \(n,\ell\). Thus we can in principle simply take the resulting matrix \(H_{n\ell}\), and use it to evolve the wavefunction \(\psi(w,t)\) from an initial state \(\psi(w,0)=\sum_{n}c_{n}(0)\langle w|n\rangle\), using the Trotterized Schrödinger evolution,
\[c_{n}(t) = e^{-iH_{n\ell}t}c_{\ell}(0) \tag{5}\] \[\approx \left(e^{-iH_{n\ell}\delta t}\right)^{t/\delta t}c_{\ell}(0)\.\]
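For concreteness, the construction of \(H_{n\ell}\) can be sketched directly in NumPy for the cosine potential of Eq. (1), whose Fourier transform is non-zero only for \(n=0\) (value 1) and \(n=\pm 2\) (value 1/2); exact diagonalisation then provides a cross-check on the adiabatic endpoint.

```python
import numpy as np

m, nmax = 10, 15                    # mass and truncation, as in Fig. 1
ns = np.arange(-nmax, nmax + 1)
H = np.zeros((len(ns), len(ns)))
for a, n in enumerate(ns):
    for b, l in enumerate(ns):
        if n == l:
            # Kinetic term plus V~(0) = 1 on the diagonal, Eq. (4).
            H[a, b] = 4 * np.pi**2 * n**2 / (2 * m) + 1.0
        elif abs(n - l) == 2:
            H[a, b] = 0.5           # V~(+/-2) = 1/2 off the diagonal

# Cross-check on the adiabatic endpoint: the lowest eigenvector of H is
# the target ground state in the momentum basis.
energies, states = np.linalg.eigh(H)
ground_state = states[:, 0]
```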
Up to this point, everything is simple quantum mechanics. However, we wish to encode the wavefunction and its evolution in terms of qubits. This can be done by truncating the Hilbert space to size \(2^{N}\) with \(n\in[-2^{N-1},2^{N-1}-1]\). This allows us to identify each index \(n\) with one of the \(2^{N}\) possible eigenvalues of \(N\) tensored qubits. The simplest choice for this identification is to treat \(n\) like the computational-basis index: that is we associate the binary expression for each \(n\) with the eigenvalues of the \(N\) tensored binary operators
\[T=\frac{1}{2}(1+Z)\, \tag{6}\]
where \(Z\) is the Pauli \(Z\)-spin matrix for each qubit. Thus for example \(|n=-2^{N-1}\rangle\equiv|000\ldots 000\rangle\), \(|n=0\rangle\equiv|000\ldots 010\rangle\), \(|n=3\rangle\equiv|110\ldots 010\rangle\) and so forth.
To perform the time evolution, our \(2^{N}\times 2^{N}\) Hamiltonian matrix must then be decomposed accordingly into sums of tensor products of Pauli-spin matrices acting on the \(N\) tensored qubits, and the evolution of the initial state Trotterized as above. To implement this step, here and throughout, we make extensive use of the qibo package, which allows fast evaluation of quantum circuits while taking full advantage of hardware accelerators [16, 17, 18, 19]. This package automates the decomposition step and implements the Trotterized time evolution induced by a symbolic Hamiltonian defined in terms of Pauli spins, which is rendered as a quantum gate circuit. Moreover, simulation is feasible up to of order 25 qubits.
As a warm-up exercise, it is interesting to consider an initial wavefunction localised in one of the minima and observe it tunnel to the other degenerate minimum. This is shown for the potential of Eq. (1) in the first panel in Fig. 1, where for the initial state, we choose a Gaussian localised in the left minimum. We perform the time evolution as a simulation using qibo's "StateEvolution" module, which, as we said, produces and evolves the circuit corresponding to the symbolic Hamiltonian. (Importantly qibo allows one to put the same Trotter evolution directly onto a real machine.)
The wavefunction indeed tunnels to the second minimum, as expected. However, in this initial example, we can also see why quantum tunnelling _per se_ is not always beneficial for locating global minima. There is no energetic dissipation in an idealised setting, so the initial wave function never stops moving unless it is already in an energy eigenstate. It would, for example, be very hard to determine the global minimum if the minima were only slightly non-degenerate. This can be contrasted with dissipative systems such as those utilised in quantum annealers in Refs. [20, 7, 21, 22].
Hence to determine the true global minimum, we can use AQC as envisaged in Refs. [23, 2, 3]. That is, we begin the system in the ground state of a trivial Hamiltonian \(\hat{H}_{0}\) and adiabatically evolve the system to the complicated Hamiltonian of interest, \(\hat{H}\). As a function of time, the total Hamiltonian \(\hat{H}_{A}\) for adiabatic evolution in the AQC paradigm takes the form
\[\hat{H}_{A}(t)\ =\ (1-s(t))\,\hat{H}_{0}+s(t)\,\hat{H}\, \tag{7}\]
where \(s(t)\) is the so-called schedule function with \(s(0)=0\) and \(s(t_{\text{final}})=1\). If the evolution is sufficiently adiabatic, the system will always remain in the ground state. The result is the desired ground state of the complicated Hamiltonian of interest. For the present example, we
Figure 1: Tunnelling versus adiabatic evolution in the cosine potential, \(V(w)=1+\cos(4\pi w)\), with a truncation at energy level \(\langle w|n\rangle=e^{2\pi inw}\) where \(n\in[-15,15]\). In the tunnelling example we take \(m=10\), while in the second adiabatic example we have taken a mass of \(m=100\) (equivalently \(V\) can be multiplied by \(100\)) in order to get well-localized peaks in the ground state. The evolution time must be increased accordingly. The schedule function is taken to be simply linear, \(s(t)=t\).
can take \(\hat{H}_{0}\) to be the purely kinetic Hamiltonian with \(V=0\), for which the \(n=0\) state, \(\langle w|\psi\rangle=\langle w|000\ldots 010\rangle=1\), is trivially the groundstate solution.
We performed the adiabatic evolution within qibo using models.AdiabaticEvolution with, for simplicity, a linear schedule function, \(s=t/t_{\rm final}\). The resulting evolution is shown in the second panel of Fig. 1. Notably, the final ground state is time-independent, as it should be, and correctly reflects the two degenerate minima. Thus, for locating the global minima, the mass (or, more generally, the ratio of kinetic to potential terms in \(\hat{H}\)) plays an important role: the higher the mass is relative to \(V(w)\), the sharper the peaks around the global minima. This is, of course, to be expected because, approximating the potential around each minimum, \(w_{\rm min}\), as a simple harmonic oscillator (SHO), the wavefunction is of the form
\[\rho_{0}(w)=|\psi_{0}|^{2}\ \approx\ \left(m\right)^{1/4}e^{-4\pi\sqrt{m}(w-w_{ \rm min})^{2}}\.\]
We show this dependence explicitly in Fig 2 which displays the expected \(m^{1/4}\) behaviour in the amplitude of the peaks. This feature will be important in later discussions.
It is instructive and useful for our later discussion to perform the same kind of comparison in a polynomial potential with a metastable minimum where the system can be trapped. A simple case is the following quartic potential:
\[V(w)\ =\ \lambda(18w^{4}-35w^{3}+22w^{2}-5w+0.372573)\;, \tag{8}\]
where we keep \(\lambda\) as an overall factor to scale the potential. The potential is shown as the grey dashed line in Fig. 4. To examine tunnelling, we begin the system in the approximate ground state of the metastable minimum at \(w_{+}=0.1848\). Expanding around this point, we find an approximate SHO potential with \(V(w)\approx\lambda(0.372573+2\pi(w-w_{+})^{2})\) (and hence SHO parameters \(m\Omega=2\sqrt{\lambda\pi m}\)). Thus to demonstrate tunnelling, we begin the system in the Gaussian groundstate,
\[\psi_{0}(w)\ =\ \left(4m/\pi\right)^{1/8}e^{-\sqrt{\lambda\pi m}(w-w_{+})^{2}}\.\]
The subsequent evolution is shown in the first panel of Fig. 4. Again we see that tunnelling does not help locate minima without some element of dissipation. Indeed, the wavefunction either oscillates wildly between the minima on longer timescales or remains relatively stuck: it is quite challenging to control the behaviour, which depends sensitively on the choice of both \(\lambda\) and \(m\). This can be contrasted with AQC, which correctly reproduces the ground state in the second panel and selects only the true global minimum, even when the two minima are almost degenerate. As an example of the latter, we show in Fig. 3 the evolution for the cosine potential (done using the "matrix method") with a tiny linear term \(\Delta V(w)=\epsilon w\), where \(\epsilon=0.02\), which causes a non-degeneracy in the two minima. Even though the two minima are imperceptibly non-degenerate, the adiabatic process ultimately finds the true global minimum. The behaviour is quite striking: the state is initially degenerate, and only towards the end of the process does it select the true minimum. As for the cosine potential, the global minimum can be more precisely located by increasing
Figure 3: AQC for the exact same system as in Fig 1 but with not quite degenerate minima (the energetic difference between the two minima being \(\Delta V_{\rm min}\approx 0.01\)). During the evolution the ground state ultimately selects the true minimum provided the process remains adiabatic.
Figure 2: The effect of mass on the ground state. Around each minimum, the groundstate approximates the Gaussian groundstate of the SHO with \(V(w)=8\pi^{2}w^{2}\), namely \(\rho_{0}(w)=|\psi_{0}|^{2}\approx(m)^{1/4}\,e^{-4\pi\sqrt{m}(w-w_{\rm min})^{2}}\) (normalized such that each of the two peaks contributes \(1/2\)).
the mass or increasing the parameter \(\lambda\), subject to the constraint that the Trotterization should remain a good approximation (i.e. \(|H|\delta t\ll 1\)). However in the present context the most important aspect of this example is that we can see that AQC completely avoids the "Badly Conditioned Curvature" problem mentioned in our list of challenges in Section 2.
We should, for completeness, attach a caveat to this picture: the oscillation back and forth that we can observe in the tunnelling solutions is partly due to the fact that the systems we consider in these illustrative examples are only one-dimensional and periodic. Quantum tunnelling in many physical systems of interest (for example, phase transitions in cosmology) would be higher dimensional and take place in non-compact volumes. In such situations the tunnelling process is one-way because there is a large degenerate volume of global minima: excess energy after tunnelling is dissipated in dynamics, for example, in accelerating bubble walls.
### The Pauli-Spin Method
Despite the straightforward nature of the matrix method for adiabatic evolution, it is not the most convenient approach for NN training, because the matrix \(H_{\ell n}\) grows exponentially with the number of variables (i.e. weights) in the system: a \(2^{N}\times 2^{N}\) matrix must be stored in general. Instead, it is typically more efficient (we will make a more detailed comparison of the relative efficiencies later in Subsection 4.4) to use the "Pauli-spin method": in this method, the variables and hence the Hamiltonian are encoded in a binary fashion in the eigenvalues of Pauli spins.
That is, we assign bin values for the variables themselves instead of the wave function by defining the binary \(T\) operators as in Eq. (6). For example, in the single variable case, we encode \(w\) discretely as a fractional binary composed of \(N\) of the binary spins, \(T_{\ell}\). Hence the operator corresponding to \(w\) is
\[\hat{w}\ =\ 2^{-N}\sum_{\ell=0}^{N-1}2^{\ell}T_{\ell}. \tag{9}\]
The above encoding yields binned values for possible measurements of the variable, \(\langle\hat{w}\rangle\in\{w_{r}\}=\{0,\frac{1}{2^{N}},\frac{2}{2^{N}},\ldots,1-\frac{1}{2^{N}}\}\). Thus any particular state \(|\psi\rangle\) is defined as
\[|\psi\rangle\ =\ \sum_{r}|w_{r}\rangle\langle w_{r}|\psi\rangle\, \tag{10}\]
with \(r=0,\ldots,2^{N}-1\) labelling the possible bin values \(w_{r}\), and with \(\rho(w_{r})=|\langle w_{r}|\psi\rangle|^{2}\) yielding the probability for measuring the state in that particular bin. Essentially, this replaces the momentum truncation with a direct discretisation of the variable.
This is the general structure for encoding variables. How should we now go about constructing the adiabatic evolution? For the target Hamiltonian \(\hat{H}\), the main aspect to note is that in this discretised variable formulation of the problem, the momentum and hence the kinetic \(\hat{p}^{2}/2m\) terms would be hard to encode (such terms would have to be encoded by the finite difference which would greatly complicate the Hamiltonian). However, we also note that the kinetic terms in the Hamiltonian did not serve much purpose in locating the global minimum of \(V(w)\) anyway: all they do is provide spread in the profile of the eventual ground state. Indeed from Fig. 2, it is clear that if we were to take the limit \(m\to\infty\) keeping \(V(w)\) unchanged, then the final wavefunction would be a spike at the global minimum, which would for opti
Figure 4: Tunnelling versus adiabatically evolving the ground state in a quartic potential. Here for tunnelling the initial wavefunction is chosen to be the groundstate of the approximate SHO potential around the false minimum (with \(\lambda=4,\ m=100\)). For the adiabatic evolution we take \(\lambda=8,m=200\) to ensure a localized peak at the origin. We also show (overlaid dotted line) the groundstate of the SHO approximation obtained by expanding around the global minimum at \(w=0.8\).
misation be virtually the ideal outcome. Thus, to determine the global minimum of a potential \(V(w)\), we may delete the kinetic terms and set
\[\hat{H}\ =\ V(\hat{w})\, \tag{11}\]
where now the operator \(\hat{w}\) is to be replaced by its encoding in terms of \(Z_{\ell}\) spins given in Eq. (9). Note that, unlike the matrix approach, we are now constrained to consider polynomial potentials. Moreover, a modest amount of reduction can be performed on the Hamiltonian. For example, upon expanding the polynomial \(\hat{V}\), we may find powers of Pauli matrices that can be reduced using \(T_{\ell}T_{\ell}=T_{\ell}\). (Such reduction is more significant when fewer qubits are used to define each \(\hat{w}\).)
To play the role of the trivial Hamiltonian in the adiabatic evolution, \(\hat{H}_{0}\), we can use the commonly adopted transverse AQC choice,
\[\hat{H}_{0}\ =\ \frac{1}{2}\sum_{\ell=0}^{N-1}\left(\mathbb{1}-X_{\ell}\right)\, \tag{12}\]
where \(X_{\ell}\) is the \(X\) Pauli-spin matrix for the \(\ell\)'th qubit. It is easy to see that \(\frac{1}{2^{N/2}}\prod_{\ell}(|0\rangle_{\ell}+|1\rangle_{\ell})\) is the ground state of this Hamiltonian (because \(X(|0\rangle+|1\rangle)=(|0\rangle+|1\rangle)\)). Expanding, we see that this is the state with equal probability in each \(w\) bin, which has \(\langle\hat{H}_{0}\rangle=0\).
Finally, we put these two Hamiltonian components, namely \(\hat{H}_{0}\) of Eq. (12) and \(\hat{H}\) of Eq. (11), into the adiabatic evolution equation in Eq. (7), and evolve the system from the initial \(\hat{H}_{0}\) ground state using the Trotterized circuit generated by qibo. The result for the quartic potential is shown in Fig. 6. As expected, it is highly peaked around the global minimum.
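A minimal qibo sketch of this procedure for the quartic potential is given below; the API names follow recent qibo releases and may differ slightly between versions.

```python
from qibo import hamiltonians, models
from qibo.symbols import X, Z

N = 7                                                     # qubits encoding w
w = sum(2**l * (1 + Z(l)) / 2 for l in range(N)) / 2**N   # Eq. (9)
lam = 8
V = lam * (18 * w**4 - 35 * w**3 + 22 * w**2 - 5 * w + 0.372573)  # Eq. (8)
# Depending on the qibo/sympy version, powers such as Z(l)**2 may need
# to be reduced to the identity by hand before building the Hamiltonian.

h1 = hamiltonians.SymbolicHamiltonian(V.expand())         # target, Eq. (11)
h0 = hamiltonians.SymbolicHamiltonian(
    sum((1 - X(l)) / 2 for l in range(N)))                # trivial, Eq. (12)

evolution = models.AdiabaticEvolution(h0, h1, lambda t: t, dt=0.05)
final_state = evolution(final_time=10)                    # linear schedule
```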
## 4 Neural Network training
### General method
In this section, we will demonstrate that the AQC optimisation algorithm outlined in the previous section can be used to train machine-learning models, where we now replace the single \(\hat{w}\) operator with a large number of weights and biases. We focus on the supervised learning framework, which aims to find a function \(Y(x)\) that approximately reproduces a given set of outputs \(y_{a}\) from a given set of inputs \(x_{a}\). A classification problem is one in which the outputs, called labels in that context, take values in a small discrete set. Otherwise, the problem is one of general non-linear regression.
A machine learning model is a family of functions from which the optimal \(Y(x)\) for the available data has to be selected. The process of finding this optimal function is known as training, and it is typically done by minimising a _loss function_\(\mathcal{L}\), which measures the deviation of the predictions \(Y(x_{a})\) from the labels \(y_{a}\). For example, one may define it as the mean squared error
\[\mathcal{L}=\frac{1}{N}\sum_{a=1}^{N}\left(Y(x_{a})-y_{a}\right)^{2}. \tag{13}\]
Some of the most versatile models in this setting are neural networks, which are constructed as the composition of layers \(L_{k}(z)\), with each layer given by an affine transformation followed by the element-wise ap
Figure 5: Energies during the adiabatic evolution in Fig. 4 showing the isolated ground-state energy.
Figure 6: Adiabatically evolving to find the global minimum of the quartic potential using a Pauli-spin encoding of \(w\). Here we show the evolving smoothed histogram of the groundstate \(\rho(w)\equiv|\psi(w)|^{2}\) with \(w\) encoded in \(N=7\) qubits.
plication of non-linear functions \(f_{k}\):
\[Y(x) =L_{n}(\cdots L_{1}(x)) \tag{14}\] \[L_{k}(z) =f_{k}\left(\sum_{j}w_{ij}^{(k)}z_{j}+b_{i}^{(k)}\right). \tag{15}\]
The parameters \(w\) and \(b\) are known as the weights and biases, and the functions \(f\) are called the activation functions.
Various classical algorithms have been developed to optimise the loss function \(\mathcal{L}\). Most of them are local optimisation methods, in which the weights and biases are updated iteratively in small increments. A problem these algorithms can only partially address is that they can become trapped in local minima of a non-convex loss function. Quantum algorithms capable of directly tunnelling or adiabatically evolving towards the global minimum would thus work qualitatively differently from classical gradient-based optimisation methods and avoid this failure mode.
In Section 3, we outlined two general methods for minimising arbitrary functions, the "matrix method" and the "Pauli-spin method". We shall now apply the Pauli-spin method to minimise the loss as a function of the free parameters of the neural network, which are the weights and biases.
To begin with, let us make some general remarks on the advantages and disadvantages of the two methods in the neural-network context. As we saw, the Pauli-spin method enables an efficient representation of the Hamiltonian in terms of Pauli matrices, at the price of approximating the function through polynomials. In the context of neural networks, this implies that the activation function must be approximated by a polynomial, such that the loss function becomes a polynomial in spin matrices whose degree is set by the number of layers and the degree of the activation function. One then only needs to store the non-vanishing coefficients of this polynomial, which can be a significant advantage over the matrix encoding. The effects of the polynomial approximation can be made arbitrarily small, because any well-behaved activation function can be approximated arbitrarily well by a polynomial on a bounded domain, and the range of values of the inputs to each activation is bounded and known in advance, given the range of values of the inputs \(x\) and the binary-encoded parameters \(w\) and \(b\). The downside of the Pauli-spin method is that the nested polynomial approximations of the activation functions result in a large gate depth. We will make more quantitative comparisons of the methods later in Subsection 4.4.
### Toy example
For concreteness, we will focus on a toy example, although the method can be used to train any other neural network. Our neural network has two layers, the first mapping 2D points to 2D points and the second mapping 2D points to numbers. We take activation functions to be \(f_{1}(x)=x^{2}\) and \(f_{2}(x)=x\), and the biases \(b_{i}^{(1)}=0\) and \(b^{(2)}=-1\). The output is then
Figure 7: Left: circle dataset and the corresponding decision boundary generated by the most probable final state after adiabatic evolution and measurement. Right: summary of some of the potential outcomes of the final measurement, including the corresponding \(Y(x)=\) constant contours, the probability of measuring each of them, their energy, and the degeneracy (the number of equivalent states that generate the same \(Y(x)\) function).
given by
\[Y(x)=\begin{pmatrix}w_{1}^{(2)}&w_{2}^{(2)}\end{pmatrix}\left[\begin{pmatrix}w_{11}^{(1)}&w_{12}^{(1)}\\ w_{21}^{(1)}&w_{22}^{(1)}\end{pmatrix}\begin{pmatrix}x_{1}\\ x_{2}\end{pmatrix}\right]^{2}-1, \tag{16}\]
where the square is to be understood as the element-wise square function applied to a 2-vector. We use the Pauli-spin method, with one qubit per parameter only. This leads to a system with a total of 6 qubits, which allows us to simulate it on a small classical computer using qibo as described in the previous section.
We will use this network to perform a binary classification task, predicting a point to be signal if \(Y(x)\geq 0\) and background otherwise. We therefore call the \(Y(x)=0\) contour the decision boundary. The simple structure we have chosen allows for several shapes of the decision boundary, from which the optimal one is to be selected by the adiabatic computation.
Each of the two datasets we consider is a set of 1000 randomly chosen 2D points, uniformly distributed in the square \([-1,1]\times[-1,1]\). In the first one, which we call the circle dataset, the points are labelled \(y=1\) (signal) if \(x^{2}+y^{2}>1/2\) and \(y=-1\) (background) otherwise. The optimal decision boundary for the circle dataset is thus the circle \(x^{2}+y^{2}=1/2\), which our toy neural network can achieve. In the second dataset, which we call the band dataset, the points are labelled \(y=2\) (signal) with probability given by \(\min[1,(x+y)^{2}]\) and \(y=-2\) otherwise. We make this choice so that the data is not perfectly separable, but our neural network achieves the lowest value of the loss function when it generates a decision boundary of \(2(x+y)^{2}=1\).
To train the neural network, we generate the target Hamiltonian \(\hat{H}\) by replacing each weight in the loss function \(\mathcal{L}\) with a \(Z\) Pauli matrix. The initial trivial Hamiltonian \(\hat{H}_{0}\) is given by Eq. (12). We use qibo to simulate the adiabatic time evolution in 10 steps from \(t=0\) to \(t=10\), with a linear schedule \(s(t)=t/10\). The final state consists of a superposition of different computational-basis states. In a real-world device, one would measure all of the \(Z_{\ell}\) to obtain the classical values of the weights in the network. Our simulation shows that the correct contour for the circle dataset, displayed on the left in Fig. 7, is the most likely outcome of this measurement, with a 93% probability. Similarly, the most likely outcome for the band dataset, with 89% probability, is the optimal contour, shown on the left in Fig. 8. In practice, performing a low number of AQC runs and selecting the final state with the least energy is a viable strategy.
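A sketch of this construction is given below; the dataset is a random stand-in for the circle data rather than the paper's exact samples.

```python
import numpy as np
from qibo import hamiltonians, models
from qibo.symbols import X, Z

# One qubit per weight, each weight replaced by a Z Pauli matrix.
rng = np.random.default_rng(0)
data = [((float(x1), float(x2)), 1.0 if x1**2 + x2**2 > 0.5 else -1.0)
        for x1, x2 in rng.uniform(-1, 1, size=(50, 2))]

W1 = [[Z(0), Z(1)], [Z(2), Z(3)]]    # first-layer weights of Eq. (16)
W2 = [Z(4), Z(5)]                    # second-layer weights

def Y(x):
    hidden = [(W1[i][0] * x[0] + W1[i][1] * x[1]) ** 2 for i in range(2)]
    return W2[0] * hidden[0] + W2[1] * hidden[1] - 1

loss = sum((Y(x) - y) ** 2 for x, y in data) / len(data)   # Eq. (13)
h1 = hamiltonians.SymbolicHamiltonian(loss.expand())
h0 = hamiltonians.SymbolicHamiltonian(sum((1 - X(q)) / 2 for q in range(6)))
evolution = models.AdiabaticEvolution(h0, h1, lambda t: t, dt=1.0)
final_state = evolution(final_time=10)   # 10 steps, linear schedule
```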
It should be noted that, like most neural networks, the one we are considering has multiple symmetries, because different possible values of the weights give rise to the same function \(Y(x)\). An example of such a symmetry consists of flipping both \(w_{11}^{(1)}\rightarrow-w_{11}^{(1)}\) and \(w_{12}^{(1)}\rightarrow-w_{12}^{(1)}\). Two states related by these symmetries must have the same energy under the target Hamiltonian \(\hat{H}\). On the right side of Figs. 7 and 8, we have collected the total probability of measuring any of the states leading to each of the most likely \(Y(x)\) functions.
Figure 8: left: band dataset and the corresponding decision boundary generated by the most probable final state after adiabatic evolution and measurement. Right: summary of some of the potential outcomes of the final measurement, including the corresponding \(Y(x)=\) constant contours, the probability of measuring each of them, their energy, and the degeneracy (the number of equivalent states that generate the same \(Y(x)\) function).
One of the consequences of these symmetries is that the minima of the loss function are degenerate, and therefore we are effectively in the degenerate-minima situation of Section 3. In the classical setting, the random initial seed of the optimization algorithm would select one of the degenerate minima. However, guided by the discussion in Section 3, it is clear that quantum training leads to a different situation, in which the final quantum state is a superposition of the degenerate minima, all of which have equal probability. It is thus the final measurement that plays the role of randomly selecting one of the minima. Moreover, it is clear that, in general, one cannot take many measurements and use the expectation values of the weights as the classical values, because this would incorrectly average over these degenerate possibilities.
### Binary neural networks
The limited number of qubits available in current quantum computers makes it more interesting to consider them for training smaller machine-learning models. A valuable class of such models, with many real-world applications, are binary neural networks [24, 25]. These are neural networks in which the weights can only take the values \(0\) or \(1\), the biases are set to \(0\), and the activation functions are given by
\[f\left(\sum_{j=1}^{n}w_{ij}x_{j}\right) = \Theta\left(\sum_{j=1}^{n}w_{ij}x_{j}-\frac{n}{2}\right), \tag{17}\]
where the \(w_{ij}\) and the \(x_{j}\) are the weights and inputs of the corresponding layer, and \(\Theta\) is the Heaviside step function. The \(i\)th output of each layer is \(1\) if at least half of the terms \(w_{ij}x_{j}\) are \(1\), and zero otherwise.
Binary neural networks can be directly encoded in quantum computers without the loss of expressiveness that we encountered with the polynomial approximation of activation functions and the discretisation of continuous weights. The model trained in a quantum device can be _exactly the same_ as the one implemented in a classical computer. This can be seen by noting that the binary \(0/1\) weights can be encoded using the binary \(T_{\ell}\) operators constructed from Pauli \(Z_{\ell}\) matrices via Eq. (6). The activation functions can then be viewed as polynomials in the \(T_{\ell}\)'s, through the identity
\[f\left(T_{1},\ldots,T_{n}\right)=\sum_{m=0}^{\lfloor n/2\rfloor}\sum_{i_{1}<\ldots<i_{m}}\prod_{j\neq i_{1},\ldots,i_{m}}T_{j}\prod_{k=i_{1},\ldots,i_{m}}\bar{T}_{k}, \tag{18}\]
where \(\bar{T}=1-T\). For example
\[f(T_{1},T_{2},T_{3}) = T_{1}T_{2}T_{3}+T_{1}T_{2}\bar{T}_{3} \tag{19}\] \[\quad+\,T_{1}\bar{T}_{2}T_{3}+\bar{T}_{1}T_{2}T_{3}\.\]
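The identity can be checked numerically against Eq. (17); the short sketch below, with names of our own choosing, verifies it on all binary inputs for \(n=3\).

```python
# Verify that the polynomial of Eq. (18) reproduces the step activation
# of Eq. (17) on every binary input, here for n = 3.
from itertools import combinations, product

def step_activation(ts):
    # Eq. (17): output 1 iff at least half of the inputs are 1.
    return 1 if sum(ts) >= len(ts) / 2 else 0

def polynomial_activation(ts):
    # Eq. (18): sum over subsets of at most floor(n/2) "barred" indices.
    n = len(ts)
    total = 0
    for m in range(n // 2 + 1):
        for barred in combinations(range(n), m):
            term = 1
            for j in range(n):
                term *= (1 - ts[j]) if j in barred else ts[j]
            total += term
    return total

for ts in product([0, 1], repeat=3):
    assert polynomial_activation(ts) == step_activation(ts)
print("Eq. (18) matches the step activation on all binary inputs.")
```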
The discrete nature of binary neural networks makes them even more difficult to train with conventional classical methods, which are, as we have seen, typically based on gradient descent. Adiabatic quantum training completely avoids this issue, as it can be done using the same procedure we outlined above for quasi-continuous neural networks.
Since the outputs are binary (and assuming that the labels \(y\) are binary as well), one can use a simpler linear loss function,
\[\mathcal{L} = \sum_{a}(-1)^{y_{a}}Y(x_{a}). \tag{20}\]
With such a loss function, points \(x_{a}\) with either label, \(y_{a}=0\) or \(y_{a}=1\), are penalised by one unit relative to a correct prediction whenever \(Y(x_{a})\neq y_{a}\). (That is, \((y_{a},Y)=(0,1)\) is incorrect and contributes \(\Delta\mathcal{L}=1\), versus \((0,0)\), which contributes \(\Delta\mathcal{L}=0\); likewise, \((1,1)\) is correct and contributes \(\Delta\mathcal{L}=-1\), versus \((1,0)\), which contributes \(\Delta\mathcal{L}=0\).)
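In code this loss is a one-liner; the sketch below is a minimal illustration assuming binary predictions and labels held in plain Python lists.

```python
# Hedged sketch of the linear loss of Eq. (20) for binary labels and outputs.
def linear_loss(predictions, labels):
    # A correct y=1 prediction lowers the loss by 1; a wrong y=0 one raises it.
    return sum((-1) ** y * p for p, y in zip(predictions, labels))

# Example: two correct predictions and one incorrect one.
print(linear_loss([1, 0, 1], [1, 0, 0]))  # -1 + 0 + 1 = 0
```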
Figure 9: Dataset and predictions from an adiabatically-trained binary neural network. The predictions are generated using the weights determined by the most likely outcome after the final measurement.
To test this approach, we prepare a dataset of images with \(2\times 2\) binary pixels, labelling them with \(y=1\) (signal) if there are two pixels set to 1, one directly above the other, and \(y=0\) (background) otherwise. We select seven signal and seven background samples to balance the dataset. We then split the dataset into 5 (signal) + 5 (background) training images to be included in the loss function and \(2\,+\,2\) test images to check the generalisation properties of the trained model. The selection and splitting are done randomly from the 16 possible binary images. The resulting train/test datasets are displayed in Fig. 9.
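A minimal sketch of this dataset construction is given below; the flat pixel ordering \((p_{00},p_{01},p_{10},p_{11})\) and the random seed are illustrative assumptions.

```python
# Hedged sketch of the 2x2 binary-pixel dataset described above.
import itertools
import random

def label(img):
    # Signal (y = 1): a pixel is 1 with the pixel directly above it also 1,
    # i.e. the left column (p00, p10) or the right column (p01, p11) is on.
    return 1 if (img[0] and img[2]) or (img[1] and img[3]) else 0

images = list(itertools.product([0, 1], repeat=4))   # all 16 possible images
signal = [im for im in images if label(im)]          # 7 signal images
background = [im for im in images if not label(im)]  # 9 background images

random.seed(0)
random.shuffle(signal)
random.shuffle(background)
background = background[:7]          # balance the dataset to 7 + 7

train = signal[:5] + background[:5]  # 5 + 5 training images
test = signal[5:] + background[5:]   # 2 + 2 test images
```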
For the binary neural network, we choose one with two layers, with the first having four inputs and two outputs and the second having two inputs and one output. The total number of weights, which are in one-to-one correspondence with the qubits, is 10.
To train the network we use qibo to simulate an adiabatic computation as in the previous section, with \(\hat{H}\) now determined by substituting into the loss function \(\mathcal{L}\) the expression for the weights in terms of qubits, and the polynomial representation of the step function in Eq. (18). The predictions generated by the most likely weights after the final measurement are shown in Fig. 9. They are 100% accurate in both the training and the test sets. The probability of obtaining these perfectly accurate weights in the final measurement is 18%. To assess the efficiency of the training this can be compared with the portion of the space of weights that generates such predictions, which is 0.2%.
Since the best values of the weights are obtained with the highest probability but not with certainty, it is profitable to perform several runs of the adiabatic computation and select the one that results in the highest accuracy in the training set. In Fig. 10, we show how the accuracy of the trained network on both the training and the test sets improves with the number of runs. To obtain it, we generate a pool of 1000 sets of trained weights, distributed according to the final state of the adiabatic evolution before the final measurement. For each value of the number of runs \(n\) shown in Fig. 10, we select \(n\) sets of weights from the pool and pick the maximum accuracy. This process is repeated 1000 times, and the average and standard deviation of the resulting accuracies are displayed in the figure.
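The best-of-\(n\) curves can be reproduced from any such pool with a simple bootstrap; in the sketch below the pool of accuracies is a random placeholder rather than actual samples from the adiabatic evolution.

```python
# Hedged sketch of the best-of-n estimate behind Fig. 10: draw n sets of
# trained weights from a pool, keep the best training accuracy, and repeat.
import numpy as np

rng = np.random.default_rng(0)
pool_accuracies = rng.uniform(0.5, 1.0, size=1000)  # placeholder pool

def best_of_n(accs, n, repeats=1000):
    draws = rng.choice(accs, size=(repeats, n), replace=True)
    best = draws.max(axis=1)
    return best.mean(), best.std()

for n in (1, 2, 5, 10):
    mean, std = best_of_n(pool_accuracies, n)
    print(f"n = {n:2d}: accuracy {mean:.3f} +/- {std:.3f}")
```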
We compare these results with the accuracy of a classical analogue trained using the Adam gradient-descent algorithm. To construct this analogue, we replace the step functions with sigmoids, replace the binary weights with continuous ones, and add a penalty term of the form \(w^{2}(w-1)^{2}\) to the loss function for every weight. The effect of this penalty term is to drive the weights towards 0 or 1. The classical values displayed in Fig. 10 correspond to the same process as for the quantum ones described above, using a pool of 1000 values generated through 1000 classical training runs.
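A minimal sketch of the regularised classical loss, assuming a PyTorch model with sigmoid activations and no biases, and treating the penalty coefficient as a free choice (the text does not specify one):

```python
# Hedged sketch: penalty term driving continuous weights towards 0 or 1.
import torch

def regularised_loss(base_loss, model, lam=1.0):
    # lam is an assumed coefficient; the penalty w^2 (w - 1)^2 is applied
    # to every weight, vanishing exactly at the binary values 0 and 1.
    penalty = sum((w ** 2 * (w - 1) ** 2).sum() for w in model.parameters())
    return base_loss + lam * penalty
```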
The quantum training exhibits better performance and generalisation, with the accuracy in both the training and test sets quickly approaching 100%, while the classical training tends to get stuck in local minima that lead to accuracies of around 80% in the training set, with lower ones in the test set, indicating poor generalisation.
### Comparative estimates of scaling
The different approaches to encoding the loss function presented here incur different computational costs in calculating the target Hamiltonian \(\hat{H}\), and different gate complexities in the quantum circuit that implements
Figure 10: Binary neural network accuracies in the training (left) and test (right) sets from the weights generated by running the training several times and selecting those with the best performance in the training set as a function of the number of runs. The central lines indicate the average accuracy value, and the bands show the 1-standard deviation interval (both computed by repeating the process 1000 times for each value of the number of runs).
the adiabatic time evolution.
It is worth comparing the different approaches to see how they scale with meta-parameters, e.g. the number of hidden layers or the total number of qubits. To do this, we assume that \(\hat{H}\) is decomposed as a polynomial in Pauli matrices in order to encode it in a time-evolution circuit. The number of gates in the circuit is then bounded from above by a quantity proportional to the number of terms \(T\) in this polynomial multiplied by its degree \(D\).
A Hamiltonian for a system of \(N_{q}\) qubits is in general a \(2^{N_{q}}\times 2^{N_{q}}\) matrix. Thus, for the generic "matrix approach", one needs to compute \(2^{2N_{q}}\) quantities in the preparation stage of the calculation. The decomposition of \(\hat{H}\) in terms of Pauli matrices will thus require \(2^{2N_{q}}\) matrix multiplications and trace operations. Finally, the resulting polynomial in Pauli matrices will have degree \(D=2^{2N_{q}}\) and roughly \(T=2^{2N_{q}}\) terms, so the number of gates scales roughly as \(2^{4N_{q}}\). However, the loss functions of neural networks typically lead to a very sparse \(\hat{H}\), so there is room for significant improvement on these scalings.
Using the "Pauli-spin method" is one possible strategy to take advantage of sparsity. The maximum number of terms in \(\hat{H}\) is then several chains of length \(2^{N_{q}}\) of identity or Pauli \(Z\) matrices. One needs to compute the coefficient to each of these chains in \(\hat{H}\), so the maximum number of quantities to compute in this approach is a factor \(2^{N_{q}}\) smaller than in the general case. In practice, this number might be much smaller. Moreover, these quantities are computed directly by replacing the binary encoding of the weights with the loss function, with no need for decomposition of \(\hat{H}\) into a basis involving matrix multiplications and traces. The degree of the \(\hat{H}\) polynomial is, in this case, independent of the number of qubits and increases with the number of layers of the network, but not with the number of weights per layer.
Thus the scaling of both \(T\) and \(D\) improves significantly for relatively shallow networks in the Pauli-spin approach. To simplify the discussion, we consider a neural network with \(L\) layers, all having a polynomial activation with degree \(d\), and an \(M\times M\) matrix of weights with no biases. The number of terms in the \(\hat{H}\) polynomial is then
\[T\lesssim M^{d^{L}}. \tag{21}\]
This can be shown by induction on \(L\). For a network with a single layer, \(L=1\), the number of terms is bounded by the number of terms in a degree-\(d\) polynomial in \(M\) variables:
\[T<\binom{d+M}{d}\stackrel{{ M\rightarrow\infty}}{{\sim}}M^{d}. \tag{22}\]
Similarly, adding a layer to an \((L-1)\)-layer network gives a number of terms bounded by the number of terms in a degree-\(d\) polynomial whose variables are themselves polynomials with \(M^{d^{L-1}}\) terms, which yields at most \((M^{d^{L-1}})^{d}=M^{d^{L}}\) terms. This bound shows that the number of terms, and thus the gate complexity, is polynomial in \(M\) in this approach. On the other hand, the scaling with the number of layers is much worse: it is doubly exponential. It will thus quickly saturate the generic bound of \(2^{N_{q}}\) for Pauli-spin encoded functions, so the latter is the stronger one for deep neural networks.
In the case of binary neural networks with step-function activations, the degree of the activation polynomials is \(d=M\). Thus, the advantages over classical algorithms provided by their quantum training are obtained at the price of an \(M^{M}\) scaling of the number of terms with the number of weights per layer. A potential source of improvement on this front is binary activations with a lower degree or a lower number of terms. An example of such an activation is one that requires all inputs of the layer to be 1 for the output to be 1, and is 0 otherwise; this corresponds to the single monomial \(T_{1}\cdots T_{M}\).
## 5 Conclusions
Neural networks are ubiquitous optimisation tools in science and everyday tasks. The most time- and resource-consuming part of their design is the training process. In this study, we have demonstrated the potential of Adiabatic Quantum Computing as a powerful tool for training neural networks. Our work addresses the computational challenges encountered when classically training NNs. We have demonstrated that AQC can effectively be implemented on gate quantum computers to train neural networks with continuous and discrete weights, as well as so-called binary networks. Our findings indicate that AQC offers a robust and efficient approach to finding the global minimum of the loss function, thereby optimising the NN. It is then possible to extract the optimally trained network parameters for deployment as a classical neural network.
The proposed methodology, involving the "matrix method" and the "Pauli-spin method", effectively encodes and solves this optimisation problem. Leveraging the qibo package for fast and accurate quantum-circuit evaluation, our approach is scalable and practical for near-term quantum devices.
Compared to previous quantum approaches, which were based on quantum annealing using a transverse Ising model Hamiltonian, the AQC approach that we have proposed in this paper enhances the expressivity of the trained neural networks and expands the applicability of quantum training methods to the gate quantum computing paradigm.
Extending this methodology to more complex neural network architectures and loss functions would be of interest in expanding its applicability to broader classes of problems. Thus, this approach opens up new avenues for harnessing the computational prowess of quantum computation in the realm of machine
learning, particularly in the training of neural networks.
**Acknowledgements:** We would like to thank Luca Nutzicati for helpful discussions and Stefano Carrazza and Matteo Robbiati for help with qibo. S.A. and M.S. are supported by the STFC under grant ST/P001246/1. J.C.C. is supported by the Spanish Ministry of Science and Innovation, under the Ramon y Cajal program.
|
2303.02243 | Neural Operator Learning for Long-Time Integration in Dynamical Systems with Recurrent Neural Networks | Deep neural networks are an attractive alternative for simulating complex dynamical systems, as in comparison to traditional scientific computing methods, they offer reduced computational costs during inference and can be trained directly from observational data. Existing methods, however, cannot extrapolate accurately and are prone to error accumulation in long-time integration. Herein, we address this issue by combining neural operators with recurrent neural networks, learning the operator mapping, while offering a recurrent structure to capture temporal dependencies. The integrated framework is shown to stabilize the solution and reduce error accumulation for both interpolation and extrapolation of the Korteweg-de Vries equation. | Katarzyna Michałowska, Somdatta Goswami, George Em Karniadakis, Signe Riemer-Sørensen | 2023-03-03T22:19:23Z | http://arxiv.org/abs/2303.02243v3 | Neural Operator Learning for Long-Time Integration in Dynamical Systems with Recurrent Neural Networks
###### Abstract
Deep neural networks are an attractive alternative for simulating complex dynamical systems, as in comparison to traditional scientific computing methods, they offer reduced computational costs during inference and can be trained directly from observational data. Existing methods, however, cannot extrapolate accurately and are prone to error accumulation in long-time integration. Herein, we address this issue by combining neural operators with recurrent neural networks to construct a novel and effective architecture, resulting in superior accuracy compared to the state-of-the-art. The new hybrid model is based on operator learning while offering a recurrent structure to capture temporal dependencies. The integrated framework is shown to stabilize the solution and reduce error accumulation for both interpolation and extrapolation of the Korteweg-de Vries equation.
## 1 Introduction
Dynamical systems modelling is formally concerned with the analysis, forecasting, and understanding of the behavior of ordinary or partial differential equation (ODE/PDE) systems or similar iterative mappings that represent the evolution of a system's state. Modern machine learning methods have opened up a new area for building fast emulators for solving parametric ODEs and PDEs. One class of such frameworks is neural operators, which have been gaining popularity as surrogate models for dynamical systems in recent years (Lu et al., 2021; Goswami et al., 2022). While previous research on neural architectures has primarily focused on learning mappings between finite-dimensional Euclidean spaces (Raissi et al., 2019; Samaniego et al., 2020), neural operators learn mappings between infinite-dimensional Banach spaces (Goswami et al., 2022).
The two popular neural operators which have shown promising results so far are the deep operator network (DeepONet) (Lu et al., 2021) introduced in 2019 and the Fourier Neural operator (FNO) (Li et al., 2020) introduced in 2020. Both operator networks have been applied to solving complex problems for real-world applications in diverse scientific domains, including medicine (Goswami et al., 2022), physics (Lin et al., 2021; Wen et al., 2022), climate (Kissas et al., 2022; Bora et al., 2023; Pathak et al., 2022) and materials science (Goswami et al., 2022; You et al., 2022; Rashid et al., 2022). Regardless of the impressive results presented in these works, the problem of approximating system's behavior over a long-time horizon employing neural operators remains underexplored (Goswami et al., 2023; Wang and Perdikaris, 2023).
To address the issues of operator learning over a long-time horizon, some recent works suggest employing physics-informed DeepONets (Wang and Perdikaris, 2023), using transfer learning and later fine-tuning the pre-trained DeepONet with sparse measurements in the extrapolated zone or with a PDE loss (Zhu et al., 2022), and a hybrid inference approach, integrating neural operators with high-fidelity solvers (Oommen et al., 2022). These approaches, however, require either the knowledge of the underlying PDE or sparse labels in extrapolation.
In this work, we introduce a new framework to handle the challenges of long-time integration. The framework employs a neural operator that learns the system's state representation at any timestep and a recurrent neural network (RNN) that is placed after the architecture of the neural operator to learn temporal dependencies between the states. This extension of a neural operator can help increase the forecast accuracy over a long-time prediction horizon, since RNNs preserve a hidden state that carries information from the previous timesteps forward, thus enabling the model to capture and utilize temporal dependencies.
We demonstrate the performance of the proposed framework on the non-linear Korteweg-de Vries (KdV) equation and observe a reduction in the error accumulation over time, leading to a more accurate and stable prediction. In our tests, we employ three RNN architectures: standard RNN, long short-term memory (LSTM), and gated recurrent units
(GRU). Furthermore, we analyse the benefits and challenges of training the two components of the proposed framework separately (two-step training) and simultaneously. While combining the operator architecture with the recurrent network sacrifices the resolution invariance of the operators, this property can still be exploited during the operator-training stage when only limited grid data is available. The variants of the framework are tested on interpolation and extrapolation tasks, in both cases yielding more accurate and stable long-time predictions than their vanilla counterparts.
## 2 Recurrent networks with neural operators
The proposed architecture consists of a deep neural operator followed by a RNN and a feed-forward layer which brings the output back to the dimension of the solution. In this work, we have considered the two most studied neural operators: the deep operator network (DeepONet) and the Fourier neural operator (FNO), combined with one of the three most successful RNNs: the traditional recurrent neural network (non-gated), long short term memory (LSTM), and gated recurrent units (GRU). The architecture is set up such that the outputs of the neural operator are fed as inputs of the RNN. The neural operator learns the solution operator, while the RNN processes the outputs of the neural operator as temporal sequences. A schematic representation of the proposed approach is shown in Figure 1.
The aim of the neural operator is to learn the mapping between two infinite-dimensional functional spaces, where we learn the evolution of a dynamical system from a given initial condition. Such operator mapping can be represented as:
\[\mathcal{N}:u(x,t=0)\rightarrow[u(x,t_{1}),...,u(x,t_{n})],\]
where \(n\) is the total number of temporal discretization points that define the full trajectory and \(\mathcal{N}\) is the nonlinear solution operator of the PDE. The RNN is employed to treat the outputs of the operator as a sequence. The work is motivated by the fact that in feed-forward neural networks (including neural operators) all the outputs are independent, which is not appropriate for sequence learning. Recurrent networks address this issue through feedback loops and hidden states, which allow information to be passed forward and capture the temporal dependencies between the outputs.
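As a concrete illustration, the following PyTorch sketch composes a generic operator with a recurrent head, mirroring Figure 1; the `operator` interface, layer sizes, and module names are our assumptions rather than the paper's exact implementation.

```python
# Hedged sketch of the operator + RNN composition (cf. Figure 1). The
# `operator` is any module mapping an initial condition of shape
# [batch, solution_size] to a trajectory [batch, n_steps * solution_size].
import torch
import torch.nn as nn

class OperatorRNN(nn.Module):
    def __init__(self, operator, n_steps, solution_size,
                 hidden=128, rnn_cls=nn.GRU):
        super().__init__()
        self.operator = operator
        self.n_steps, self.solution_size = n_steps, solution_size
        self.rnn = rnn_cls(solution_size, hidden, batch_first=True)
        self.head = nn.Linear(hidden, solution_size)

    def forward(self, u0):
        out = self.operator(u0)  # one-shot operator prediction
        seq = out.view(-1, self.n_steps, self.solution_size)
        hidden_states, _ = self.rnn(seq)  # capture temporal dependencies
        return self.head(hidden_states)   # back to the solution dimension
```

Swapping `rnn_cls` between `nn.RNN`, `nn.GRU`, and `nn.LSTM` reproduces the three recurrent variants considered below.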
### Neural operator learning
Neural operators learn nonlinear mappings between infinite-dimensional functional spaces on bounded domains, providing a unique simulation framework for the real-time prediction of multi-dimensional complex dynamics. Once trained, such models are discretization invariant, which means they share the same network parameters across different parameterizations of the underlying functional data. In this study, we consider the performance of two neural operators, DeepONet proposed in (Lu et al., 2021) and FNO proposed in (Li et al., 2020) with our novel recurrent-networks-integrated neural operator architecture (Figure 1).
**FNO**: The Fourier Neural Operator is based on Green's theorem and parameterizes the integral kernel in Fourier space. The input to the network is lifted to a higher dimension, then passed through several Fourier layers before being projected back to the original dimension with a feed-forward layer. Each Fourier layer involves a forward fast Fourier transform (FFT), followed by a linear transformation of the low Fourier modes and then an inverse FFT. Finally, the output is added to a weight matrix, and the sum is passed through an activation function to introduce nonlinearity. Different variants of FNO have been proposed, depending on the dimensions over which the integral kernel is replaced by the convolution operator defined in Fourier space. A numerical problem that is 1D in space and 1D in time could be handled using either FNO-2D, which employs Fourier convolutions through space and time to learn the dynamics directly over multiple timesteps, given an initial condition, or FNO-1D, which performs Fourier convolution in space and uses a recurrent time-marching approach to propagate the solution in time. In this work, we employ FNO-2D, since time-marching approaches are known to be prone to error accumulation in long time-series prediction.
**DeepONet**: The deep operator network is based on the universal approximation theorem for operators (Chen and Chen, 1995) and employs two deep neural networks (branch and trunk network) to learn a family of PDEs and provide a discretization-invariant emulator, which allows for fast inference and low generalization error (Lanthaler et al., 2022). The branch network encodes the input function (the initial or boundary conditions, constant or variable coefficients, source terms) at fixed sensor points while the trunk network encodes the information related to the spatio-temporal coordinates of the output function. The output embeddings of the branch and the trunk networks are multiplied element-wise and summed over the neurons in the last layer of the networks (Einstein summation). Once the network is trained, it can be employed to interpolate the solution at any spatial and temporal location within the domain.
### The choice of the recurrent neural network
Feed-forward neural networks lack explicit mechanisms to learn dependencies between outputs, which is a fundamental problem when learning temporal sequences. Recurrent neural networks are an extension over traditional neural networks designed to improve sequence learning by preserving a hidden state which carries information from the previous time steps. During training, this hidden state is
updated with the current input and the previous hidden state. Another way of learning temporal sequences is through autoregressive methods, which recursively update the input function using the outputs obtained at the previous time steps. This approach, however, is known to result in high error accumulation in long-term time series forecasting.
Traditional RNNs are prone to vanishing gradients in long sequences, _i.e._, the gradients used in training approach zero as they are multiplied at each time step to adjust the weights. Therefore, we also investigate the proposed architecture combined with GRU and LSTM. GRUs and LSTMs address the problem of vanishing gradients by introducing a set of gates, implemented as sigmoid functions, that allow or block the flow of long-term information through the network. Since the name RNN can refer both to the whole class of recurrent neural networks and to the specific architecture, we further use the term _simple RNN_ when referring to the non-gated recurrent architecture and _RNN_ for all types of recurrent neural networks, including simple RNN, GRU, and LSTM.
### Simultaneous vs. two-step training
We distinguish two modes of training the networks in the proposed architecture: simultaneous training, in which at each training step the weights of both networks are updated in a single backward pass, and two-step training, where we first train the neural operator network and then the RNN.
The two-step approach allows us to take advantage of the resolution-invariance property of neural operators during training. While the RNNs require evenly spaced data, the operator alone can be trained in a discretization-invariant manner, _e.g._, to increase the number of training samples when multi-resolution data is available. The neural operator is trained until its validation error no longer changes substantially between consecutive epochs, and is then used in inference to produce the training data for the RNN. The RNN learns a mapping between the outputs of the neural operator and the ground truth, which is equivalent to freezing the weights of the neural operator and continuing the training. However, this approach can also cause additional error accumulation, since the approximation error of the neural operator layers can no longer be reduced once the RNN is being trained, which is not the case in the simultaneous training mode.
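A hedged sketch of the two-step procedure, with a generic data loader, hypothetical epoch counts, and shapes assumed compatible between the operator output and the recurrent head:

```python
# Hedged sketch of two-step training: the operator is trained first and
# frozen; the recurrent head then maps operator outputs to the ground truth.
import torch

def train_two_step(operator, rnn_head, loader, epochs_op=100, epochs_rnn=100):
    mse = torch.nn.MSELoss()

    opt = torch.optim.Adam(operator.parameters())
    for _ in range(epochs_op):  # step 1: train the neural operator alone
        for u0, target in loader:
            opt.zero_grad()
            mse(operator(u0), target).backward()
            opt.step()

    for p in operator.parameters():  # freeze the operator weights
        p.requires_grad_(False)

    opt = torch.optim.Adam(rnn_head.parameters())
    for _ in range(epochs_rnn):  # step 2: train the RNN on operator outputs
        for u0, target in loader:
            opt.zero_grad()
            with torch.no_grad():
                out = operator(u0)  # inference only, as described above
            mse(rnn_head(out), target).backward()
            opt.step()
```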
In the remainder of the paper, we refer to the architectures trained in the simultaneous mode as DON-RNN and FNO-RNN, and to the equivalent architectures trained in the two-step mode as DON+RNN and FNO+RNN.
Figure 1: The proposed architecture. The deep neural operator (e.g., DeepONet or FNO) maps the solution operator from the initial condition \(u_{t=0}\) to solutions at later timesteps with a fixed time interval \(\Delta t\). These solutions are then represented in a temporally sequential manner, _i.e._, reshaped to [batch_size, nr_timestamps, solution_size], and processed by one of the RNNs. The chosen RNN lifts the dimension to the specified number of neurons and returns all hidden states. The last layer is a fully-connected shallow neural network bringing the embedding to the original size of the solution.
## 3 Results and discussion
### Problem statement and data generation
To evaluate the performance of the proposed architecture, we consider the Korteweg-de Vries (KdV) equation defined in Equation 1, a nonlinear dispersive PDE that describes the evolution of small-amplitude, long-wavelength waves in a variety of physical systems, such as shallow water waves, ion-acoustic waves in plasmas, and certain types of nonlinear optical waves.
\[u_{t}-\eta uu_{x}+\gamma u_{xxx}=0, \tag{1}\]
where \(u\) is the amplitude of the wave, \(\eta\) and \(\gamma\) are chosen real-valued scalar parameters, \(x\) is a spatial and \(t\) is the time dimension. The subscripts denote partial derivatives with respect to that variable.
The chosen initial condition, \(u(x,0)\) is a sum of two solitons (solitary waves), _i.e._, \(u=u_{1}+u_{2}\). A single soliton is expressed as:
\[u_{i}(x,0)=2k_{i}^{2}\,\text{sech}^{2}\left(k_{i}\left((x+\frac{P}{2}-Pd_{i})\%P-\frac{P}{2}\right)\right), \tag{2}\]
where \(\text{sech}\) stands for the hyperbolic secant (\(\frac{1}{\cosh}\)), \(P\) is the period in space, \(\%\) is the modulo operator, \(i\in\{1,2\}\), and \(k\in[0.5,1.0]\) and \(d\in[0,1]\) are coefficients that determine the height and the location of the peak of a soliton, respectively.
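The initial conditions can be sampled directly from Eq. (2); in the sketch below, the coefficients are drawn uniformly from the stated ranges, with the seed and sampling details as illustrative assumptions.

```python
# Hedged sketch of the two-soliton initial condition of Eq. (2).
import numpy as np

def soliton(x, k, d, P=10.0):
    arg = k * ((x + P / 2 - P * d) % P - P / 2)
    return 2 * k**2 / np.cosh(arg) ** 2  # 2 k^2 sech^2(...)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50, endpoint=False)  # 50 grid points, dx = 0.2
k1, k2 = rng.uniform(0.5, 1.0, size=2)
d1, d2 = rng.uniform(0.0, 1.0, size=2)
u0 = soliton(x, k1, d1) + soliton(x, k2, d2)  # u = u_1 + u_2
```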
To train the recurrent-networks-integrated neural operator, we generated \(N=5{,}000\) initial conditions, \(u(x,t=0)\), and the evolution dynamics was modeled using the midpoint method. The simulation takes place on a 1D domain \(\Omega=[0,10]\) discretized with \(50\) uniformly spaced grid points
Figure 3: E2: Extrapolation performance. The models are trained on partial trajectories (\(50\) steps in \(t\in[0,1.25]\)) and tested on full trajectories (\(200\) steps in \(t\in[0,5]\)). The error is given as the MAE over all spatial points \(x\) for each time step \(t\) on the test data. The enhanced models are compared to vanilla neural operators.
Figure 2: E1: Interpolation performance. The models are both trained and tested on full trajectories (\(200\) steps in \(t\in[0,5]\)). The error is given as the MAE over all spatial points \(x\) for each time step \(t\) on the test data. The enhanced models are compared to vanilla neural operators.
(\(\Delta x=0.2\)) in the time interval \(t=[0,5]\) with \(\Delta t=0.025\), thereby discretizing the temporal domain into \(201\) points. To form the training data set, we collect solution snapshots \(u(x,t)\) at \(n_{t}=200\) time steps.
The generated labeled dataset with \(N\) unique realizations is split into training and testing sets such that the number of training samples, \(N_{train}=0.9\times N\) and the number of testing samples, \(N_{test}=0.1\times N\). Furthermore, \(10\)% of the training set is used explicitly for validation during the training process. The detailed architectures of the DeepONet and FNO in this study are discussed in the Appendix B.
To test the proposed framework, we carried out the following two experiments:
* **E1:** Training and testing on the mapping \(\mathcal{N}:u(x,t=0)\to u(x,t)\), where \(t\in[0.025,5]\), where \(\mathcal{N}\) denotes the integrated model.
* **E2:** Training to learn the mapping \(\mathcal{N}:u(x,t=0)\to u(x,t)\), where \(t\in[0,1.25]\), and testing on \(t\in[1.25,5]\) by recursively updating the initial condition.
For each of the experiments, we have considered the two training processes discussed in subsection 2.3, and have carried out an analysis for the combination of two neural operators (DeepONet and FNO) and recurrent networks (simple RNN, LSTM, and GRU). Furthermore, we have investigated the improvement in the accuracy of the proposed framework against the accuracy of vanilla neural operators.
### E1: Interpolation performance
The aim of this experiment is to learn, from a given initial condition, the dynamics of the system over the time interval \(t\in[0.025,5]\), discretized with \(200\) temporal points. The models are trained and tested on complete trajectories (\(200\) steps in time \(t\in[0.025,5]\)). In Table 1, we show the mean absolute error (MAE), root mean square error (RMSE), and relative squared error (RSE) on the \(N_{test}\) test cases for a total of \(10\) different architectures. Overall, the integration of recurrent networks with neural operators improves the accuracy and stability of prediction for all architectures. The error accumulation over time for all the architectures is presented in Figure 2.
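For reference, a sketch of these metrics, assuming predictions and targets of shape [samples, timesteps, grid points] and the conventional definition of the relative squared error (the exact definition used is not spelled out in the text):

```python
# Hedged sketch of the error metrics reported in Tables 1 and 2.
import numpy as np

def metrics(pred, true):
    err = pred - true
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    # Conventional RSE: squared error relative to variance around the mean.
    rse = (err ** 2).sum() / ((true - true.mean()) ** 2).sum()
    return mae, rmse, rse
```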
In particular, for both neural operators, the lowest errors in the two-step training are obtained by the LSTM and GRU extensions (see Table 1). Furthermore, the growth rate of the error is significantly reduced after integrating the recurrent networks with neural operators (see Figure 2a). The simultaneous training mode shows an advantage over the two-step training for both operators (see Figure 2b), as the accumulated error is drastically reduced, which eventually reduces the slope of the error growth. We report a slight increase in the overall error of the simultaneously trained RNN extension of the FNO architecture compared to the two-step training (see Table 1). However, this increase is due to the high error in the first few time steps (Figure 2b). A slight increase in error is also observed in the RNN extension of DeepONet when trained simultaneously. Since a high error followed by a rapid decrease in the first few timesteps is observed in all trained models, we suspect that the mechanism causing it is model-independent.
For illustration purposes, the prediction plots for all the architectures are shown for four time snapshots \(t=\{1.25,2.5,3.75,5\}\) from a given initial condition in Figure 4. Vanilla DeepONet introduces wiggles to the solution already at time \(t=2.5\), and it can be seen that with the RNN extensions, these artifacts become less pronounced, to the point where they disappear completely when the Deep
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Two-step training** & MAE & RMSE & RSE \\ \hline
DeepONet & 0.035 & 0.105 & 0.034 \\
DON+RNN & 0.026 & 0.093 & 0.026 \\
DON+GRU & 0.019 & 0.083 & 0.021 \\
DON+LSTM & 0.019 & 0.085 & 0.022 \\ \hline
FNO & 0.027 & 0.052 & 0.008 \\
FNO+RNN & 0.020 & 0.032 & 0.003 \\
FNO+GRU & 0.012 & 0.020 & **0.001** \\
FNO+LSTM & **0.010** & **0.018** & **0.001** \\ \hline
**Simultaneous training** & MAE & RMSE & RSE \\ \hline
DON-RNN & 0.020 & 0.032 & 0.003 \\
FNO-RNN & 0.023 & 0.036 & 0.004 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: E1: Interpolation. Performance of models trained and tested on full trajectories (\(t\in[0,5]\)).
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Two-step training** & MAE & RMSE & RSE \\ \hline
DeepONet & 0.043 & 0.131 & 0.052 \\
DON+RNN & 0.046 & 0.141 & 0.061 \\
DON+GRU & 0.042 & 0.125 & 0.047 \\
DON+LSTM & 0.041 & 0.125 & 0.047 \\ \hline
FNO & 0.089 & 0.215 & 0.141 \\
FNO+RNN & 0.064 & 0.172 & 0.090 \\
FNO+GRU & 0.036 & 0.112 & 0.038 \\
FNO+LSTM & **0.035** & **0.100** & **0.030** \\ \hline
**Simultaneous training** & MAE & RMSE & RSE \\ \hline
DON-RNN & 0.039 & 0.110 & 0.036 \\
FNO-RNN & 0.090 & 0.243 & 0.179 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: E2: Extrapolation. Performance of models trained on partial trajectories (\(50\) steps in \(t\in[0,1.25]\)) and tested on the full trajectories in a recursive manner (\(200\) steps in \(t\in[0,5]\)), where the output of the last prediction step is the input for the next prediction step.
Figure 4: E1: Interpolation performance on one representative sample for all models trained on \(t\in[0,5]\). The full trajectory \(t\in[0.025,5]\) is predicted in one shot. Each subplot shows predictions (red dashed line) and the ground truth (blue solid line) row-wise at \(t=\{1.25,2.5,3.75,5\}\) and column-wise for all models: vanilla DeepONet, DeepONet+RNN, GRU, LSTM trained in two steps, vanilla FNO, FNO+RNN, GRU, LSTM trained in two steps, and simultaneously trained DON-RNN and FNO-RNN.
Figure 5: E2: Extrapolation performance on one representative sample for all models trained on \(t\in[0,1.25]\). The full trajectory \(t\in[0.025,5]\) is covered recursively as short-interval one-shot predictions, _i.e_., as \(t=0\to t\in[0.025,1.25]\), \(t=1.25\to t\in[1.275,2.5]\), up to \(t=5\), where the last predicted solution is used as the initial condition for the next one-shot prediction over the interval \(\Delta t=1.25\). Each subplot shows predictions (red dashed line) and the ground truth (blue solid line) for the last solution (the next initial condition) row-wise at \(t=\{1.25,2.5,3.75,5\}\) and column-wise for all models: vanilla DeepONet, DeepONet+RNN, GRU, LSTM trained in two steps, vanilla FNO, FNO+RNN, GRU, LSTM trained in two steps, and simultaneously trained DON-RNN and FNO-RNN.
ONet is trained simultaneously with a simple RNN. FNO does not seem to suffer from similar instabilities, although it is noticeable that the model does not fit well at the wave peaks in later timesteps. This problem is not visible in the predictions by the extended FNOs.
### E2: Extrapolation performance
This experiment aims to test the performance of the integrated framework for extrapolation. To that end, the models are trained to learn the mapping \(\mathcal{N}:u(x,t=0)\to u(x,t)\), where \(t\in[0.025,1.25]\) is discretized into \(50\) temporal points. The model is then tested on predicting the dynamics over \(t\in[0,5]\) for a given initial condition. During testing, the prediction is performed recursively: \(50\) time steps are predicted in one shot, and in each iteration the last predicted observation becomes the input (initial condition) for the next \(50\) time steps.
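A minimal sketch of this recursive rollout, where `model` stands for any of the trained architectures mapping an initial condition to the next 50 timesteps:

```python
# Hedged sketch of the recursive extrapolation protocol of E2.
import numpy as np

def rollout(model, u0, n_iters=4):
    trajectory, u = [], u0
    for _ in range(n_iters):
        pred = model(u)  # one-shot prediction, shape [50, grid_points]
        trajectory.append(pred)
        u = pred[-1]     # last step becomes the next initial condition
    return np.concatenate(trajectory, axis=0)  # 200 steps in total
```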
The error metrics for all the architectures are presented in Table 2. Similarly to the results obtained in experiment E1, we observe lower errors for the RNN extensions of the neural operators in all cases, except for the simultaneously trained FNO-RNN architecture (Table 2). Furthermore, gated recurrent networks such as GRU and LSTM show advantages over the simple RNN. Figure 3 shows that all models suffer from significant error accumulation.
In all cases, the neural operators integrated with recurrent networks trained in the two-step procedure show lower error accumulation over time, especially for GRU and LSTM (Figure 3(a)). A significant improvement is observed in the accuracy of FNO integrated with the recurrent networks, where the slope of the error growth is also reduced. However, we observe that in temporal extrapolation, the recurrent-integrated architecture of DeepONet performs similarly to the vanilla DeepONet in the first two iterations and accumulates a larger error after the third iteration. For simultaneous training (Figure 3(b)), the DON-RNN architecture has lower error and slower growth than the vanilla DeepONet in the last iteration, despite a pronounced error increase at the beginning of each iteration. Furthermore, the FNO-RNN architecture shows performance similar to the vanilla FNO.
To investigate further, we plot the temporal evolution of a representative sample at four time snapshots \(t=\{1.25,2.5,3.75,5\}\) in Figure 5. Overall, we observe that the models struggle to maintain the wave shape in extrapolation. Each row shows the last time step of one iteration of the recursive prediction. While at \(t=1.25\) all models fit the data, at the last time step of the second iteration (\(t=2.5\)) the peaks of the waves are underestimated, with the exception of the simultaneously trained DeepONet with RNN. At \(t=3.75\), the shape of the wave is significantly perturbed for all models except the two-step-trained FNO with GRU and LSTM and the simultaneously trained DeepONet with RNN. Finally, at \(t=5.0\), the shape remains partially captured by these models, but with an offset in the spatial dimension, while the remaining models have lost all resemblance to the ground truth.
## 4 Conclusions
While neural methods have been shown to be promising surrogate models to accelerate the simulation of dynamical systems, integrating these systems over long-time horizons still remains an open challenge. Some recent works propose to address this problem by employing physics-informed deep operator networks, transfer learning, or hybrid approaches, however, these methods suffer from their own limitations.
In this work, we have proposed a new approach to tackle long-time horizon prediction, that combines two existing and successful architectures: neural operators and recurrent neural networks. The resulting model shows improvement in terms of accuracy and stability of prediction.
We have illustrated the advantage of this new architecture for long-time integration by comparing it to the vanilla form of two neural operators, DeepONet and FNO, on the Korteweg-de Vries equation, which is a nonlinear equation for modeling shallow water waves.
Our observations from this study can be summarized as follows:
1. The proposed extension of the neural operators reduces the overall error in both interpolation and extrapolation experiments when compared to vanilla models, as well as the error accumulation over the whole trajectory for all models except for the extrapolation on simultaneously trained FNO-RNN (Tables 1 and 2, Figures 2 and 3).
2. The error growth rate is also reduced for most of the models. In the interpolation tests, the accumulated error practically flattens out for all extensions of neural operators, with the exception of the models trained with a DeepONet in the two-step process, where the error is lower but still shows significant accumulation (Figure 2). In extrapolation, the reduction in the error growth rate is significant for the FNO trained in a two-step process, as well as for the simultaneously trained DON-RNN (Figure 3).
3. The proposed architecture is shown to improve the ability of the operator to maintain the shape of the solution, which prevents some of the error propagation in long-time integration (Figures 4, 5). The stability is more pronounced when the components of the integrated framework are trained simultaneously.
Despite the promising results presented here, we must acknowledge that using the RNN-integrated neural operator architecture to solve long-term prediction problems is still in its infancy. Numerous open issues should be taken into account as potential future research topics. From a theoretical standpoint, it is crucial to gain a better understanding of how approximation errors impact the stability and precision of the proposed methods. This is a key requirement in critical applications where accuracy and convergence guarantees are needed, and where conventional numerical solvers currently remain the default option.
## 5 Acknowledgement
For KM and SRS, this work is based upon the support from the Research Council of Norway under project SFI NorwAI 309834. Furthermore, KM is also supported by the PhD project grant 323338 Stipendiatstilling 17 SINTEF (2021-2023). For SG and GEK, this work was supported by the U.S. Department of Energy, Advanced Scientific Computing Research program, under the Scalable, Efficient and Accelerated Causal Reasoning Operators, Graphs and Spikes for Earth and Embedded Systems (SEA-CROGS) project, DE- SC0023191. Furthermore, the authors would like to acknowledge the computing support provided by the computational resources and services at the Center for Computation and Visualization (CCV), Brown University where all the experiments were carried out.
|